Abstract
Deep learning models for image-to-image translation have established unpaired translation learning. Building on these models, we focus on developing an N-to-N domain translation model that outputs images in N styles from a single content image using only a single trained model.
Training a model based on existing N-to-N domain translation models to be robust to extreme appearance changes tends to incur high computational cost, because these models reuse their generators to compute the cycle consistency loss (content loss). The cycle consistency loss measures the difference in spatial features between the content image and the translated image, and it was essential for establishing unpaired image-to-image translation learning.
Our study proposes a new N-to-N domain translation model, “Multi-CartoonGAN,” which has the potential to learn diverse and large feature mappings with only a small number of training parameters. Multi-CartoonGAN extends CartoonGAN, a one-to-one domain translation model that succeeded in reducing training parameters by utilizing a pre-trained VGG network to calculate the content loss; we develop this model into an N-to-N domain translation model. To handle extreme appearance changes, we implement a new adaptive normalization function: Switch CAdaLIN.