MetalGAN: Multi-domain label-less image synthesis using cGANs and meta-learning / Fontanini, T.; Iotti, E.; Donati, L.; Prati, A. - In: NEURAL NETWORKS. - ISSN 0893-6080. - 131 (2020), pp. 185-200. [10.1016/j.neunet.2020.07.031]
MetalGAN: Multi-domain label-less image synthesis using cGANs and meta-learning
Fontanini, T.; Iotti, E.; Donati, L.; Prati, A.
2020-01-01
Abstract
Image synthesis is currently one of the most actively studied image processing topics in computer vision and deep learning. Researchers have tackled its many challenging sub-problems, such as image quality and size, domain and pose changes, and network architecture. Above all, producing images belonging to different domains with a single architecture is a highly relevant goal for image generation: a single multi-domain network allows greater flexibility and robustness in the image synthesis task than other approaches. This paper proposes a novel architecture and training algorithm that produce multi-domain outputs using a single network. Only a small portion of the dataset is intentionally used, and there are no hard-coded labels (or classes). This is achieved by combining a conditional Generative Adversarial Network (cGAN) for image generation with a meta-learning algorithm for domain switching; we call our approach MetalGAN. The approach has proved appropriate for solving the multi-domain label-less problem, and it is validated on facial attribute transfer using the CelebA dataset.
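The abstract only names the two ingredients (a cGAN for image generation and a meta-learning algorithm for domain switching), so the sketch below is a rough illustration of how such a combination can be wired together: a toy conditional generator/discriminator pair wrapped in a Reptile-style meta-update over per-domain batches. Everything in the sketch is an assumption made for illustration — the network sizes, the Reptile-style outer update, the hyper-parameters, and the names (`Generator`, `Discriminator`, `inner_adaptation`, `meta_step`) are hypothetical and are not taken from the paper.

```python
# Minimal illustrative sketch (not the authors' code): a cGAN generator
# trained with a Reptile-style meta-learning loop so that one network can
# adapt to several image domains. All sizes/hyper-parameters are assumptions.
import copy
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy conditional generator: noise + conditioning vector -> flat image."""
    def __init__(self, z_dim=64, cond_dim=16, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    """Toy conditional discriminator: flat image + conditioning vector -> logit."""
    def __init__(self, cond_dim=16, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + cond_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))

def inner_adaptation(gen, disc, batch, cond, steps=5, lr=1e-4):
    """Adapt clones of the networks to one domain with a few adversarial steps."""
    g, d = copy.deepcopy(gen), copy.deepcopy(disc)
    g_opt = torch.optim.Adam(g.parameters(), lr=lr)
    d_opt = torch.optim.Adam(d.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(batch.size(0), 1)
    zeros = torch.zeros(batch.size(0), 1)
    for _ in range(steps):
        z = torch.randn(batch.size(0), 64)
        fake = g(z, cond)
        # Discriminator step: real vs. generated samples of this domain.
        d_loss = bce(d(batch, cond), ones) + bce(d(fake.detach(), cond), zeros)
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()
        # Generator step: fool the discriminator on this domain.
        g_loss = bce(d(fake, cond), ones)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return g

def meta_step(gen, disc, domain_batches, domain_conds, meta_lr=0.1):
    """Reptile-style outer update: move shared weights toward domain-adapted ones."""
    for batch, cond in zip(domain_batches, domain_conds):
        adapted = inner_adaptation(gen, disc, batch, cond)
        with torch.no_grad():
            for p_meta, p_task in zip(gen.parameters(), adapted.parameters()):
                p_meta += meta_lr * (p_task - p_meta)
```

In this sketch each domain contributes a small batch of flattened 32×32 images and a conditioning vector (standing in for whatever domain information the real model receives, since the paper's approach is label-less); the inner loop adapts cloned networks to that domain with a few adversarial steps, and the outer loop pulls the shared generator weights toward the adapted ones, which is the general spirit of meta-learning a single multi-domain generator.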
| File | Type | License | Size | Format | Access |
|---|---|---|---|---|---|
| 01_2020_NeuralNetworks.pdf | Publisher's version (PDF) | Non-public - private/restricted access | 9 MB | Adobe PDF | Authorized users only (request a copy) |
| MetalGAN.pdf | Post-print | Creative Commons | 65.06 MB | Adobe PDF | Open access |