Towards Latent Space Optimization of GANs Using Meta-Learning / Fontanini, T.; Pratico, C.; Prati, A. - 13231:(2022), pp. 646-657. (Paper presented at the 21st International Conference on Image Analysis and Processing, ICIAP 2022, held in Italy in 2022) [10.1007/978-3-031-06427-2_54].
Towards Latent Space Optimization of GANs Using Meta-Learning
Fontanini T.; Prati A.
2022-01-01
Abstract
The need for very large training datasets has limited the use of Generative Adversarial Networks (GANs) in settings where the available data are scarce or poorly labelled (e.g., in real-life applications). Recently, meta-learning has proved effective at solving few-shot classification problems, but its use in noise-to-image generation has only been partially explored. In this paper, we take a first step toward applying a meta-learning algorithm (Reptile) to the discriminator of a GAN and to a mapping network, in order to optimize the random noise z so that it guides the generator into producing images belonging to specific classes. In doing so, we show that the latent space distribution is crucial for generating sharp samples when few training data are available, and we manage to generate samples of previously unseen classes by optimizing the latent space alone, without changing any parameter of the generator network. Finally, we present experiments on two widely used datasets: MNIST and Omniglot.
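The abstract describes applying Reptile's first-order meta-update to the discriminator and to a mapping network that shapes the noise z before it reaches a frozen generator. The paper's own code is not reproduced here; as a hedged illustration only, the sketch below shows the generic Reptile outer update (Nichol et al., 2018) applied to a hypothetical PyTorch mapping network. All names (MappingNetwork, reptile_step, inner_steps, meta_lr) are illustrative assumptions, not the authors' API.

import copy
import torch
import torch.nn as nn

# Illustrative sketch only: a generic Reptile meta-update applied to a
# mapping network that transforms random noise z into a latent code w.
# Module and parameter names are assumptions, not the authors' code.

class MappingNetwork(nn.Module):
    """Maps random noise z to a latent code w consumed by a (frozen) generator."""
    def __init__(self, z_dim=64, w_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, w_dim),
        )

    def forward(self, z):
        return self.net(z)

def reptile_step(model, task_batch, loss_fn, inner_steps=5,
                 inner_lr=1e-3, meta_lr=0.1):
    """One Reptile meta-update: adapt a clone of the model on a single task,
    then move the original weights toward the adapted weights."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                # inner loop on one task
        z, target = task_batch
        loss = loss_fn(adapted(z), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                       # outer (Reptile) update:
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))   # theta += eps * (theta~ - theta)

# Toy usage: each "task" stands in for a class-specific batch of noise/target pairs.
model = MappingNetwork()
z = torch.randn(16, 64)
target_w = torch.randn(16, 64)                  # placeholder task objective
reptile_step(model, (z, target_w), nn.MSELoss())

In the paper's setting, the inner-loop loss would instead come from the GAN objective evaluated on a few samples of one class, so that the mapping network (and discriminator) meta-learn to adapt the latent distribution to new classes without touching the generator's weights.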