Factor decomposed generative adversarial networks for text-to-image synthesis

J Li, X Liu, L Zheng - arXiv preprint arXiv:2303.13821, 2023 - arxiv.org
Prior work on text-to-image synthesis typically concatenates the sentence embedding with the noise vector. However, the sentence embedding and the noise vector are two different factors that control different aspects of the generation, and simply concatenating them entangles the latent factors and encumbers the generative model. In this paper, we attempt to decompose these two factors and propose Factor Decomposed Generative Adversarial Networks (FDGAN). To achieve this, we first generate images from the noise vector alone and then apply the sentence embedding in the normalization layers of both the generator and the discriminators. We also design an additive norm layer to align and fuse the text and image features. The experimental results show that decomposing the noise vector and the sentence embedding disentangles the latent factors in text-to-image synthesis and makes the generative model more efficient. Compared with the baseline, FDGAN achieves better performance while using fewer parameters.
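The abstract only sketches the architecture, so the PyTorch snippet below is a minimal illustration, not the paper's implementation: it assumes a conditional-batch-norm-style layer (here called ConditionalNorm) for injecting the sentence embedding through normalization, and one plausible reading of the "additive norm layer" (here AdditiveNorm) that normalizes both modalities and fuses them by addition. All names, dimensions, and the exact fusion rule are assumptions.

```python
import torch
import torch.nn as nn

class ConditionalNorm(nn.Module):
    """Sketch of text conditioning via normalization: the sentence embedding
    predicts a per-channel scale and shift, so the text enters the network
    through the norm layer instead of being concatenated with the noise."""
    def __init__(self, num_features: int, embed_dim: int):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(embed_dim, num_features)  # text -> channel scale
        self.beta = nn.Linear(embed_dim, num_features)   # text -> channel shift
    def forward(self, x: torch.Tensor, sent_emb: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        g = self.gamma(sent_emb).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(sent_emb).unsqueeze(-1).unsqueeze(-1)
        return (1.0 + g) * h + b

class AdditiveNorm(nn.Module):
    """Hypothetical reading of the additive norm layer: bring the image
    feature map and a projected text embedding to a comparable scale,
    then fuse them by addition."""
    def __init__(self, num_features: int, embed_dim: int):
        super().__init__()
        self.img_norm = nn.InstanceNorm2d(num_features, affine=True)
        self.txt_proj = nn.Linear(embed_dim, num_features)
    def forward(self, x: torch.Tensor, sent_emb: torch.Tensor) -> torch.Tensor:
        t = self.txt_proj(sent_emb)
        t = t / (t.norm(dim=1, keepdim=True) + 1e-8)  # unit-normalize text feature
        return self.img_norm(x) + t.unsqueeze(-1).unsqueeze(-1)

# Usage sketch: the noise vector alone drives the generator input, while
# the sentence embedding conditions only the normalization layers.
z = torch.randn(4, 128)                 # noise factor
sent = torch.randn(4, 256)              # sentence-embedding factor
feat = torch.randn(4, 64, 16, 16)       # an intermediate feature map
feat = ConditionalNorm(64, 256)(feat, sent)
feat = AdditiveNorm(64, 256)(feat, sent)
```

The point of the sketch is the decomposition itself: because the text never shares an input slot with the noise, each factor has its own pathway into the model, which is the disentanglement the abstract argues for.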