Swinv2-Imagen: Hierarchical Vision Transformer Diffusion Models for Text-to-Image Generation

Date
2022-10-19
Authors
Li, R
Li, W
Yang, Y
Wei, H
Jiang, J
Bai, Q
Publisher
arXiv
Abstract

Recently, a number of studies have shown that diffusion models perform remarkably well on text-to-image synthesis tasks, opening up new research opportunities for image generation. Google's Imagen follows this research trend and outperforms DALL-E 2 as the best model for text-to-image generation. However, Imagen merely uses a T5 language model for text processing, which cannot guarantee that the semantic information of the text is learned. Furthermore, the Efficient UNet used by Imagen is not the best choice for image processing. To address these issues, we propose Swinv2-Imagen, a novel text-to-image diffusion model based on a Hierarchical Vision Transformer and a Scene Graph that incorporates a semantic layout. In the proposed model, feature vectors of entities and relationships are extracted from the scene graph and incorporated into the diffusion model, effectively improving the quality of the generated images. In addition, we introduce a Swin-Transformer-based UNet architecture, called Swinv2-Unet, which addresses the problems stemming from CNN convolution operations. Extensive experiments are conducted to evaluate the performance of the proposed model on three real-world datasets, i.e., MSCOCO, CUB and MM-CelebA-HQ. The experimental results show that the proposed Swinv2-Imagen model outperforms several popular state-of-the-art methods.
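
The abstract only sketches how the scene-graph features enter the model, so the following is a minimal, illustrative sketch (not the authors' implementation) of one way entity and relationship embeddings could be fused with T5 text embeddings into a single conditioning sequence for a diffusion UNet's cross-attention; all module names, dimensions and shapes here are assumptions made for illustration.

```python
# Illustrative sketch only (assumed names and dimensions, not the authors'
# code): project scene-graph entity/relation embeddings and T5 text
# embeddings into one conditioning sequence that a diffusion UNet could
# attend to via cross-attention.
import torch
import torch.nn as nn

class SemanticLayoutConditioner(nn.Module):
    def __init__(self, text_dim=1024, graph_dim=256, cond_dim=768):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, cond_dim)    # T5 tokens -> conditioning space
        self.graph_proj = nn.Linear(graph_dim, cond_dim)  # graph nodes/edges -> conditioning space

    def forward(self, text_emb, entity_emb, relation_emb):
        # text_emb:     (B, L_text, text_dim)  frozen T5 encoder outputs
        # entity_emb:   (B, N_ent, graph_dim)  scene-graph entity features
        # relation_emb: (B, N_rel, graph_dim)  scene-graph relationship features
        graph_tokens = torch.cat([entity_emb, relation_emb], dim=1)
        cond = torch.cat([self.text_proj(text_emb),
                          self.graph_proj(graph_tokens)], dim=1)
        return cond  # (B, L_text + N_ent + N_rel, cond_dim)

# Quick check with dummy tensors
cond = SemanticLayoutConditioner()(torch.randn(2, 77, 1024),
                                   torch.randn(2, 8, 256),
                                   torch.randn(2, 12, 256))
print(cond.shape)  # torch.Size([2, 97, 768])
```

In Imagen-style models the text conditioning sequence is consumed by cross-attention layers inside the UNet; in this sketch the graph tokens simply extend that sequence rather than replacing it.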

Keywords
Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); FOS: Computer and information sciences; I.4.0; 94A08
Source
arXiv:2210.09549 [cs.CV]
Rights statement
Creative Commons Attribution 4.0 International