Swinv2-Imagen: Hierarchical Vision Transformer Diffusion Models for Text-to-Image Generation

aut.researcher: Li, Weihua
dc.contributor.author: Li, R
dc.contributor.author: Li, W
dc.contributor.author: Yang, Y
dc.contributor.author: Wei, H
dc.contributor.author: Jiang, J
dc.contributor.author: Bai, Q
dc.date.accessioned: 2022-10-21T01:30:36Z
dc.date.available: 2022-10-21T01:30:36Z
dc.date.copyright: 2022-10-19
dc.date.issued: 2022-10-19
dc.description.abstract: Recently, a number of studies have shown that diffusion models perform remarkably well on text-to-image synthesis tasks, opening new research opportunities in image generation. Google's Imagen follows this trend and outperforms DALL-E 2 as the best model for text-to-image generation. However, Imagen uses only a T5 language model for text processing, which cannot guarantee that the semantic information of the text is learned. Furthermore, the Efficient UNet used by Imagen is not the best choice for image processing. To address these issues, we propose Swinv2-Imagen, a novel text-to-image diffusion model based on a hierarchical vision transformer and a scene graph incorporating a semantic layout. In the proposed model, the feature vectors of entities and relationships are extracted and fed into the diffusion model, effectively improving the quality of the generated images. On top of that, we also introduce a Swin-Transformer-based UNet architecture, called Swinv2-Unet, which addresses the problems stemming from CNN convolution operations. Extensive experiments are conducted to evaluate the performance of the proposed model on three real-world datasets, i.e., MSCOCO, CUB and MM-CelebA-HQ. The experimental results show that the proposed Swinv2-Imagen model outperforms several popular state-of-the-art methods.
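The abstract describes extracting feature vectors for scene-graph entities and relationships and feeding them, alongside the text embedding, into the diffusion model as conditioning. A minimal illustrative sketch of that idea (not the authors' code; all names and dimensions here are hypothetical) is to pool the entity and relation embeddings and concatenate them with the text embedding to form the denoiser's conditioning vector:

```python
# Illustrative sketch only: pooling hypothetical scene-graph embeddings and
# concatenating them with a text embedding, as a conditioning vector for a
# diffusion denoiser. Pure-Python stand-in; real models would use tensors.

def mean_pool(vectors):
    """Average a list of equal-length embedding vectors element-wise."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def build_condition(text_emb, entity_embs, relation_embs):
    """Concatenate text embedding with pooled entity/relation embeddings."""
    return text_emb + mean_pool(entity_embs) + mean_pool(relation_embs)

# Toy example: a 4-dim text embedding plus two 2-dim entity embeddings
# and two 2-dim relation embeddings yields an 8-dim conditioning vector.
cond = build_condition(
    [0.1, 0.2, 0.3, 0.4],
    [[1.0, 0.0], [0.0, 1.0]],
    [[0.5, 0.5], [0.5, 0.5]],
)
print(len(cond))  # 8
```

In the actual model the conditioning would enter the UNet via cross-attention rather than simple concatenation; this sketch only shows the shape of the combined signal.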
dc.identifier.citation: arXiv:2210.09549 [cs.CV]
dc.identifier.doi: 10.48550/arXiv.2210.09549
dc.identifier.uri: https://hdl.handle.net/10292/15539
dc.publisher: arXiv
dc.relation.uri: https://arxiv.org/abs/2210.09549
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.accessrights: OpenAccess
dc.subject: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); FOS: Computer and information sciences; I.4.0; 94A08
dc.title: Swinv2-Imagen: Hierarchical Vision Transformer Diffusion Models for Text-to-Image Generation
pubs.elements-id: 481886
pubs.organisational-data: /AUT
pubs.organisational-data: /AUT/Faculty of Design & Creative Technologies
pubs.organisational-data: /AUT/Faculty of Design & Creative Technologies/School of Engineering, Computer & Mathematical Sciences
pubs.organisational-data: /AUT/PBRF
pubs.organisational-data: /AUT/PBRF/PBRF Design and Creative Technologies
pubs.organisational-data: /AUT/PBRF/PBRF Design and Creative Technologies/ECMS PBRF 2018
Files
Original bundle
Name: 2210.09549.pdf
Size: 15.72 MB
Format: Adobe Portable Document Format
Description: Journal article
License bundle
Name: AUT Grant of Licence for Tuwhera Jun 2021.pdf
Size: 360.95 KB
Format: Adobe Portable Document Format
Description: