Generating Automated Vehicle Testing Scenarios Using Generative Adversarial Network

Date
2023
Authors
Xue, Longjiang
Supervisor
Ma, Jing
Lai, Edmund M-K
Item type
Thesis
Degree name
Master of Computer and Information Sciences
Publisher
Auckland University of Technology
Abstract

In the field of autonomous driving, generating semantic segmentation images from road scene images is of great importance. It allows researchers to test and validate autonomous vehicle functions and algorithms on synthetic traffic scenes without driving on public roads. Generating traffic scenes from existing images has emerged as a significant research area, with generative adversarial networks (GANs) being a popular choice for image-to-image translation in computer vision.

In this thesis, we employ both a traditional conditional adversarial network, the pix2pix model, and the GANformer, a novel and efficient transformer-based GAN, to generate new images. We then propose a third method that combines the GANformer and pix2pix models and evaluate the performance of all three.
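To make the pix2pix objective concrete, the sketch below shows a single conditional-GAN training step in PyTorch. It is illustrative only: the tiny networks, 64x64 dummy tensors, and hyperparameters are placeholders rather than the architectures or settings used in this thesis; only the loss structure (a patch-wise adversarial term plus a lambda-weighted L1 reconstruction term) follows the standard pix2pix recipe.

```python
# Minimal sketch of a pix2pix-style conditional GAN training step (PyTorch).
# Networks, tensor sizes, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Maps a road-scene image to a segmentation-style image (toy stand-in for a U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyPatchDiscriminator(nn.Module):
    """Scores (input, output) pairs patch-wise, as in pix2pix's PatchGAN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=1, padding=1),  # one logit per patch
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))  # condition D on the input image

G, D = TinyGenerator(), TinyPatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # pix2pix's weighting of the L1 reconstruction term

x = torch.randn(4, 3, 64, 64)  # batch of road-scene images (dummy data)
y = torch.randn(4, 3, 64, 64)  # corresponding segmentation images (dummy data)

# Discriminator step: push real pairs toward 1 and fake pairs toward 0.
fake = G(x).detach()
d_real, d_fake = D(x, y), D(x, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to ground truth.
fake = G(x)
d_out = D(x, fake)
loss_g = bce(d_out, torch.ones_like(d_out)) + lambda_l1 * l1(fake, y)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 term is what distinguishes pix2pix from an unconditional GAN: the adversarial loss encourages realistic textures, while the L1 loss ties the output to the specific paired ground-truth image.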

To train our models, we use the Cityscapes dataset, which consists of 5,000 high-quality road scene images and their corresponding manually annotated semantic segmentation maps. This dataset serves as a valuable resource for training our proposed models and evaluating their performance.
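As an illustration of how such paired image/annotation data can be loaded, the sketch below uses torchvision's built-in Cityscapes dataset class. The root path and loader settings are assumptions for the example; Cityscapes itself must be downloaded manually (registration required) and unpacked into its standard leftImg8bit/ and gtFine/ directory layout.

```python
# Hedged sketch: loading Cityscapes image/segmentation pairs with torchvision.
# Assumes the dataset is unpacked under ./cityscapes in the standard layout.
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import Cityscapes

train_set = Cityscapes(
    root="./cityscapes",
    split="train",           # 2,975 of the 5,000 finely annotated images
    mode="fine",             # the high-quality manual annotations
    target_type="semantic",  # pixel-wise class-label maps
    transform=transforms.ToTensor(),
    target_transform=transforms.PILToTensor(),
)
loader = DataLoader(train_set, batch_size=4, shuffle=True, num_workers=2)

for image, seg_map in loader:
    # image: (B, 3, H, W) road scene; seg_map: (B, 1, H, W) integer class IDs.
    # In practice both would be resized (nearest-neighbour for the label maps).
    break
```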

The experimental results show that the third method, which combines pix2pix and GANformer, outperforms the other two, generating semantic segmentation images with higher accuracy and less ambiguity. The experiments also reveal how differently the three methods perform when converting road scene images into semantic segmentation images.

Our findings highlight that incorporating pix2pix into GANformer yields higher accuracy and better performance on traffic scenes. This has significant implications for autonomous driving testing and validation, where accurate and reliable results are crucial: more dependable generation of semantic segmentation images for traffic scenes can contribute to the development of more robust and efficient autonomous vehicle algorithms and functions.

Keywords
Autonomous Driving, Generative Adversarial Network, GANformer, pix2pix, Traffic scene image