Generating Automated Vehicle Testing Scenarios Using Generative Adversarial Network

dc.contributor.advisor Ma, Jing
dc.contributor.advisor Lai, Edmund M-K
dc.contributor.author Xue, Longjiang
dc.date.accessioned 2023-06-20T22:36:47Z
dc.date.available 2023-06-20T22:36:47Z
dc.date.issued 2023
dc.description.abstract In the field of autonomous driving, generating semantic segmentation images from road scene images is of central importance: it allows researchers to test and validate autonomous vehicle functions and algorithms on synthetic traffic scenes without driving on public roads. Generating traffic scenes from existing images has emerged as a significant research area, with generative adversarial networks (GANs) a popular choice for image-to-image translation in computer vision. In this thesis, we use both a traditional conditional adversarial network, the pix2pix model, and the GANformer, a novel and efficient transformer-based GAN, to generate new images. We then propose a third methodology that combines the GANformer and pix2pix models and compare the performance of all three. To train our models, we use the Cityscapes dataset, which consists of 5,000 high-quality road scene images with corresponding manually annotated semantic segmentation images, providing a valuable resource for training and evaluating the proposed models. The experimental results show that the third methodology, combining pix2pix and GANformer, outperforms the other two methods, generating semantic segmentation images with higher accuracy and less ambiguity. The experiments also demonstrate how the three methods differ when converting road scene images into semantic segmentation images. Our findings highlight that incorporating pix2pix into GANformer yields higher accuracy and better performance on traffic scenes, which has significant implications for autonomous driving testing and validation, where accurate and reliable results are crucial. The proposed methodology has the potential to improve the accuracy and reliability of generated semantic segmentation images for traffic scenes, contributing to the development of more robust and efficient autonomous vehicle algorithms and functions.
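The abstract reports that the combined pix2pix + GANformer method achieves "higher accuracy" on generated segmentation images. The record does not specify the metric, so as an illustrative sketch only, here is how generated semantic segmentation maps are commonly scored against ground truth on Cityscapes-style benchmarks: pixel accuracy and mean intersection-over-union (mIoU). The function name and the toy label maps below are invented for illustration.

```python
import numpy as np

def segmentation_scores(pred, gt, num_classes):
    """Pixel accuracy and mean IoU between two integer label maps."""
    assert pred.shape == gt.shape
    pixel_acc = float(np.mean(pred == gt))
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return pixel_acc, float(np.mean(ious))

# Toy 3-class label maps standing in for a generated segmentation
# image and its manually annotated Cityscapes ground truth.
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 2, 2]])
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [2, 2, 2, 0]])

acc, miou = segmentation_scores(pred, gt, num_classes=3)
print(round(acc, 4), round(miou, 4))  # 10/12 pixels correct; mIoU averages per-class IoU
```

mIoU is the standard Cityscapes evaluation metric; unlike raw pixel accuracy, it is not dominated by large classes such as road and sky, which is why both numbers are usually reported together.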
dc.identifier.uri https://hdl.handle.net/10292/16300
dc.language.iso en
dc.publisher Auckland University of Technology
dc.rights.accessrights OpenAccess
dc.subject Autonomous Driving
dc.subject Generative Adversarial Network
dc.subject GANformer
dc.subject pix2pix
dc.subject Traffic scene image
dc.title Generating Automated Vehicle Testing Scenarios Using Generative Adversarial Network
dc.type Thesis
thesis.degree.grantor Auckland University of Technology
thesis.degree.name Master of Computer and Information Sciences
Files
Original bundle
Name: XueL.pdf
Size: 3.07 MB
Format: Adobe Portable Document Format
Description: Thesis
License bundle
Name: license.txt
Size: 889 B
Format: Item-specific license agreed upon to submission