Generating Automated Vehicle Testing Scenarios Using Generative Adversarial Network
dc.contributor.advisor | Ma, Jing | |
dc.contributor.advisor | Lai, Edmund M-K | |
dc.contributor.author | Xue, Longjiang | |
dc.date.accessioned | 2023-06-20T22:36:47Z | |
dc.date.available | 2023-06-20T22:36:47Z | |
dc.date.issued | 2023 | |
dc.description.abstract | In the field of autonomous driving, generating semantic segmentation images from road scene images is of utmost importance: it allows researchers to test and validate autonomous vehicle functions and algorithms on synthetic traffic scenes without driving on public roads. Generating traffic scenes from existing images has emerged as a significant research area, with generative adversarial networks (GANs) a popular choice for image-to-image translation in computer vision. In this thesis, we use both a traditional conditional adversarial network, the pix2pix model, and the GANformer, a novel and efficient transformer-based GAN, to generate new images. We then propose a third methodology that combines the GANformer and pix2pix models and compare the performance of all three. To train our models, we use the Cityscapes dataset, which consists of 5,000 high-quality road scene images and their corresponding manually annotated semantic segmentation images; this dataset serves as a valuable resource for training and evaluating our proposed models. The experimental results show that the third methodology, which combines pix2pix and GANformer, outperforms the other two methods, generating semantic segmentation images with higher accuracy and less ambiguity, and the experiments also demonstrate how the three methods differ when converting road scene images into semantic segmentation images. These findings highlight that incorporating pix2pix into GANformer yields higher accuracy and better performance in the context of traffic scenes, which has significant implications for autonomous driving testing and validation, where accurate and reliable results are crucial. The proposed methodology has the potential to improve the accuracy and reliability of generating semantic segmentation images for traffic scenes, contributing to the development of more robust and efficient autonomous vehicle algorithms and functions. (A minimal illustrative sketch of the pix2pix objective follows this record.) | |
dc.identifier.uri | https://hdl.handle.net/10292/16300 | |
dc.language.iso | en | |
dc.publisher | Auckland University of Technology | |
dc.rights.accessrights | OpenAccess | |
dc.subject | Autonomous Driving | |
dc.subject | Generative Adversarial Network | |
dc.subject | GANformer | |
dc.subject | pix2pix | |
dc.subject | Traffic scene image | |
dc.title | Generating Automated Vehicle Testing Scenarios Using Generative Adversarial Network | |
dc.type | Thesis | |
thesis.degree.grantor | Auckland University of Technology | |
thesis.degree.name | Master of Computer and Information Sciences | |
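
The abstract describes the pix2pix component only at a high level. Below is a minimal, hypothetical PyTorch sketch of a pix2pix-style training step, pairing a per-patch adversarial loss with a heavily weighted L1 reconstruction term, as in the original pix2pix formulation. The layer sizes, the names `Generator`, `Discriminator`, and `train_step`, and all hyperparameters are illustrative assumptions, and random tensors stand in for Cityscapes (photo, segmentation) pairs; this is not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a pix2pix-style conditional GAN step.
# Layer sizes and hyperparameters are illustrative, not from the thesis.

class Generator(nn.Module):
    """Tiny encoder-decoder mapping a road-scene photo to a segmentation map."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),   # downsample
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # upsample
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized targets
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style discriminator scoring (input, output) pairs per patch."""
    def __init__(self, in_ch=6):  # photo (3) + segmentation map (3)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # pix2pix weights the L1 reconstruction term heavily

def train_step(photo, seg):
    """One optimization step on a (road photo, segmentation map) pair."""
    fake = G(photo)

    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    d_real = D(photo, seg)
    d_fake = D(photo, fake.detach())
    loss_d = 0.5 * (bce(d_real, torch.ones_like(d_real)) +
                    bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: fool D while staying close to the ground-truth map (L1).
    d_fake = D(photo, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, seg)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test with random tensors standing in for Cityscapes pairs.
photo = torch.randn(1, 3, 64, 64)
seg = torch.randn(1, 3, 64, 64)
print(train_step(photo, seg))
```

The `lambda_l1 = 100.0` weighting follows the trade-off in the original pix2pix paper, where the L1 term anchors the output to the ground truth while the adversarial term sharpens it; the GANformer and the thesis's combined pix2pix+GANformer method would replace the convolutional generator above with a transformer-based one.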