Improved stixels towards efficient traffic-scene representations

aut.embargo: No (en_NZ)
aut.thirdpc.contains: No (en_NZ)
dc.contributor.advisor: Klette, Reinhard
dc.contributor.advisor: Rezaei, Mahdi
dc.contributor.author: Al-Ani, Noor
dc.date.accessioned: 2019-06-09T21:41:16Z
dc.date.available: 2019-06-09T21:41:16Z
dc.date.copyright: 2019
dc.date.issued: 2019
dc.date.updated: 2019-06-07T10:10:40Z
dc.description.abstract: Stixels are medium-level data representations used for the development of computer vision modules for self-driving cars. A stixel is a column of stacked space cubes ranging from the road surface to the visual end of an obstacle. A stixel represents object height at a distance. It supports object detection and recognition regardless of their specific appearance. Stixel calculations are commonly based on binocular vision; these calculations map millions of pixel disparities into a few hundred stixels. Depending on the applied stereo vision, this binocular approach is sometimes incapable of dealing with low-textured road information or noisy data. The main objective of this work is to evaluate and propose approaches for calculating stixels using different camera configurations and, possibly, also a LiDAR range sensor. This study also highlights the role of ground manifold modelling for stixel calculations. By using simplifying ground manifold models, calculated stixels may suffer from noise, inconsistency, and false-detection rates for obstacles, especially in challenging datasets. Stixel calculations can be improved with respect to accuracy and robustness by using more adaptive ground manifold approximations. A comparative study of stixel results, obtained for different ground-manifold models, also defines a main contribution of this thesis. We also consider multi-layer stixel calculations. Comprehensive experiments are performed on two publicly available challenging datasets. We also use a novel way for comparing calculated stixels with ground truth. We compare depth information, as given by extracted stixels, with ground-truth depth, provided by depth measurements using a highly accurate LiDAR range sensor (as available in one of the public datasets). Experimental results also include quantitative evaluations of the trade-off between accuracy and run time. The results show significant improvements for particular ways of calculating stixels. (en_NZ)
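The abstract describes a stixel as a column reaching from the road surface to the visual top of an obstacle, computed from per-pixel disparities. As a rough illustration of that idea only — not the dynamic-programming method evaluated in the thesis — the following hypothetical Python sketch extracts one stixel from a single disparity column by thresholding against an assumed ground-plane disparity model:

```python
def stixel_from_column(disp_col, ground_disp, tol=1.0):
    """Estimate one stixel (top row, base row, disparity) from a single
    image column, scanning bottom-up. A simplified, hypothetical sketch:
    actual stixel methods optimize over a cost volume with dynamic
    programming rather than hard thresholds.

    disp_col    -- measured disparity per row, index 0 = top image row
    ground_disp -- expected ground-model disparity per row, same indexing
    tol         -- allowed deviation from the ground model (assumed value)
    """
    rows = len(disp_col)
    base = None
    # 1. Scan upward from the image bottom: the stixel base is the first
    #    row whose disparity departs from the ground model.
    for v in range(rows - 1, -1, -1):
        if abs(disp_col[v] - ground_disp[v]) > tol:
            base = v
            break
    if base is None:
        return None  # the whole column is consistent with the ground
    obj_disp = disp_col[base]
    # 2. Keep climbing while the disparity stays near the obstacle's
    #    disparity; where it drops off, the obstacle (and stixel) ends.
    top = base
    for v in range(base - 1, -1, -1):
        if abs(disp_col[v] - obj_disp) > tol:
            break
        top = v
    return top, base, obj_disp
```

With a synthetic column whose lower rows match a linear ground ramp and whose middle rows hold a constant obstacle disparity, the sketch returns the obstacle's top row, base row, and disparity; a column matching the ground everywhere yields no stixel.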
dc.identifier.uri: https://hdl.handle.net/10292/12545
dc.language.iso: en (en_NZ)
dc.publisher: Auckland University of Technology
dc.rights.accessrights: OpenAccess
dc.subject: Stixels (en_NZ)
dc.subject: ground manifold (en_NZ)
dc.subject: v-disparity (en_NZ)
dc.subject: y-disparity (en_NZ)
dc.subject: monocular (en_NZ)
dc.subject: binocular (en_NZ)
dc.subject: trinocular (en_NZ)
dc.subject: obstacle height (en_NZ)
dc.subject: dynamic programming (en_NZ)
dc.subject: LiDAR (en_NZ)
dc.subject: height segmentation (en_NZ)
dc.subject: multi-layer stixels (en_NZ)
dc.title: Improved stixels towards efficient traffic-scene representations (en_NZ)
dc.type: Thesis (en_NZ)
thesis.degree.grantor: Auckland University of Technology
thesis.degree.level: Doctoral Theses
thesis.degree.name: Doctor of Philosophy (en_NZ)
Files
Original bundle
Name: Al-AniN.pdf
Size: 67.3 MB
Format: Adobe Portable Document Format
Description: Thesis
License bundle
Name: license.txt
Size: 889 B
Description: Item-specific license agreed upon to submission