A computational theory of human perceptual mapping

Date
2011-07
Authors
Yeap, WK
Item type
Conference Contribution
Publisher
Cognitive Science Society
Abstract

This paper presents a new computational theory of how humans integrate successive views to form a perceptual map. Traditionally, this problem has been treated as a straightforward integration problem whereby the positions of objects in one view are transformed into the frame of the next view and combined. However, this step creates a paradoxical situation in human perceptual mapping. On the one hand, the method requires errors to be corrected and the map to be constantly updated; on the other hand, human perception and memory show a high tolerance for errors and little integration of successive views. A new theory is presented which argues that our perceptual map is computed by combining views only at their limiting points. To do so, one must be able to recognize and track familiar objects across views. The theory has been tested successfully on mobile robots, and the lessons learned are discussed.
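The core idea in the abstract can be illustrated with a toy sketch. This is not the paper's implementation; it is a minimal 1-D interpretation assuming each view reports landmark positions relative to the viewer. A recognized familiar landmark anchors the viewer in the map frame, so successive views are not continuously integrated; the map is extended only when a view contains landmarks beyond what is already mapped (its limiting points). All names (`PerceptualMap`, `update`, the landmark labels) are hypothetical.

```python
class PerceptualMap:
    """Toy 1-D map: views are combined only where they extend past the map."""

    def __init__(self, first_view):
        # The first view seeds the map in its own coordinate frame.
        # first_view maps landmark name -> position relative to the viewer.
        self.layout = dict(first_view)

    def update(self, view):
        """Combine a new view (name -> position relative to the viewer).

        A recognized landmark localizes the viewer in the map frame, so
        no cumulative integration of every view is needed; only landmarks
        not already in the map are added. Returns the names added.
        """
        familiar = {n: p for n, p in view.items() if n in self.layout}
        if not familiar:
            raise ValueError("no familiar landmark to anchor the view")
        # Localize via one recognized object rather than integrating
        # and error-correcting across all successive views.
        anchor, rel = next(iter(familiar.items()))
        viewer = self.layout[anchor] - rel
        added = []
        for name, rel_pos in view.items():
            if name not in self.layout:
                self.layout[name] = viewer + rel_pos
                added.append(name)
        return added
```

For example, seeding the map with `{"door": 0.0, "desk": 5.0}` and then updating with a view `{"desk": -1.0, "window": 3.0}` places the viewer at 6.0 via the familiar desk and adds the window at 9.0, without re-fusing the positions of already-mapped objects.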

Keywords
perceptual map, cognitive map, spatial layout, spatial cognition
Source
Cognitive Science, Boston, USA, 2011-07-20 to 2011-07-23, pages 429-434
Rights statement
Auckland University of Technology (AUT) encourages public access to AUT information and supports the legal use of copyright material in accordance with the Copyright Act 1994 (the Act) and the Privacy Act 1993. Unless otherwise stated, copyright material contained on this site may be the intellectual property of AUT, a member of staff, or third parties. Any commercial exploitation of this material is expressly prohibited without the written permission of the owner.