Robot mapping without a precise map
In this thesis, I took two key ideas from Yeap's (1988) theory of cognitive mapping and developed a robot mapping system that maps without computing a precise map.
Yeap argues that in a cognitive mapping process, it is important to compute a local space representation that affords boundedness and a global map that tells one roughly where things are located with respect to the self. While these two representations appear similar to the global maps and topological networks of local spaces that robotics researchers compute for their robots, there are two major differences: first, Yeap's global map is a transient, inexact map; second, the local space computed is often inexact and incomplete. Computing such representations means that one does not need to correct sensor errors and generate an exact map.
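The two representations can be caricatured in a few lines of code. The sketch below is purely illustrative: the class and field names are my own hypothetical choices, not the thesis's actual implementation, and surfaces are reduced to 2-D line segments for simplicity. The point it makes is that the global map stores only rough, self-centred positions and is transient (old surfaces can simply be forgotten) rather than metrically corrected.

```python
from dataclasses import dataclass, field

@dataclass
class Surface:
    """A wall segment seen by the robot, as a pair of 2-D endpoints.

    Coordinates are rough and self-centred, not globally precise.
    """
    start: tuple[float, float]
    end: tuple[float, float]

@dataclass
class LocalSpace:
    """A bounded local space: a possibly inexact, incomplete set of
    boundary surfaces around the robot."""
    surfaces: list[Surface] = field(default_factory=list)

@dataclass
class GlobalMap:
    """A transient, inexact map: rough positions of surfaces relative
    to the self, never corrected to metric precision."""
    surfaces: list[Surface] = field(default_factory=list)

    def forget_distant(self, max_range: float) -> None:
        """Transience: drop surfaces far from the self-centred origin
        instead of maintaining them forever."""
        def near(s: Surface) -> bool:
            x, y = s.start
            return (x * x + y * y) ** 0.5 <= max_range
        self.surfaces = [s for s in self.surfaces if near(s)]
```

This is a deliberate contrast with conventional SLAM-style maps, which keep one globally consistent coordinate frame and correct it as errors are detected.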
I have developed one such algorithm and tested it successfully on a mobile robot equipped with laser and odometer sensors in a large office environment. The journey through the environment before loop closing covers an area of about 30 m x 30 m. The robot went around the environment twice, once in a clockwise and once in an anti-clockwise direction. It also found its way from one part of the environment to another.
There are three key steps in my approach. The first step is to constantly detect "landmarks" in two consecutive views (the current view and the previous view). Using only two consecutive views means that sensor errors do not accumulate. Furthermore, one gets two copies of each landmark: one currently in view and the other in memory. Consequently, their positions in space need not be absolute. The second step is to use the identified landmarks as a frame of reference to localize unknown surfaces that appear in the current view. The third step is to enter those unknown surfaces into the global map using its own landmarks.
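The three steps above can be sketched as follows. This is a minimal illustration under strong simplifying assumptions of my own, not the thesis's code: surfaces are 2-D line segments, landmarks are matched between the previous and current view by similar length and orientation, and the view-to-map alignment is a pure translation estimated from the matched pairs. The function names and tolerances are hypothetical.

```python
import math

# A surface is a pair of 2-D endpoints.
Segment = tuple[tuple[float, float], tuple[float, float]]

def length(s: Segment) -> float:
    (x1, y1), (x2, y2) = s
    return math.hypot(x2 - x1, y2 - y1)

def angle(s: Segment) -> float:
    (x1, y1), (x2, y2) = s
    return math.atan2(y2 - y1, x2 - x1)

def midpoint(s: Segment) -> tuple[float, float]:
    (x1, y1), (x2, y2) = s
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def match_landmarks(prev_view, curr_view, len_tol=0.2, ang_tol=0.1):
    """Step 1: pair up surfaces seen in both consecutive views.

    Because matching only ever spans two adjacent views, sensor error
    does not accumulate over the whole journey.
    """
    pairs = []
    for p in prev_view:
        for c in curr_view:
            if (abs(length(p) - length(c)) < len_tol
                    and abs(angle(p) - angle(c)) < ang_tol):
                pairs.append((p, c))
                break
    return pairs

def localize_unknown(curr_view, landmark_pairs):
    """Step 2: use the matched landmarks as a frame of reference to
    place the remaining (unknown) surfaces of the current view."""
    # Estimate the shift between views from the landmark midpoints.
    dx = dy = 0.0
    for p, c in landmark_pairs:
        pm, cm = midpoint(p), midpoint(c)
        dx += pm[0] - cm[0]
        dy += pm[1] - cm[1]
    n = len(landmark_pairs)
    dx, dy = dx / n, dy / n
    # Surfaces with no landmark match are unknown; shift them into the
    # landmark-anchored frame.
    matched = {id(c) for _, c in landmark_pairs}
    return [(((x1 + dx, y1 + dy), (x2 + dx, y2 + dy)))
            for ((x1, y1), (x2, y2)) in curr_view
            if id(((x1, y1), (x2, y2))) not in matched
            or True]  # placeholder; filtering is done below

def update_global_map(global_map, placed_unknowns):
    """Step 3: enter the newly localized surfaces into the global map,
    anchored by landmarks rather than by absolute coordinates."""
    global_map.extend(placed_unknowns)
    return global_map
```

Note that the positions entered into the map are only as good as the landmark-based frame of reference, which is exactly the point: the map stays inexact but usable.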
The development of such an algorithm has led to better insights into both cognitive and robot mapping. From a cognitive standpoint, what is important is that we now have an algorithm that computes an inexact map by attending to recognizable surfaces (referred to as landmark surfaces) in successive views, rather than by depending on continuous tracking of one's position and orientation in the environment. Furthermore, it does not require continuous updating of the map as long as some surfaces overlap between views. Both are characteristics of the human cognitive mapping process. From a robot mapping standpoint, my new algorithm shows that it is possible to compute and use an inexact map for navigation. This could be a new paradigm for robot mapping.