Computing a Perceptual Map Using a Stereo-Vision Mobile Robot
Keywords: Human Perceptual Map, Mobile Robot, Stereo-Vision Images, SLAM

Abstract
A new computational model of how humans integrate successive “local environments”, obtained as views at limiting points in the environment, to create a perceptual map has been proposed and validated using a laser-ranging mobile robot. Compared with the SLAM (Simultaneous Localization and Mapping)-based approach, the proposed process is less computationally demanding and provides an interesting account of how humans compute their cognitive maps. Since vision plays an important role in how humans compute their maps, we extend the previous work by implementing the model on a vision-based mobile robot. Specifically, our model takes a prerecorded series of stereo-vision images of a large indoor environment at USM and produces a perceptual map. The results show that the model does not depend on the use of a laser-ranging device, which is significant if the model is intended as a cognitive model of spatial cognition.
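To make the stereo-vision pipeline concrete, the sketch below shows one generic way to turn a prerecorded stereo pair into a "local view" of nearby obstacles and to place successive views into a single 2-D map. It is only an illustration of the general idea, not the authors' perceptual-map model: it uses OpenCV's StereoBM block matcher, and the camera parameters, file names, grid size, and robot poses are all hypothetical placeholders.

```python
# Illustrative sketch only: a generic stereo-to-2-D-map pipeline, not the
# authors' perceptual-map model. All parameters below are assumed values.
import cv2
import numpy as np

FOCAL_PX   = 700.0   # assumed focal length in pixels
BASELINE_M = 0.12    # assumed stereo baseline in metres
CX         = 320.0   # assumed principal point (x)
CELL_M     = 0.05    # grid resolution: 5 cm per cell
GRID       = np.zeros((400, 400), dtype=np.uint8)  # 20 m x 20 m map
ORIGIN     = np.array([200, 200])                  # map centre, in cells

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def local_view(left_path, right_path):
    """Turn one rectified stereo pair into 2-D obstacle points (camera frame)."""
    left  = cv2.imread(left_path,  cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    disp  = matcher.compute(left, right).astype(np.float32) / 16.0
    v, u  = np.nonzero(disp > 1.0)                 # pixels with valid disparity
    z     = FOCAL_PX * BASELINE_M / disp[v, u]     # depth from disparity
    x     = (u - CX) * z / FOCAL_PX                # lateral offset
    keep  = z < 8.0                                # drop far, noisy returns
    return np.stack([x[keep], z[keep]], axis=1)    # (N, 2) points: [x, forward]

def integrate(points_cam, pose):
    """Place one local view into the global grid given a robot pose (x, y, heading)."""
    px, py, heading = pose
    c, s = np.cos(heading), np.sin(heading)
    world = points_cam @ np.array([[c, s], [-s, c]]) + np.array([px, py])
    cells = (world / CELL_M).astype(int) + ORIGIN
    ok = (cells[:, 0] >= 0) & (cells[:, 0] < GRID.shape[1]) & \
         (cells[:, 1] >= 0) & (cells[:, 1] < GRID.shape[0])
    GRID[cells[ok, 1], cells[ok, 0]] = 255         # mark occupied cells

# Hypothetical usage: integrate two prerecorded views taken at known poses.
for (l, r), pose in [(("left_000.png", "right_000.png"), (0.0, 0.0, 0.0)),
                     (("left_001.png", "right_001.png"), (1.0, 0.0, 0.1))]:
    integrate(local_view(l, r), pose)
cv2.imwrite("perceptual_map.png", GRID)
```

In this sketch the per-view poses are taken as given (e.g. from odometry); the paper's contribution concerns how successive local views are integrated without full SLAM-style optimisation.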
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).