Abstract
We present an automated system that detects, stores, and tracks suitable landmark features in an unknown environment during goal-directed navigation. Stereo vision is used to obtain depth information. The metric information of the scene geometry is stored and then used to localize the robot. If a feature is no longer in view, the robot searches for new features to replace those lost. Localization performance improves on that achieved using odometry alone.

The robot has no previous knowledge of its environment. It first scans the scene for candidate features and selects the best of them using the KLT algorithm. Once features have been selected in both the left and right images, they are matched across the stereo pair. The locations of these features are then computed in a local robot-centered coordinate system, and finally with respect to a reference coordinate system.
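The thesis does not list code for this pipeline; as a rough illustration only, the sketch below uses OpenCV's Shi-Tomasi corner selection (the selection criterion of the KLT tracker) and pyramidal Lucas-Kanade matching between the left and right images, then triangulates each matched pair. It assumes rectified images with known focal length f, baseline b, and principal point (cx, cy); the function name stereo_landmarks and all parameters are illustrative, not taken from this work.

# Minimal sketch (not the thesis implementation): select KLT features in the
# left image, match them into the right image, and triangulate to 3D points
# in a camera/robot-centered frame (camera assumed mounted at the robot origin).
import cv2
import numpy as np

def stereo_landmarks(left, right, f, b, cx, cy, max_features=50):
    """Return an Nx3 array of landmark positions in the robot-centered frame."""
    # Shi-Tomasi "good features to track" -- the KLT feature-selection criterion.
    pts_left = cv2.goodFeaturesToTrack(left, maxCorners=max_features,
                                       qualityLevel=0.01, minDistance=10)
    # Match each selected feature into the right image with pyramidal Lucas-Kanade.
    pts_right, status, _ = cv2.calcOpticalFlowPyrLK(left, right, pts_left, None)

    landmarks = []
    for pl, pr, ok in zip(pts_left.reshape(-1, 2),
                          pts_right.reshape(-1, 2),
                          status.ravel()):
        if not ok:
            continue
        disparity = pl[0] - pr[0]          # horizontal shift between the two views
        if disparity <= 0:
            continue                        # reject impossible matches
        Z = f * b / disparity               # depth from rectified stereo geometry
        X = (pl[0] - cx) * Z / f
        Y = (pl[1] - cy) * Z / f
        landmarks.append((X, Y, Z))
    return np.array(landmarks)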

At regular intervals, the robot uses exteroceptive sensors to obtain geometric information from the environment; these measurements are expressed in a local robot-centered coordinate system. Between two such positions, the robot uses proprioceptive sensors to measure its own change of position.
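A minimal sketch of how the two kinds of measurement might fit together, assuming a planar (x, y, theta) pose: odometric increments are composed into the current pose estimate, and a landmark observed in the robot-centered frame is mapped into the reference frame through that pose. All names here are illustrative; the thesis does not prescribe this particular formulation.

# Minimal sketch, not the thesis implementation.
import numpy as np

def compose(pose, delta):
    """Apply an odometric increment (dx, dy, dtheta) given in the robot frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

def landmark_to_reference(pose, landmark_local):
    """Express a landmark observed in the robot-centered frame in the reference frame."""
    x, y, th = pose
    lx, ly = landmark_local
    return (x + lx * np.cos(th) - ly * np.sin(th),
            y + lx * np.sin(th) + ly * np.cos(th))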

 
Acknowledgements
This project would not have been possible without the support and encouragement of wonderful people. Special thanks are due to my research supervisor, Professor Michael Wingate, who patiently guided my research through more than a few difficult hurdles, and whose enthusiasm for the work was an inspiration.

I would also like to thank Kevin Dowsey from Turnkey Solutions, for software and hardware support.

Finally, I’d like to thank all my family for their love and support, and for giving me the freedom and encouragement to work on this project.

 
1 Introduction
    The localization of a mobile robot is essential in any problem where the robot's position must be known for navigation. Most mobile robots need three basic components to operate correctly: a representation of their environment, a method for navigating in that environment, and a method for determining their position in it. This paper is concerned only with the estimation of the robot's position.

    A naïve approach to this problem is to use odometers to measure the displacement of the robot. This approach, known as dead reckoning, is subject to errors caused by factors beyond the robot's control, such as wheel slippage or collisions. More importantly, dead-reckoning errors grow without bound unless the robot employs sensor feedback to recalibrate its position estimate [8].
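To make the error-growth point concrete, a minimal dead-reckoning update for a differential-drive robot might look as follows: each step integrates the wheel-encoder increments into the pose, so any slippage or systematic bias is accumulated and never corrected without external sensing. The variable names are illustrative, not taken from the thesis.

# Minimal dead-reckoning sketch for a differential-drive robot.
import math

def dead_reckon(pose, d_left, d_right, wheel_base):
    """Integrate left/right wheel displacements into a new (x, y, theta) pose."""
    x, y, th = pose
    d_center = (d_left + d_right) / 2.0        # distance travelled by the robot center
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Midpoint integration of the arc travelled during this step.
    return (x + d_center * math.cos(th + d_theta / 2.0),
            y + d_center * math.sin(th + d_theta / 2.0),
            th + d_theta)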

    An important issue in robot localization is achieving domain independence. Many existing solutions navigate using stored maps. In this work the robot has no previous knowledge of the domain and builds its own map from learned landmarks, which allows it to operate in any environment.

    The basis of this work is the detection of suitable features from a pair of stereo images. This is done automatically by the robot, rather than requiring a human operator to choose features as reported in [11], [9], [6]. The ability to detect suitable features allows the robot to handle real-world applications, including exploration.

    Applications of this type of localization include the exploration of new terrain. The success of the Mars Pathfinder mission has shown the feasibility of sending unmanned robots to explore and map distant planets. The development of autonomous vehicles will be a significant factor in many types of exploration, especially where working conditions are hazardous to human health.
     

