3 Implementation
    The robot consists of a TRC Labmate platform on which the Bisight head is mounted. Two Pulnix cameras placed on the head capture images. The visual processing and localization are implemented in C++ on two PCs. One PC hosts a frame grabber for stereo image capture and sends commands through a communications port to the TRC Labmate platform. The second PC uses two communications ports to control the Bisight PTV system.
     

    3.1 Approach

       
    The robot begins by taking a pair of stereo snapshots at its starting position. Suitable features are detected in each image using the algorithm developed by Stan Birchfield [10], based on the Kanade-Lucas-Tomasi tracker; it will be referred to as KLT from this point forward. KLT detects features effectively, but not all of the detected features are suitable for tracking. No attempt is made to distinguish between them at this initial stage; unsuitable features can be rejected later, since they will not be tracked under the constraints of the filter.

    The robot then attempts to find 25 features in each image and stores them. Once they are stored, it matches features from the left image to the right image (stereo matching). The stereo matching procedure will be described later; for now we focus on the general operation of the robot. The stereo-matched features are stored and used to compute their 3D positions in the robot’s coordinate frame, and these are added to the robot’s map. With this information, the robot can calculate its position and compare it with its odometry reading. The robot then moves to a new position and captures a new pair of stereo images, using the left image to track features from its previous position. Any features that are not successfully tracked into the new image are replaced with new ones. The tracked and replacement features are used to stereo match the current image pair after 25 suitable features are found in the right image. The procedure then repeats, with the addition that the map is updated with the replacement features once they are referenced to the fixed world coordinate frame. The whole procedure is shown in Figure 5.
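The loop above can be sketched as follows. This is a hypothetical skeleton of the main algorithm in Figure 5; the `Feature` type and the helper names are illustrative stand-ins, not the original code’s API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Feature { float x, y; bool tracked; };

const std::size_t kNumFeatures = 25;   // features sought in each image

// Stand-in for KLT detection of fresh features in an image.
std::vector<Feature> detectFeatures() {
    return std::vector<Feature>(kNumFeatures, Feature{0.f, 0.f, true});
}

// After tracking from the previous left image, swap any lost features for
// newly detected ones; the replacements are the features later added to the
// map once referenced to the world frame. Returns how many were replaced.
std::size_t replaceLostFeatures(std::vector<Feature>& feats) {
    std::size_t replaced = 0;
    for (Feature& f : feats) {
        if (!f.tracked) {
            f = Feature{0.f, 0.f, true};
            ++replaced;
        }
    }
    return replaced;
}
```

Each iteration would then call `replaceLostFeatures` on the tracked left-image set before stereo matching against the right image.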

       
    3.2 Stereo Match Algorithm
       
    To determine which feature in the right image corresponds to each feature in the left image, it is necessary to calculate the epipolar line in the right image corresponding to each left feature [7]. First the algorithm reads the list of features detected for each image. The list provides the coordinates (xL, yL) of each feature in the image saved on disk. These must be converted to camera pixels, since the image captured by the camera is 752×582 pixels while the image saved on disk, and used to detect features, is 384×288.
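The conversion is a per-axis rescaling between the two resolutions. A minimal sketch, with an assumed helper name (the original code’s interface is not given):

```cpp
#include <cassert>
#include <utility>

// Image sizes quoted in the text.
const float kDiskW = 384.f, kDiskH = 288.f;   // image saved on disk
const float kCamW  = 752.f, kCamH  = 582.f;   // image captured by the camera

// Convert a feature's disk-image coordinates to camera pixels. Note the two
// axes use slightly different scale factors (752/384 vs. 582/288).
std::pair<float, float> diskToCameraPixels(float xL, float yL) {
    return { xL * kCamW / kDiskW, yL * kCamH / kDiskH };
}
```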

    In order to make a good stereo match, a prediction of the disparity from the left to the right image is used. This approximation is determined from the distance moved by the robot, which can be obtained from the robot’s odometry.
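One simple way to form such a prediction, assuming parallel cameras and a feature roughly ahead of the robot, uses the pinhole relation d = fB/Z (f: focal length in pixels, B: baseline, Z: depth). The function and symbols below are illustrative assumptions, not the original formulation:

```cpp
#include <cassert>

// Predict a feature's disparity after the robot advances toward it: the
// forward move (from odometry) reduces the feature's depth, so the expected
// left-right disparity grows as d = f*B / (Z - advance).
float predictedDisparity(float f, float baseline, float depth, float advance) {
    return f * baseline / (depth - advance);
}
```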

    Then, for each feature in the left image, the distance from each right feature to the epipolar line is calculated. The list of distances is sorted in ascending order, and features farther than an acceptable distance from the line are filtered out; after some experimentation, a threshold of 20 pixels was found to be sufficient. The disparity between the left feature and each remaining candidate in the right image is then calculated, and the candidate with the lowest disparity within a small range around the predicted disparity is chosen as the match.
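A sketch of this candidate selection follows: gate candidates by distance to the epipolar line (20 pixels), keep those whose disparity lies within a tolerance of the prediction, and pick the smallest disparity among them. The types, names, and parameters are assumptions for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { float x, y; };

// Perpendicular distance from point p to the line a*x + b*y + c = 0.
float lineDistance(const Point& p, float a, float b, float c) {
    return std::fabs(a * p.x + b * p.y + c) / std::sqrt(a * a + b * b);
}

// Index of the match in `right` for a left feature at `left`, or -1 if no
// candidate passes both gates. (a, b, c) is the epipolar line in the right
// image for this left feature.
int pickMatch(const Point& left, const std::vector<Point>& right,
              float a, float b, float c,
              float predictedDisp, float dispTol) {
    const float kEpipolarTol = 20.f;   // pixels, found empirically in the text
    int best = -1;
    float bestDisp = 1e9f;
    for (std::size_t i = 0; i < right.size(); ++i) {
        if (lineDistance(right[i], a, b, c) > kEpipolarTol) continue;
        float disp = left.x - right[i].x;
        if (std::fabs(disp - predictedDisp) > dispTol) continue;
        if (disp < bestDisp) { bestDisp = disp; best = static_cast<int>(i); }
    }
    return best;
}
```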

    Once the matched features are obtained, their 3D position in the robot’s coordinate frame is calculated. This information is used together with the current estimation of the robot’s position to place the features in the fixed world coordinate frame.
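For parallel (rectified) cameras, this 3D recovery reduces to standard stereo triangulation. The Bisight head can verge, so the original geometry is more involved; the simplified sketch below, with assumed names and an assumed principal point (cx, cy), only illustrates the idea:

```cpp
#include <cassert>
#include <cmath>

struct Point3 { float X, Y, Z; };

// Triangulate a matched feature from its left/right image coordinates,
// assuming rectified cameras: depth Z = f*B/d, then back-project.
// f: focal length in pixels, baseline: camera separation.
Point3 triangulate(float xL, float yL, float xR,
                   float f, float baseline, float cx, float cy) {
    float d = xL - xR;              // disparity in pixels
    float Z = f * baseline / d;     // depth along the optical axis
    return { (xL - cx) * Z / f, (yL - cy) * Z / f, Z };
}
```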

    An overview of the steps taken to find the stereo matches is shown in Figure 6.

       
    3.3 Position Estimation

    Position was estimated by calculating the shift in the 3D location of each tracked feature, so each tracked feature gives an estimate of the robot’s shift. To minimize errors, outlier estimates more than 40 cm from the average of the estimates were removed. Once the outliers were filtered, the average of the remaining estimates was calculated to give a better solution. Section 4.6 presents the results at several stages of the experiment.
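The averaging step can be sketched as below, shown in one dimension for clarity; the function name is an assumption, and the real shift estimates are 3D:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Robust average of per-feature shift estimates (in cm): compute the mean,
// drop estimates more than 40 cm from it, and average the survivors.
float robustShift(const std::vector<float>& estimates) {
    float mean = 0.f;
    for (float e : estimates) mean += e;
    mean /= estimates.size();

    const float kOutlierRadius = 40.f;   // cm, threshold from the text
    float sum = 0.f;
    std::size_t kept = 0;
    for (float e : estimates) {
        if (std::fabs(e - mean) <= kOutlierRadius) { sum += e; ++kept; }
    }
    return kept ? sum / kept : mean;     // fall back if everything is rejected
}
```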
     
     
     

    Figure 5. Flow Chart of main algorithm.

    Figure 6. Flow chart of algorithm used to stereo match features.




 

Last update: 12/11/99

Comments, suggestions and queries to manuel@earthling.net.

Copyright © 1999 Manuel Noriega.