Combining object recognition and metric mapping for spatial modeling with mobile robots.
Given an autonomous mobile robot that can look, listen, move, and recognise parts of its environment, how should it act if it wakes up in an unknown place?
The aim of this work is to enable a mobile robot, equipped with a video camera and a laser sensor, to move around the rooms of a building looking for objects it can recognise.
A method for deciding how a room should be examined is presented, together with a vision system that performs object recognition on flat colour images. The vision system uses a pan-tilt-zoom camera to search a scene in several steps, taking images at different zoom levels and applying two object recognition algorithms: one based on receptive field co-occurrence histograms (RFCH) and one based on matching interest points extracted with the scale-invariant feature transform (SIFT). The RFCH algorithm detects zones of an image where an object may be present, so the camera can zoom in on those zones; the final recognition is then performed with the SIFT algorithm once the object is magnified enough.
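The coarse detection stage can be illustrated with a much-simplified sketch. The code below builds a grey-level co-occurrence histogram and compares two histograms by intersection; it is a stand-in for the actual RFCH method (which operates on receptive field responses over colour images), and the bin count and pixel offset are illustrative assumptions, not parameters from this work. Windows of the scene whose histograms intersect strongly with the object model would become candidate zones for zooming in.

```python
import numpy as np

def cooccurrence_histogram(image, n_bins=8, offset=(0, 1)):
    """Count quantised grey-value pairs at a fixed spatial offset.
    A simplified stand-in for receptive field co-occurrence
    histograms (RFCH); n_bins and offset are illustrative choices."""
    # Quantise 8-bit grey values into n_bins levels.
    q = (image.astype(np.int64) * n_bins // 256).clip(0, n_bins - 1)
    dy, dx = offset
    # Pair each pixel with its neighbour at the given offset.
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    hist = np.zeros((n_bins, n_bins), dtype=np.float64)
    np.add.at(hist, (a, b), 1.0)
    # Normalise so images of different sizes are comparable.
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; high values flag zones worth a zoom."""
    return np.minimum(h1, h2).sum()
```

In a full pipeline, the model histogram would be matched against sliding-window histograms of the scene, and windows scoring above a threshold would be passed on to the SIFT-based recognition step at a higher zoom level.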
The room exploration method mentioned above yields a greedy combination of the points the robot has to visit and the places it has to look at, together with the order in which these points must be covered to reduce the search time. It consists of turning a two-dimensional map into an occupancy grid, in which occupied cells are grouped according to certain conditions, so as to decide from which position each group is best observed.
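The two ingredients of this exploration step can be sketched in a minimal form: grouping occupied grid cells into connected regions, and ordering the resulting targets greedily by distance. This is an assumption-laden simplification (4-connectivity for grouping, straight-line distance, nearest-first ordering); the actual grouping conditions and cost model of this work are not reproduced here.

```python
from math import hypot

def cluster_occupied(grid):
    """Group 4-connected occupied cells (1 = occupied, 0 = free) of a
    2-D occupancy grid via flood fill; each group is a candidate
    structure the robot should look at from some viewing position."""
    rows, cols = len(grid), len(grid[0])
    seen, clusters = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                stack, cluster = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    cluster.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                clusters.append(cluster)
    return clusters

def greedy_tour(start, targets):
    """Visit targets by repeatedly moving to the nearest unvisited one:
    a greedy ordering that keeps the total travel, and hence the
    search time, short."""
    order, pos, remaining = [], start, list(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: hypot(t[0] - pos[0],
                                                 t[1] - pos[1]))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order
```

For example, clustering a grid with two separated occupied regions yields two groups, and `greedy_tour` then orders one representative point per group by nearest-first distance from the robot's starting cell.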