Publication

Fusing visual contour tracking with inertial sensing to recover robot egomotion

Conference Article

Conference

Workshop on Integration of Vision and Inertial Sensors (INERVIS)

Edition

1st

Pages

1881-1898

Doc link

http://paloma.isr.uc.pt/inervis/inervis2003/

Abstract

A method for estimating mobile robot egomotion is presented, which relies on tracking contours in real-time images acquired with an uncalibrated monocular video system. After fitting an active contour to an object in the image, 3D motion is derived from the affine deformations undergone by the contour over the image sequence. More than one object can be tracked at the same time, yielding several different pose estimates, and pose determination is then improved by fusing these estimates. Inertial information is used to obtain better estimates, as it introduces a measure of the real velocity into the tracking algorithm. It is also used to resolve some ambiguities arising from the use of a monocular image sequence. Since the algorithms developed are intended for real-time control systems, computational cost is taken into account.
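
The sketch below is not the authors' implementation; it only illustrates, under simplifying assumptions, two generic steps mentioned in the abstract: fitting a planar affine deformation between two sampled contours by least squares, and fusing several independent pose estimates by inverse-covariance weighting. All function and variable names are hypothetical.

import numpy as np

def fit_affine(contour_ref, contour_cur):
    """Least-squares affine transform mapping contour_ref to contour_cur.

    Both inputs are (N, 2) arrays of corresponding contour points.
    Returns a 2x2 matrix A and a translation t such that
    contour_cur is approximately contour_ref @ A.T + t.
    """
    ones = np.ones((contour_ref.shape[0], 1))
    X = np.hstack([contour_ref, ones])            # (N, 3) design matrix
    params, *_ = np.linalg.lstsq(X, contour_cur, rcond=None)
    A = params[:2, :].T                           # linear (deformation) part
    t = params[2, :]                              # translation part
    return A, t

def fuse_estimates(means, covariances):
    """Fuse independent pose estimates by inverse-covariance weighting."""
    infos = [np.linalg.inv(C) for C in covariances]
    P = np.linalg.inv(sum(infos))                 # fused covariance
    x = P @ sum(I @ m for I, m in zip(infos, means))
    return x, P

In this generic formulation, estimates from several tracked objects contribute in proportion to their confidence: a pose with a large covariance has little influence on the fused result, which is consistent with the abstract's idea of improving pose determination by combining the estimates from all tracked contours.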


Categories

Computer vision

Scientific reference

G. Alenyà, E. Martinez and C. Torras. Fusing visual contour tracking with inertial sensing to recover robot egomotion, 1st Workshop on Integration of Vision and Inertial Sensors, 2003, Coimbra, Portugal, pp. 1881-1898, Universidade de Coimbra.