Rotational and translational bias estimation based on depth and image measurements
Conference paper, Year: 2012

Abstract

Constant biases associated with the measured linear and angular velocities of a moving object can be estimated from measurements of a static environment provided by an embedded camera and depth sensor. We propose a Lyapunov-based observer that takes advantage of the SO(3)-invariance of the partial differential equations satisfied by the measured brightness and depth fields. The resulting observer is governed by a nonlinear integro/partial differential system whose inputs are the linear/angular velocities and the brightness/depth fields. Convergence is analyzed under C³ regularity assumptions on the object's motion and its environment. Technically, the analysis relies on the Ascoli-Arzelà theorem and the pre-compactness of the observer trajectories. It ensures asymptotic convergence of the estimated brightness and depth fields. Convergence of the estimated biases is characterized by constraints that depend only on the environment. We conjecture that these constraints are automatically satisfied when the environment does not admit any rotational symmetry axis. Such asymptotic observers can be adapted to any realistic camera model. Preliminary simulations with synthetic image and depth data (corrupted by noise of around 10%) indicate that such Lyapunov-based observers converge under much weaker regularity assumptions.
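To illustrate the adaptive-observer principle described in the abstract, the toy simulation below estimates a constant velocity bias on a single scalar state using a Lyapunov-based observer: a copy of the dynamics driven by the de-biased measurement, output injection, and an integral adaptation law. This is only a finite-dimensional sketch under assumed values: the scalar model, the excitation signal, and the gains k and gamma are illustrative choices, not the paper's infinite-dimensional brightness/depth observer.

```python
# Toy finite-dimensional analog of Lyapunov-based constant-bias estimation.
# Not the paper's observer: x stands in for the measured brightness/depth
# fields, u_m is a velocity measurement with constant bias b, and the gains
# k, gamma are illustrative assumptions.
import numpy as np

dt, T = 1e-3, 20.0
k, gamma = 2.0, 1.0            # output-injection and adaptation gains (assumed)
b = 0.3                        # true constant bias, unknown to the observer

x, x_hat, b_hat = 0.0, 1.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    u = np.cos(t)              # true velocity of the moving object
    u_m = u + b                # biased velocity measurement
    e = x_hat - x              # output error between estimated and measured state
    # Observer: copy of the dynamics with the bias estimate removed, plus
    # output injection; the adaptation law integrates the output error.
    # The Lyapunov function V = e**2 / 2 + (b_hat - b)**2 / (2 * gamma)
    # decreases along trajectories: dV/dt = -k * e**2 <= 0.
    x_hat += (u_m - b_hat - k * e) * dt
    b_hat += gamma * e * dt
    x += u * dt                # true (unmeasured-velocity) dynamics

print(f"estimated bias: {b_hat:.3f}   true bias: {b}")
```

With these gains the estimation error decays with a time constant of about one second, so b_hat is close to the true value 0.3 well before the end of the run. The paper's observer follows the same pattern, but with transport equations on the brightness and depth fields and integral correction terms, hence the nonlinear integro/partial differential structure mentioned above.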
File not deposited

Dates and versions

hal-00787792, version 1 (12-02-2013)

Identifiers

  • HAL Id: hal-00787792, version 1

Cite

Nadège Zarrouati, Pierre Rouchon, Karine Beauchard. Rotational and translational bias estimation based on depth and image measurements. Conference on Decision and Control (2012), Dec 2012, France. pp. 6627-6634. ⟨hal-00787792⟩