Visual-Inertial Odometry with Online Calibration of Velocity-Control Based Kinematic Motion Models
- URL: http://arxiv.org/abs/2204.06776v3
- Date: Tue, 18 Apr 2023 09:45:42 GMT
- Title: Visual-Inertial Odometry with Online Calibration of Velocity-Control Based Kinematic Motion Models
- Authors: Haolong Li and Joerg Stueckler
- Abstract summary: Visual-inertial odometry (VIO) is an important technology for autonomous robots with power and payload constraints.
We propose a novel approach for VIO with stereo cameras which integrates and calibrates the velocity-control based kinematic motion model of wheeled mobile robots online.
- Score: 3.42658286826597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual-inertial odometry (VIO) is an important technology for autonomous
robots with power and payload constraints. In this paper, we propose a novel
approach for VIO with stereo cameras which integrates and calibrates the
velocity-control based kinematic motion model of wheeled mobile robots online.
Including such a motion model can help to improve the accuracy of VIO. Compared
to several previous approaches proposed to integrate wheel odometer
measurements for this purpose, our method does not require wheel encoders and
can be applied when the robot motion can be modeled with a velocity-control based
kinematic motion model. We use radial basis function (RBF) kernels to
compensate for the time delay and deviations between control commands and
actual robot motion. The motion model is calibrated online by the VIO system
and can be used as a forward model for motion control and planning. We evaluate
our approach with data obtained in variously sized indoor environments,
demonstrate improvements over a pure VIO method, and evaluate the prediction
accuracy of the online calibrated model.
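The abstract names two ingredients: a velocity-control based kinematic motion model and RBF kernels that compensate for the delay and deviation between control commands and actual robot motion. Below is a minimal sketch of how such a calibrated model could be rolled out as a forward model for prediction; the unicycle-style integration, the additive RBF correction, and all names (rbf_features, predict_pose, the kernel centers and weights) are illustrative assumptions, not the formulation used in the paper.

```python
import numpy as np

def rbf_features(u, centers, gamma=4.0):
    """Gaussian RBF features of a control input u (hypothetical form).

    Each feature is exp(-gamma * ||u - c||^2) for a kernel center c.
    """
    d2 = np.sum((centers - u) ** 2, axis=1)
    return np.exp(-gamma * d2)

def predict_pose(pose, controls, weights, centers, dt=0.05, gamma=4.0):
    """Roll out a velocity-control based kinematic (unicycle) motion model.

    pose     : (x, y, yaw) of the robot
    controls : sequence of commanded (v, omega) pairs
    weights  : (num_centers, 2) RBF weights mapping a command to a correction
               of the effective (v, omega); in the paper such parameters are
               calibrated online by the VIO system, here they are just given.
    """
    x, y, yaw = pose
    for u in controls:
        # RBF correction models delay/deviation between command and actual motion
        corr = rbf_features(np.asarray(u, dtype=float), centers, gamma) @ weights
        v_eff, w_eff = u[0] + corr[0], u[1] + corr[1]
        # standard unicycle forward integration
        x += v_eff * np.cos(yaw) * dt
        y += v_eff * np.sin(yaw) * dt
        yaw += w_eff * dt
    return np.array([x, y, yaw])

# usage: predict 1 s of motion from a constant command (illustrative values)
centers = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5]])   # example kernel centers
weights = np.zeros((len(centers), 2))                        # uncalibrated: no correction
print(predict_pose((0.0, 0.0, 0.0), [(0.5, 0.2)] * 20, weights, centers))
```

In the paper the correction parameters are estimated online inside the VIO optimization; the sketch takes them as fixed inputs and only illustrates how a calibrated model would serve as a forward model for motion control and planning.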
Related papers
- VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation [79.00294932026266]
VidMan is a novel framework that employs a two-stage training mechanism to enhance stability and improve data utilization efficiency.
Our framework outperforms the state-of-the-art baseline model GR-1 on the CALVIN benchmark, achieving an 11.7% relative improvement, and demonstrates precision gains of over 9% on the OXE small-scale dataset.
arXiv Detail & Related papers (2024-11-14T03:13:26Z)
- Generalizable Implicit Motion Modeling for Video Frame Interpolation [51.966062283735596]
Motion is critical in flow-based Video Frame Interpolation (VFI).
We introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI.
Our GIMM can be easily integrated with existing flow-based VFI works by supplying accurately modeled motion.
arXiv Detail & Related papers (2024-07-11T17:13:15Z)
- Event-Aided Time-to-Collision Estimation for Autonomous Driving [28.13397992839372]
We present a novel method that estimates the time to collision using a neuromorphic event-based camera.
The proposed algorithm consists of a two-step approach for efficient and accurate geometric model fitting on event data.
Experiments on both synthetic and real data demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-07-10T02:37:36Z)
- Neural Implicit Swept Volume Models for Fast Collision Detection [0.0]
We present an algorithm that combines the speed of deep-learning-based signed distance computations with the strong accuracy guarantees of geometric collision checkers.
We validate our approach in simulated and real-world robotic experiments, and demonstrate that it is able to speed up a commercial bin picking application.
arXiv Detail & Related papers (2024-02-23T12:06:48Z)
- OptiState: State Estimation of Legged Robots using Gated Networks with Transformer-based Vision and Kalman Filtering [42.817893456964]
State estimation for legged robots is challenging due to their highly dynamic motion and limitations imposed by sensor accuracy.
We propose a hybrid solution that combines proprioceptive and exteroceptive information for estimating the state of the robot's trunk.
This framework not only furnishes accurate robot state estimates, but can also reduce, through learning, the nonlinear errors that arise from sensor measurements and model simplifications.
arXiv Detail & Related papers (2024-01-30T03:34:25Z)
- Online Calibration of a Single-Track Ground Vehicle Dynamics Model by Tight Fusion with Visual-Inertial Odometry [8.165828311550152]
We present ST-VIO, a novel approach that tightly fuses a single-track dynamics model for wheeled ground vehicles with visual-inertial odometry (VIO).
Our method calibrates and adapts the dynamics model online to improve the accuracy of forward prediction conditioned on future control inputs (a minimal single-track forward-prediction sketch is given after this list).
arXiv Detail & Related papers (2023-09-20T08:50:30Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulation model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- Unified Data Collection for Visual-Inertial Calibration via Deep Reinforcement Learning [24.999540933593273]
This work presents a novel formulation to learn a motion policy to be executed on a robot arm for automatic data collection.
Our approach models the calibration process compactly using model-free deep reinforcement learning.
In simulation we are able to perform calibrations 10 times faster than hand-crafted policies, which transfers to a real-world speed up of 3 times over a human expert.
arXiv Detail & Related papers (2021-09-30T10:03:56Z)
- MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints [70.76761166614511]
We present a novel self-supervised algorithm named MotionHint for monocular visual odometry (VO).
Our MotionHint algorithm can be easily applied to existing open-sourced state-of-the-art SSM-VO systems.
arXiv Detail & Related papers (2021-09-14T15:35:08Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z)
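The ST-VIO entry above tightly fuses a single-track dynamics model with VIO and calibrates it online. As a point of comparison with the velocity-control model sketched after the abstract, a minimal kinematic single-track (bicycle) forward prediction could look as follows; the state layout, the wheelbase value, and the function name are illustrative assumptions and are not taken from that paper.

```python
import numpy as np

def single_track_step(state, control, wheelbase=0.3, dt=0.05):
    """One step of a kinematic single-track (bicycle) model (illustrative only).

    state   : (x, y, yaw, v)   planar pose and forward speed
    control : (accel, steer)   longitudinal acceleration and steering angle
    """
    x, y, yaw, v = state
    accel, steer = control
    x += v * np.cos(yaw) * dt
    y += v * np.sin(yaw) * dt
    yaw += v / wheelbase * np.tan(steer) * dt   # yaw rate from steering geometry
    v += accel * dt
    return np.array([x, y, yaw, v])

# roll out a short prediction conditioned on future control inputs
state = np.array([0.0, 0.0, 0.0, 0.5])
for _ in range(20):
    state = single_track_step(state, (0.1, 0.15))
print(state)
```

Unlike the velocity-control model, which is driven directly by commanded velocities, this sketch is driven by acceleration and steering inputs; which parameterization fits better depends on the vehicle and the available control interface.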
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (or any of the information it contains) and is not responsible for any consequences of its use.