MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera
Visual-Inertial Navigation System
- URL: http://arxiv.org/abs/2006.15699v1
- Date: Sun, 28 Jun 2020 20:16:08 GMT
- Title: MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera
Visual-Inertial Navigation System
- Authors: Kevin Eckenhoff, Patrick Geneva, and Guoquan Huang
- Abstract summary: We propose a real-time consistent multi-IMU multi-camera (MIMC)-VINS estimator for visual-inertial navigation systems.
Within an efficient multi-state constraint Kalman filter (MSCKF), the proposed MIMC-VINS algorithm optimally fuses asynchronous measurements from all sensors.
The proposed MIMC-VINS is validated in both Monte-Carlo simulations and real-world experiments.
- Score: 44.76768683036822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As cameras and inertial sensors are becoming ubiquitous in mobile devices and
robots, it holds great potential to design visual-inertial navigation systems
(VINS) for efficient versatile 3D motion tracking which utilize any (multiple)
available cameras and inertial measurement units (IMUs) and are resilient to
sensor failures or measurement depletion. To this end, rather than the standard
VINS paradigm using a minimal sensing suite of a single camera and IMU, in this
paper we design a real-time consistent multi-IMU multi-camera (MIMC)-VINS
estimator that is able to seamlessly fuse multi-modal information from an
arbitrary number of uncalibrated cameras and IMUs. Within an efficient
multi-state constraint Kalman filter (MSCKF) framework, the proposed MIMC-VINS
algorithm optimally fuses asynchronous measurements from all sensors, while
providing smooth, uninterrupted, and accurate 3D motion tracking even if some
sensors fail. The key idea of the proposed MIMC-VINS is to perform high-order
on-manifold state interpolation to efficiently process all available visual
measurements without increasing the computational burden due to estimating
additional sensors' poses at asynchronous imaging times. In order to fuse the
information from multiple IMUs, we propagate a joint system consisting of all
IMU states while enforcing rigid-body constraints between the IMUs during the
filter update stage. Lastly, we estimate online both spatiotemporal extrinsic
and visual intrinsic parameters to make our system robust to errors in prior
sensor calibration. The proposed system is extensively validated in both
Monte-Carlo simulations and real-world experiments.
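Code sketch (illustrative, not the authors' implementation): the snippet below gives a minimal NumPy illustration of two of the ideas summarized in the abstract, assuming a first-order on-manifold interpolation between two cloned poses (the paper describes a higher-order on-manifold interpolation inside an MSCKF) and a simple rigid-body residual between two IMU poses given a known extrinsic transform. All function and variable names (so3_exp, interpolate_pose, rigid_body_residual, etc.) are hypothetical.

```python
# Minimal sketch of (1) on-manifold pose interpolation at an asynchronous
# image time from two cloned states and (2) a rigid-body constraint between
# two IMU poses. Names are illustrative, not taken from MIMC-VINS code.
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exponential map from so(3) to SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def so3_log(R):
    """Logarithm map from SO(3) to so(3), returned as a 3-vector."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-9:
        return np.zeros(3)
    w_hat = (R - R.T) * (theta / (2.0 * np.sin(theta)))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

def interpolate_pose(t, t0, R0, p0, t1, R1, p1):
    """First-order on-manifold interpolation of the pose at time t in [t0, t1].

    Rotation is interpolated along the geodesic between R0 and R1; position is
    interpolated linearly. A higher-order variant would fit a polynomial on
    the Lie algebra through several neighboring clones.
    """
    lam = (t - t0) / (t1 - t0)
    R_t = R0 @ so3_exp(lam * so3_log(R0.T @ R1))
    p_t = (1.0 - lam) * p0 + lam * p1
    return R_t, p_t

def rigid_body_residual(R_b0, p_b0, R_b1, p_b1, R_calib, p_calib):
    """Residual enforcing a known rigid transform {R_calib, p_calib} between
    two IMU frames b0 and b1, both expressed in a common global frame."""
    r_rot = so3_log(R_calib.T @ (R_b0.T @ R_b1))
    r_pos = R_b0.T @ (p_b1 - p_b0) - p_calib
    return np.concatenate([r_rot, r_pos])
```

In the actual filter, such a residual would be linearized and applied as an update constraint while all IMU states are jointly propagated; the sketch only illustrates the underlying geometry.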
Related papers
- Graph-Based Multi-Modal Sensor Fusion for Autonomous Driving [3.770103075126785]
We introduce a novel approach to multi-modal sensor fusion, focusing on developing a graph-based state representation.
We present a Sensor-Agnostic Graph-Aware Kalman Filter, the first online state estimation technique designed to fuse multi-modal graphs.
We validate the effectiveness of our proposed framework through extensive experiments conducted on both synthetic and real-world driving datasets.
arXiv Detail & Related papers (2024-11-06T06:58:17Z)
- Traj-LIO: A Resilient Multi-LiDAR Multi-IMU State Estimator Through Sparse Gaussian Process [20.452961476175812]
We introduce a multi-LiDAR multi-IMU state estimator that takes advantage of a sparse Gaussian Process (GP).
Our proposed approach is capable of handling different sensor configurations and resilient to sensor failures.
To contribute to the community, we will make our source code publicly available.
arXiv Detail & Related papers (2024-02-14T14:08:06Z)
- Multi-Visual-Inertial System: Analysis, Calibration and Estimation [26.658649118048032]
We study state estimation of multi-visual-inertial systems (MVIS) and develop sensor fusion algorithms.
We are interested in the full calibration of the associated visual-inertial sensors.
arXiv Detail & Related papers (2023-08-10T02:47:36Z)
- Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, the fusion for detection can be effectively performed by combining their ROI features.
arXiv Detail & Related papers (2023-05-12T18:08:51Z)
- Towards Scale-Aware, Robust, and Generalizable Unsupervised Monocular Depth Estimation by Integrating IMU Motion Dynamics [74.1720528573331]
Unsupervised monocular depth and ego-motion estimation has drawn extensive research attention in recent years.
We propose DynaDepth, a novel scale-aware framework that integrates information from vision and IMU motion dynamics.
We validate the effectiveness of DynaDepth by conducting extensive experiments and simulations on the KITTI and Make3D datasets.
arXiv Detail & Related papers (2022-07-11T07:50:22Z)
- AFT-VO: Asynchronous Fusion Transformers for Multi-View Visual Odometry Estimation [39.351088248776435]
We propose AFT-VO, a novel transformer-based sensor fusion architecture to estimate VO from multiple sensors.
Our framework combines predictions from asynchronous multi-view cameras and accounts for the time discrepancies of measurements coming from different sources.
Our experiments demonstrate that multi-view fusion for VO estimation provides robust and accurate trajectories, outperforming the state of the art in both challenging weather and lighting conditions.
arXiv Detail & Related papers (2022-06-26T19:29:08Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- siaNMS: Non-Maximum Suppression with Siamese Networks for Multi-Camera 3D Object Detection [65.03384167873564]
A siamese network is integrated into the pipeline of a well-known 3D object detector approach.
These associations are exploited to enhance the 3D box regression of the object.
The experimental evaluation on the nuScenes dataset shows that the proposed method outperforms traditional NMS approaches.
arXiv Detail & Related papers (2020-02-19T15:32:38Z)
- Learning Selective Sensor Fusion for States Estimation [47.76590539558037]
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies on both public datasets and progressively degraded datasets.
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.