OrcVIO: Object residual constrained Visual-Inertial Odometry
- URL: http://arxiv.org/abs/2007.15107v3
- Date: Sat, 29 May 2021 21:22:36 GMT
- Title: OrcVIO: Object residual constrained Visual-Inertial Odometry
- Authors: Mo Shan, Vikas Dhiman, Qiaojun Feng, Jinzhao Li and Nikolay Atanasov
- Abstract summary: This work presents OrcVIO, a visual-inertial odometry system tightly coupled with tracking and optimization over structured object models.
The ability of OrcVIO to perform accurate trajectory estimation and large-scale object-level mapping is evaluated using real data.
- Score: 18.3130718336919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Introducing object-level semantic information into a simultaneous
localization and mapping (SLAM) system is critical: it not only improves
performance but also enables tasks specified in terms of meaningful objects.
This work presents OrcVIO, a visual-inertial odometry system tightly coupled
with tracking and optimization over structured object models. OrcVIO
differentiates through semantic-feature and bounding-box reprojection errors
to perform batch optimization over the poses and shapes of objects. The
estimated object states, in turn, aid real-time incremental optimization over
the IMU-camera states. The ability of OrcVIO to perform accurate trajectory
estimation and large-scale object-level mapping is evaluated using real data.
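To make the object-level batch optimization concrete, here is a minimal single-view sketch, not the authors' implementation: it refines an object's pose and shape by least squares over stacked semantic-keypoint and bounding-box reprojection errors. The pinhole intrinsics, the linear shape model (mean keypoints plus deformation modes), the residual weighting, and all names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): batch optimization of object pose
# and shape from semantic-keypoint and bounding-box reprojection residuals.
import numpy as np
from scipy.optimize import least_squares

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed pinhole intrinsics

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * Kx + (1.0 - np.cos(theta)) * Kx @ Kx

def project(R, t, X):
    """Project object-frame points X (N,3) through pose (R, t) and K."""
    Xc = X @ R.T + t
    uv = Xc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, mean_shape, basis, kp_obs, box_obs, w_box=0.1):
    """Stacked semantic-keypoint and bounding-box reprojection errors."""
    w, t, c = params[:3], params[3:6], params[6:]
    X = mean_shape + np.einsum('b,bnd->nd', c, basis)  # deformed keypoints
    uv = project(rodrigues(w), t, X)
    r_kp = (uv - kp_obs).ravel()
    # Box residual: the projected keypoints should fill the detected box.
    box = np.array([uv[:, 0].min(), uv[:, 1].min(),
                    uv[:, 0].max(), uv[:, 1].max()])
    return np.concatenate([r_kp, w_box * (box - box_obs)])

# Toy single-view problem with synthetic ground truth.
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(8, 3))           # 8 semantic keypoints
basis = rng.normal(scale=0.2, size=(2, 8, 3))  # 2 deformation modes
x_true = np.array([0.1, -0.2, 0.05,            # rotation (axis-angle)
                   0.3, -0.1, 5.0,             # translation
                   0.5, -0.3])                 # shape coefficients
X_true = mean_shape + np.einsum('b,bnd->nd', x_true[6:], basis)
uv_true = project(rodrigues(x_true[:3]), x_true[3:6], X_true)
kp_obs = uv_true + rng.normal(scale=1.0, size=uv_true.shape)  # noisy detections
box_obs = np.array([uv_true[:, 0].min(), uv_true[:, 1].min(),
                    uv_true[:, 0].max(), uv_true[:, 1].max()])

x0 = np.concatenate([np.zeros(3), [0.0, 0.0, 4.0], np.zeros(2)])  # rough init
sol = least_squares(residuals, x0, args=(mean_shape, basis, kp_obs, box_obs))
print("estimated pose/shape:", np.round(sol.x, 3))
```

OrcVIO itself couples such residuals across multiple views with the IMU-camera states; the sketch only shows the per-object residual structure that the batch optimizer differentiates through.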
Related papers
- VOOM: Robust Visual Object Odometry and Mapping using Hierarchical Landmarks [19.789761641342043]
We propose VOOM, a Visual Object Odometry and Mapping framework.
We use high-level objects and low-level points as the hierarchical landmarks in a coarse-to-fine manner.
VOOM outperforms both object-oriented SLAM and feature-point-based SLAM systems in terms of localization accuracy.
arXiv Detail & Related papers (2024-02-21T08:22:46Z)
- Semantic Object-level Modeling for Robust Visual Camera Relocalization [14.998133272060695]
We propose a novel method of automatic object-level voxel modeling for accurate ellipsoidal representations of objects.
All of these modules are fully integrated into a visual SLAM system.
arXiv Detail & Related papers (2024-02-10T13:39:44Z)
- Estimating Material Properties of Interacting Objects Using Sum-GP-UCB [17.813871065276636]
We present a Bayesian optimization approach to identifying the material property parameters of objects based on a set of observations.
We show that our method can effectively perform incremental learning without resetting the rewards of the gathered observations.
arXiv Detail & Related papers (2023-10-18T07:16:06Z)
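As a rough illustration of the Bayesian-optimization idea in the entry above, the sketch below runs plain GP-UCB over a one-dimensional material parameter; the toy objective, kernel, and names are stand-ins rather than the paper's Sum-GP-UCB method.

```python
# Illustrative GP-UCB loop (not the paper's Sum-GP-UCB implementation).
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between 1-D arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def objective(theta):
    """Hypothetical (negated) mismatch between simulation and observations."""
    return -abs(np.sin(3.0 * theta) - 0.5)

grid = np.linspace(0.0, 2.0, 200)   # candidate parameter values
X = [0.5, 1.5]                      # initial probes
y = [objective(x) for x in X]

for _ in range(10):
    Xa, ya = np.array(X), np.array(y)
    Kxx = rbf(Xa, Xa) + 1e-4 * np.eye(len(Xa))  # jitter for stability
    Kxs = rbf(Xa, grid)
    mu = Kxs.T @ np.linalg.solve(Kxx, ya)       # GP posterior mean
    var = 1.0 - np.sum(Kxs * np.linalg.solve(Kxx, Kxs), axis=0)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
    theta_next = float(grid[np.argmax(ucb)])    # most promising parameter
    X.append(theta_next)
    y.append(objective(theta_next))

print("best parameter found:", X[int(np.argmax(y))])
```

Each iteration fits a Gaussian-process surrogate to the probes so far and queries the parameter with the highest upper confidence bound, trading off exploitation (high mean) against exploration (high variance).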
- An Object SLAM Framework for Association, Mapping, and High-Level Tasks [12.62957558651032]
We present a comprehensive object SLAM framework that focuses on object-based perception and object-oriented robot tasks.
The framework is evaluated on a range of public datasets and real-world experiments, demonstrating efficient performance.
arXiv Detail & Related papers (2023-05-12T08:10:14Z)
- 3D Video Object Detection with Learnable Object-Centric Global Optimization [65.68977894460222]
Correspondence-based optimization is the cornerstone for 3D scene reconstruction but is less studied in 3D video object detection.
We propose BA-Det, an end-to-end optimizable object detector with object-centric temporal correspondence learning and featuremetric object bundle adjustment.
arXiv Detail & Related papers (2023-03-27T17:39:39Z)
- Boosting Object Representation Learning via Motion and Object Continuity [22.512380611375846]
We propose to exploit object motion and continuity, i.e., objects do not pop in and out of existence.
The resulting Motion and Object Continuity scheme can be instantiated using any baseline object detection model.
Our results show large improvements in the performance of a SOTA model in terms of object discovery, convergence speed, and overall latent object representations.
arXiv Detail & Related papers (2022-11-16T09:36:41Z)
- Object Detection in Aerial Images with Uncertainty-Aware Graph Network [61.02591506040606]
We propose a novel uncertainty-aware object detection framework with a structured-graph, where nodes and edges are denoted by objects.
We refer to our model as the Uncertainty-Aware Graph network for object DETection (UAGDet).
arXiv Detail & Related papers (2022-08-23T07:29:03Z)
- Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z)
- Objects are Different: Flexible Monocular 3D Object Detection [87.82253067302561]
We propose a flexible framework for monocular 3D object detection which explicitly decouples the truncated objects and adaptively combines multiple approaches for object depth estimation.
Experiments demonstrate that our method outperforms the state-of-the-art method by a relative 27% at the moderate level and 30% at the hard level on the KITTI benchmark test set.
arXiv Detail & Related papers (2021-04-06T07:01:28Z)
- Progressive Self-Guided Loss for Salient Object Detection [102.35488902433896]
We present a progressive self-guided loss function to facilitate deep learning-based salient object detection in images.
Our framework takes advantage of adaptively aggregated multi-scale features to locate and detect salient objects effectively.
arXiv Detail & Related papers (2021-01-07T07:33:38Z)
- Category Level Object Pose Estimation via Neural Analysis-by-Synthesis [64.14028598360741]
In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module.
The image synthesis network is designed to efficiently span the pose configuration space.
We experimentally show that the method can recover the orientation of objects with high accuracy from 2D images alone.
arXiv Detail & Related papers (2020-08-18T20:30:47Z)
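To illustrate the analysis-by-synthesis loop in this last entry, here is a minimal sketch in which a hand-written renderer stands in for the paper's neural image synthesis module and a bounded one-dimensional search stands in for its gradient-based fitting; the scene and all names are illustrative assumptions.

```python
# Minimal analysis-by-synthesis sketch (not the paper's network): recover a
# planar orientation by rendering a synthetic image and minimizing the
# pixel-wise error against an observed image.
import numpy as np
from scipy.optimize import minimize_scalar

def render(angle, size=64):
    """Render an oriented bar as a soft grayscale image."""
    ys, xs = np.mgrid[:size, :size] - size / 2.0
    u = xs * np.cos(angle) + ys * np.sin(angle)    # rotated coordinates
    v = -xs * np.sin(angle) + ys * np.cos(angle)
    return np.exp(-(u / 20.0) ** 2 - (v / 4.0) ** 2)  # elongated Gaussian bar

observed = render(0.7)  # "observed image" at an unknown orientation

def photometric_error(angle):
    """Squared pixel-wise difference between synthesis and observation."""
    return np.sum((render(angle) - observed) ** 2)

# A bounded 1-D search stands in for the gradient-based fitting procedure.
result = minimize_scalar(photometric_error, bounds=(0.0, np.pi),
                         method='bounded')
print("recovered orientation:", result.x)  # approaches 0.7 rad
```

The recovered angle approaches the value used to render the observed image, mirroring the paper's render-and-compare fitting of pose from 2D images alone.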
This list is automatically generated from the titles and abstracts of the papers on this site.