Object Structural Points Representation for Graph-based Semantic
Monocular Localization and Mapping
- URL: http://arxiv.org/abs/2206.10263v1
- Date: Tue, 21 Jun 2022 11:32:55 GMT
- Title: Object Structural Points Representation for Graph-based Semantic
Monocular Localization and Mapping
- Authors: Davide Tateo, Davide Antonio Cucci, Matteo Matteucci, Andrea Bonarini
- Abstract summary: We propose the use of an efficient representation, based on structural points, for the geometry of objects to be used as landmarks in a monocular semantic SLAM system.
In particular, an inverse depth parametrization is proposed for the landmark nodes in the pose-graph to store object position, orientation and size/scale.
- Score: 9.61301182502447
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficient object level representation for monocular semantic simultaneous
localization and mapping (SLAM) still lacks a widely accepted solution. In this
paper, we propose the use of an efficient representation, based on structural
points, for the geometry of objects to be used as landmarks in a monocular
semantic SLAM system based on the pose-graph formulation. In particular, an
inverse depth parametrization is proposed for the landmark nodes in the
pose-graph to store object position, orientation and size/scale. The proposed
formulation is general and it can be applied to different geometries; in this
paper we focus on indoor environments where human-made artifacts commonly share
a planar rectangular shape, e.g., windows, doors, cabinets, etc. The approach
can easily be extended to urban scenarios where similar shapes exist as well.
Experiments in simulation show good performance, particularly in object
geometry reconstruction.
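To make the parametrization concrete, the sketch below reconstructs the four corner "structural points" of a planar rectangular landmark (e.g., a door or window) from an anchor camera pose, a bearing direction, an inverse depth, an object orientation, and a width/height. This is a minimal illustration only: the exact parameter layout, angle conventions, and function names are assumptions for the sake of the example, not the paper's formulation.

```python
# Hedged sketch: one possible inverse-depth parametrization of a planar
# rectangular landmark, recovering its four corner structural points.
# Parameter names and conventions are illustrative assumptions.
import numpy as np


def rotation_from_yaw_pitch_roll(yaw, pitch, roll):
    """ZYX Euler angles to a rotation matrix (object orientation in the world frame)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx


def structural_points(anchor_t, anchor_R, azimuth, elevation, inv_depth,
                      yaw, pitch, roll, width, height):
    """Return the 4 corner points (4x3 array) of a planar rectangular object.

    anchor_t, anchor_R : position / rotation of the anchor camera pose
    azimuth, elevation : bearing of the object centre as seen from the anchor
    inv_depth          : inverse of the distance from the anchor to the object centre
    yaw, pitch, roll   : object orientation in the world frame
    width, height      : metric size of the rectangle
    """
    # Bearing vector in the anchor frame, scaled by depth = 1 / inv_depth.
    bearing = np.array([np.cos(elevation) * np.sin(azimuth),
                        -np.sin(elevation),
                        np.cos(elevation) * np.cos(azimuth)])
    centre = anchor_t + anchor_R @ (bearing / inv_depth)

    # Corners of the rectangle expressed in the object frame (on its plane z = 0),
    # then rotated and translated into the world frame.
    R_obj = rotation_from_yaw_pitch_roll(yaw, pitch, roll)
    half_w, half_h = 0.5 * width, 0.5 * height
    corners_obj = np.array([[-half_w, -half_h, 0.0],
                            [ half_w, -half_h, 0.0],
                            [ half_w,  half_h, 0.0],
                            [-half_w,  half_h, 0.0]])
    return centre + corners_obj @ R_obj.T


# Example: an anchor at the origin observing a 1.2 x 2.0 m door 4 m ahead.
corners = structural_points(anchor_t=np.zeros(3), anchor_R=np.eye(3),
                            azimuth=0.0, elevation=0.0, inv_depth=0.25,
                            yaw=0.0, pitch=0.0, roll=0.0,
                            width=1.2, height=2.0)
print(corners)
```

In a pose-graph setting, a compact state like this (bearing, inverse depth, orientation, size) would be what the landmark node stores, while the derived corner points are what get projected into the image for the measurement residuals; the hypothetical helper above only shows the geometry side of that idea.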
Related papers
- GIC: Gaussian-Informed Continuum for Physical Property Identification and Simulation [60.33467489955188]
This paper studies the problem of estimating physical properties (system identification) through visual observations.
To facilitate geometry-aware guidance in physical property estimation, we introduce a novel hybrid framework.
We propose a new dynamic 3D Gaussian framework based on motion factorization to recover the object as 3D Gaussian point sets.
In addition to the extracted object surfaces, the Gaussian-informed continuum also enables the rendering of object masks during simulations.
arXiv Detail & Related papers (2024-06-21T07:37:17Z)
- VOOM: Robust Visual Object Odometry and Mapping using Hierarchical Landmarks [19.789761641342043]
We propose a Visual Object Odometry and Mapping framework VOOM.
We use high-level objects and low-level points as the hierarchical landmarks in a coarse-to-fine manner.
VOOM outperforms both object-oriented SLAM and feature points SLAM systems in terms of localization.
arXiv Detail & Related papers (2024-02-21T08:22:46Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Generative Category-Level Shape and Pose Estimation with Semantic Primitives [27.692997522812615]
We propose a novel framework for category-level object shape and pose estimation from a single RGB-D image.
To handle the intra-category variation, we adopt a semantic primitive representation that encodes diverse shapes into a unified latent space.
We show that the proposed method achieves state-of-the-art pose estimation performance and better generalization on the real-world dataset.
arXiv Detail & Related papers (2022-10-03T17:51:54Z)
- Learning to Complete Object Shapes for Object-level Mapping in Dynamic Scenes [30.500198859451434]
We propose a novel object-level mapping system that can simultaneously segment, track, and reconstruct objects in dynamic scenes.
It can further predict and complete their full geometries by conditioning on reconstructions from depth inputs and a category-level shape prior.
We evaluate its effectiveness by quantitatively and qualitatively testing it in both synthetic and real-world sequences.
arXiv Detail & Related papers (2022-08-09T22:56:33Z)
- Object-Augmented RGB-D SLAM for Wide-Disparity Relocalisation [3.888848425698769]
We propose a novel object-augmented RGB-D SLAM system that is capable of constructing a consistent object map and performing relocalisation based on centroids of objects in the map.
arXiv Detail & Related papers (2021-08-05T11:02:25Z)
- ELLIPSDF: Joint Object Pose and Shape Optimization with a Bi-level Ellipsoid and Signed Distance Function Description [9.734266860544663]
This paper proposes an expressive yet compact model for joint object pose and shape optimization.
It infers an object-level map from multi-view RGB-D camera observations.
Our approach is evaluated on the large-scale real-world ScanNet dataset and compared against state-of-the-art methods.
arXiv Detail & Related papers (2021-08-01T03:07:31Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Continuous Surface Embeddings [76.86259029442624]
We focus on the task of learning and representing dense correspondences in deformable object categories.
We propose a new, learnable image-based representation of dense correspondences.
We demonstrate that the proposed approach performs on par or better than the state-of-the-art methods for dense pose estimation for humans.
arXiv Detail & Related papers (2020-11-24T22:52:15Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- Category Level Object Pose Estimation via Neural Analysis-by-Synthesis [64.14028598360741]
In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module.
The image synthesis network is designed to efficiently span the pose configuration space.
We experimentally show that the method can recover orientation of objects with high accuracy from 2D images alone.
arXiv Detail & Related papers (2020-08-18T20:30:47Z)