ELLIPSDF: Joint Object Pose and Shape Optimization with a Bi-level
Ellipsoid and Signed Distance Function Description
- URL: http://arxiv.org/abs/2108.00355v1
- Date: Sun, 1 Aug 2021 03:07:31 GMT
- Title: ELLIPSDF: Joint Object Pose and Shape Optimization with a Bi-level
Ellipsoid and Signed Distance Function Description
- Authors: Mo Shan, Qiaojun Feng, You-Yi Jau, Nikolay Atanasov
- Abstract summary: This paper proposes an expressive yet compact model for joint object pose and shape optimization.
It infers an object-level map from multi-view RGB-D camera observations.
Our approach is evaluated on the large-scale real-world ScanNet dataset and compared against state-of-the-art methods.
- Score: 9.734266860544663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous systems need to understand the semantics and geometry of their
surroundings in order to comprehend and safely execute object-level task
specifications. This paper proposes an expressive yet compact model for joint
object pose and shape optimization, and an associated optimization algorithm to
infer an object-level map from multi-view RGB-D camera observations. The model
is expressive because it captures the identities, positions, orientations, and
shapes of objects in the environment. It is compact because it relies on a
low-dimensional latent representation of implicit object shape, allowing
onboard storage of large multi-category object maps. Different from other works
that rely on a single object representation format, our approach has a bi-level
object model that captures both the coarse level scale as well as the fine
level shape details. Our approach is evaluated on the large-scale real-world
ScanNet dataset and compared against state-of-the-art methods.
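The coarse level of the bi-level model describes an object's scale with an ellipsoid. As a minimal sketch of what such a coarse-level geometry looks like, the snippet below evaluates an approximate signed distance to an axis-aligned ellipsoid (using Inigo Quilez's well-known approximation, not the authors' exact formulation; the fine level would add a learned latent SDF residual on top):

```python
import numpy as np

def ellipsoid_sdf(p, radii):
    """Approximate signed distance from point p to an axis-aligned ellipsoid
    with semi-axes `radii`, centered at the origin.
    Negative inside, positive outside, ~0 on the surface."""
    p = np.asarray(p, dtype=float)
    r = np.asarray(radii, dtype=float)
    k0 = np.linalg.norm(p / r)        # p scaled into the unit-sphere frame
    k1 = np.linalg.norm(p / (r * r))  # gradient-magnitude correction term
    if k1 == 0.0:  # query at the center: distance is minus the smallest semi-axis
        return -float(r.min())
    return k0 * (k0 - 1.0) / k1
```

For a sphere (equal radii) this reduces to the exact signed distance; for elongated ellipsoids it is a close approximation, which is sufficient for a coarse scale level.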
Related papers
- VOOM: Robust Visual Object Odometry and Mapping using Hierarchical
Landmarks [19.789761641342043]
We propose a Visual Object Odometry and Mapping framework VOOM.
We use high-level objects and low-level points as the hierarchical landmarks in a coarse-to-fine manner.
VOOM outperforms both object-oriented SLAM and feature points SLAM systems in terms of localization.
arXiv Detail & Related papers (2024-02-21T08:22:46Z)
- FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects [55.77542145604758]
FoundationPose is a unified foundation model for 6D object pose estimation and tracking.
Our approach can be instantly applied at test-time to a novel object without fine-tuning.
arXiv Detail & Related papers (2023-12-13T18:28:09Z)
- An Object SLAM Framework for Association, Mapping, and High-Level Tasks [12.62957558651032]
We present a comprehensive object SLAM framework that focuses on object-based perception and object-oriented robot tasks.
A range of public datasets and real-world results have been used to evaluate the proposed object SLAM framework for its efficient performance.
arXiv Detail & Related papers (2023-05-12T08:10:14Z)
- Loop Closure Detection Based on Object-level Spatial Layout and Semantic Consistency [14.694754836704819]
We present an object-level loop closure detection method based on the spatial layout and semantic consistency of the 3D scene graph.
Experimental results demonstrate that our proposed data association approach can construct more accurate 3D semantic maps.
arXiv Detail & Related papers (2023-04-11T11:20:51Z)
- Category-level Shape Estimation for Densely Cluttered Objects [94.64287790278887]
We propose a category-level shape estimation method for densely cluttered objects.
Our framework partitions each object in the clutter via the multi-view visual information fusion.
Experiments in the simulated environment and real world show that our method achieves high shape estimation accuracy.
arXiv Detail & Related papers (2023-02-23T13:00:17Z)
- MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
We present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects.
Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
arXiv Detail & Related papers (2022-12-13T19:30:03Z)
- Generative Category-Level Shape and Pose Estimation with Semantic Primitives [27.692997522812615]
We propose a novel framework for category-level object shape and pose estimation from a single RGB-D image.
To handle the intra-category variation, we adopt a semantic primitive representation that encodes diverse shapes into a unified latent space.
We show that the proposed method achieves SOTA pose estimation performance and better generalization in the real-world dataset.
arXiv Detail & Related papers (2022-10-03T17:51:54Z)
- Object Structural Points Representation for Graph-based Semantic Monocular Localization and Mapping [9.61301182502447]
We propose the use of an efficient representation, based on structural points, for the geometry of objects to be used as landmarks in a monocular semantic SLAM system.
In particular, an inverse depth parametrization is proposed for the landmark nodes in the pose-graph to store object position, orientation and size/scale.
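The inverse depth parametrization mentioned above stores a landmark as a pixel observation plus the reciprocal of its depth, which behaves better numerically for distant points. A minimal sketch of recovering the 3D point, assuming a standard pinhole camera with intrinsics `K` (the function name is illustrative, not from the paper):

```python
import numpy as np

def backproject_inverse_depth(K, uv, rho):
    """Recover a 3D point in the camera frame from pixel (u, v)
    and inverse depth rho = 1/z."""
    u, v = uv
    fx, fy = K[0, 0], K[1, 1]  # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]  # principal point
    z = 1.0 / rho              # depth along the optical axis
    return np.array([(u - cx) / fx * z, (v - cy) / fy * z, z])
```

As rho approaches zero the point recedes to infinity smoothly, which is why pose-graph systems often prefer this parametrization for monocular landmarks.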
arXiv Detail & Related papers (2022-06-21T11:32:55Z)
- Continuous Surface Embeddings [76.86259029442624]
We focus on the task of learning and representing dense correspondences in deformable object categories.
We propose a new, learnable image-based representation of dense correspondences.
We demonstrate that the proposed approach performs on par or better than the state-of-the-art methods for dense pose estimation for humans.
arXiv Detail & Related papers (2020-11-24T22:52:15Z)
- Category Level Object Pose Estimation via Neural Analysis-by-Synthesis [64.14028598360741]
In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module.
The image synthesis network is designed to efficiently span the pose configuration space.
We experimentally show that the method can recover orientation of objects with high accuracy from 2D images alone.
arXiv Detail & Related papers (2020-08-18T20:30:47Z)
- Improving Semantic Segmentation via Decoupled Body and Edge Supervision [89.57847958016981]
Existing semantic segmentation approaches either aim to improve the object's inner consistency by modeling the global context, or refine object details along their boundaries by multi-scale feature fusion.
In this paper, a new paradigm for semantic segmentation is proposed.
Our insight is that strong semantic segmentation performance requires explicitly modeling the object body and edge, which correspond to the low- and high-frequency components of the image.
We show that the proposed framework with various baselines or backbone networks leads to better object inner consistency and object boundaries.
arXiv Detail & Related papers (2020-07-20T12:11:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.