Level Set-Based Camera Pose Estimation From Multiple 2D/3D
Ellipse-Ellipsoid Correspondences
- URL: http://arxiv.org/abs/2207.07953v1
- Date: Sat, 16 Jul 2022 14:09:54 GMT
- Title: Level Set-Based Camera Pose Estimation From Multiple 2D/3D
Ellipse-Ellipsoid Correspondences
- Authors: Matthieu Zins, Gilles Simon, Marie-Odile Berger
- Abstract summary: We show that the definition of a cost function characterizing the projection of a 3D object onto a 2D object detection is not straightforward.
We develop an ellipse-ellipse cost based on level-set sampling, demonstrate its favorable properties for handling partially visible objects, and compare its performance with other common metrics.
- Score: 2.016317500787292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an object-based camera pose estimation from a
single RGB image and a pre-built map of objects, represented with ellipsoidal
models. We show that, contrary to the point-correspondence case, defining a
cost function that characterizes the projection of a 3D object onto a 2D object
detection is not straightforward. We develop an ellipse-ellipse cost based on
level-set sampling, demonstrate its favorable properties for handling partially
visible objects, and compare its performance with other common metrics. Finally,
we show that using a predictive uncertainty on the detected ellipses allows a
fair weighting of each correspondence's contribution, which improves the
computed pose. The code is released at
https://gitlab.inria.fr/tangram/level-set-based-camera-pose-estimation.
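As a rough illustration of the geometry the abstract describes (not the authors' released implementation), the sketch below projects an ellipsoidal model to its image ellipse via the standard dual-quadric relation C* = P Q* P^T, recovers the ellipse center and shape matrix, and compares two ellipses with a cost built by sampling their concentric level sets. All function names, the sampled level values, and the symmetrized squared-difference cost are illustrative assumptions.

```python
import numpy as np

def ellipsoid_dual_quadric(center, axes):
    """4x4 dual quadric Q* of an axis-aligned ellipsoid (illustrative model)."""
    Q = np.diag([axes[0]**2, axes[1]**2, axes[2]**2, -1.0])
    T = np.eye(4)
    T[:3, 3] = center                      # dual quadrics transform as T Q* T^T
    return T @ Q @ T.T

def project_to_dual_conic(Qstar, P):
    """Project a dual quadric with a 3x4 camera matrix P: C* = P Q* P^T."""
    Cstar = P @ Qstar @ P.T
    return Cstar / np.linalg.norm(Cstar)   # conics are defined up to scale

def conic_to_ellipse(Cstar):
    """Recover the ellipse center and 2x2 shape matrix M from a dual conic,
    so that boundary points satisfy (x - center)^T M (x - center) = 1."""
    C = np.linalg.inv(Cstar)               # primal conic, up to scale
    A2, b, c = C[:2, :2], C[:2, 2], C[2, 2]
    if np.trace(A2) < 0:                   # fix the conic's arbitrary sign
        A2, b, c = -A2, -b, -c
    center = -np.linalg.solve(A2, b)
    k = b @ np.linalg.solve(A2, b) - c     # (x-center)^T A2 (x-center) = k
    return center, A2 / k

def level_set_cost(e1, e2, n_angles=32, levels=(0.5, 1.0, 1.5)):
    """Compare two ellipses (center, M) by sampling points on concentric level
    sets of each and measuring how far the other ellipse's level function is
    from the sampled level. A hedged sketch, not the paper's exact cost."""
    def level(e, x):                       # level value: 1 on the boundary
        c, M = e
        d = x - c
        return np.sqrt(np.einsum('ni,ij,nj->n', d, M, d))
    def samples(e):
        c, M = e
        t = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
        circle = np.stack([np.cos(t), np.sin(t)], axis=1)
        L = np.linalg.cholesky(np.linalg.inv(M))  # maps unit circle to ellipse
        pts = np.concatenate([c + s * circle @ L.T for s in levels])
        return pts, np.repeat(levels, n_angles)
    cost = 0.0
    for a, b in ((e1, e2), (e2, e1)):      # symmetrize over the two ellipses
        pts, lv = samples(a)
        cost += np.mean((level(b, pts) - lv) ** 2)
    return 0.5 * cost
```

For a unit sphere centered 5 units in front of an identity camera, the recovered image ellipse is a circle of radius 1/sqrt(24) (the tangent-cone silhouette), and the cost between an ellipse and itself is zero, which is the property a pose optimizer would minimize over multiple correspondences.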
Related papers
- LocaliseBot: Multi-view 3D object localisation with differentiable rendering for robot grasping [9.690844449175948]
We focus on object pose estimation.
Our approach relies on three pieces of information: multiple views of the object, the camera's parameters at those viewpoints, and 3D CAD models of objects.
We show that the estimated object pose results in 99.65% grasp accuracy with the ground truth grasp candidates.
arXiv Detail & Related papers (2023-11-14T14:27:53Z)
- CheckerPose: Progressive Dense Keypoint Localization for Object Pose Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z)
- Occupancy Planes for Single-view RGB-D Human Reconstruction [120.5818162569105]
Single-view RGB-D human reconstruction with implicit functions is often formulated as per-point classification.
We propose the occupancy planes (OPlanes) representation, which enables formulating single-view RGB-D human reconstruction as occupancy prediction on planes that slice through the camera's view frustum.
arXiv Detail & Related papers (2022-08-04T17:59:56Z)
- Neural Correspondence Field for Object Pose Estimation [67.96767010122633]
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image.
Unlike classical correspondence-based methods which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum.
arXiv Detail & Related papers (2022-07-30T01:48:23Z)
- What's in your hands? 3D Reconstruction of Generic Objects in Hands [49.12461675219253]
Our work aims to reconstruct hand-held objects given a single RGB image.
In contrast to prior works that typically assume known 3D templates and reduce the problem to 3D pose estimation, our work reconstructs generic hand-held objects without knowing their 3D templates.
arXiv Detail & Related papers (2022-04-14T17:59:02Z)
- Object-Based Visual Camera Pose Estimation From Ellipsoidal Model and 3D-Aware Ellipse Prediction [2.016317500787292]
We propose a method for initial camera pose estimation from just a single image.
It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions.
Experiments show that our method significantly increases the accuracy of the computed pose.
arXiv Detail & Related papers (2022-03-09T10:00:52Z)
- Stochastic Modeling for Learnable Human Pose Triangulation [0.7646713951724009]
We propose a modeling framework for 3D human pose triangulation and evaluate its performance across different datasets and spatial camera arrangements.
The proposed pose triangulation model successfully generalizes to different camera arrangements and between two public datasets.
arXiv Detail & Related papers (2021-10-01T09:26:25Z)
- Category-Level Metric Scale Object Shape and Pose Estimation [73.92460712829188]
We propose a framework that jointly estimates a metric scale shape and pose from a single RGB image.
We validated our method on both synthetic and real-world datasets to evaluate category-level object pose and shape.
arXiv Detail & Related papers (2021-09-01T12:16:46Z)
- 3D-Aware Ellipse Prediction for Object-Based Camera Pose Estimation [3.103806775802078]
We propose a method for coarse camera pose computation which is robust to viewing conditions.
It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions.
arXiv Detail & Related papers (2021-05-24T18:40:18Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.