Point Anywhere: Directed Object Estimation from Omnidirectional Images
- URL: http://arxiv.org/abs/2308.01010v1
- Date: Wed, 2 Aug 2023 08:32:43 GMT
- Title: Point Anywhere: Directed Object Estimation from Omnidirectional Images
- Authors: Nanami Kotani and Asako Kanezaki
- Abstract summary: We propose a method using an omnidirectional camera to eliminate the user/object position constraint and the left/right constraint of the pointing arm.
The proposed method enables highly accurate estimation by repeatedly extracting regions of interest from the equirectangular image.
- Score: 10.152838128195468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the intuitive instruction methods in robot navigation is a pointing
gesture. In this study, we propose a method using an omnidirectional camera to
eliminate the user/object position constraint and the left/right constraint of
the pointing arm. Although the accuracy of skeleton and object detection is low
due to the high distortion of equirectangular images, the proposed method
enables highly accurate estimation by repeatedly extracting regions of interest
from the equirectangular image and projecting them onto perspective images.
Furthermore, we found that training a machine learning model to predict the
likelihood of the target object further improves the estimation accuracy.
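The repeated region-of-interest extraction relies on re-projecting part of the equirectangular image onto an undistorted perspective view. The paper does not give code, so the following is a minimal sketch of that standard projection, assuming a pinhole camera model, a principal point at the image centre, and nearest-neighbour sampling; the function name and parameters are illustrative:

```python
import numpy as np

def equirect_to_perspective(equi, yaw, pitch, fov_deg, out_hw):
    """Project a region of an equirectangular image onto a perspective image.

    equi:       (H, W, C) equirectangular image, longitude spanning [-pi, pi).
    yaw, pitch: viewing direction of the virtual camera, in radians.
    fov_deg:    horizontal field of view of the virtual pinhole camera.
    out_hw:     (h, w) size of the output perspective image.
    """
    H, W = equi.shape[:2]
    h, w = out_hw
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels

    # Pixel grid on the virtual image plane (camera looks along +z).
    x = np.arange(w) - (w - 1) / 2
    y = np.arange(h) - (h - 1) / 2
    xx, yy = np.meshgrid(x, y)
    dirs = np.stack([xx, yy, np.full_like(xx, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the rays by pitch (about x) then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T

    # Ray direction -> spherical angles -> equirectangular pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])   # longitude in [-pi, pi)
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))  # latitude in [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return equi[v, u]  # nearest-neighbour sampling
```

Running skeleton and object detectors on such perspective crops, rather than on the raw equirectangular image, is what sidesteps the distortion problem the abstract mentions.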
Related papers
- Visually Guided Object Grasping [19.71383212064634]
We show how to represent a grasp, or more generally an alignment between two solids, in 3-D projective space using an uncalibrated stereo rig.
We analyze the performance of the visual servoing algorithm and the grasping precision that can be expected from this type of approach.
arXiv Detail & Related papers (2023-11-21T15:08:17Z)
- LocaliseBot: Multi-view 3D object localisation with differentiable rendering for robot grasping [9.690844449175948]
We focus on object pose estimation.
Our approach relies on three pieces of information: multiple views of the object, the camera's parameters at those viewpoints, and 3D CAD models of objects.
We show that the estimated object pose results in 99.65% grasp accuracy with the ground truth grasp candidates.
arXiv Detail & Related papers (2023-11-14T14:27:53Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
arXiv Detail & Related papers (2023-07-20T15:21:28Z)
- Rigidity-Aware Detection for 6D Object Pose Estimation [60.88857851869196]
Most recent 6D object pose estimation methods first use object detection to obtain 2D bounding boxes before actually regressing the pose.
We propose a rigidity-aware detection method exploiting the fact that, in 6D pose estimation, the target objects are rigid.
Key to the success of our approach is a visibility map, which we propose to build using a minimum barrier distance between every pixel in the bounding box and the box boundary.
arXiv Detail & Related papers (2023-03-22T09:02:54Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Object-Based Visual Camera Pose Estimation From Ellipsoidal Model and 3D-Aware Ellipse Prediction [2.016317500787292]
We propose a method for initial camera pose estimation from just a single image.
It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions.
Experiments prove that the accuracy of the computed pose significantly increases thanks to our method.
arXiv Detail & Related papers (2022-03-09T10:00:52Z)
- 3D-Aware Ellipse Prediction for Object-Based Camera Pose Estimation [3.103806775802078]
We propose a method for coarse camera pose computation which is robust to viewing conditions.
It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions.
arXiv Detail & Related papers (2021-05-24T18:40:18Z)
- Spatial Attention Improves Iterative 6D Object Pose Estimation [52.365075652976735]
We propose a new method for 6D pose estimation refinement from RGB images.
Our main insight is that after the initial pose estimate, it is important to pay attention to distinct spatial features of the object.
We experimentally show that this approach learns to attend to salient spatial features and learns to ignore occluded parts of the object, leading to better pose estimation across datasets.
arXiv Detail & Related papers (2021-01-05T17:18:52Z)
- Tiny-YOLO object detection supplemented with geometrical data [0.0]
We propose a method of improving detection precision (mAP) with the help of the prior knowledge about the scene geometry.
We focus our attention on autonomous robots, so given the robot's dimensions and the inclination angles of the camera, it is possible to predict the spatial scale for each pixel of the input frame.
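The geometric prior this summary describes can be illustrated with flat-ground pinhole geometry: knowing the camera's height and downward tilt fixes how many metres of ground each image row spans. This sketch is not the paper's implementation; the function name and the flat-ground assumption are my own:

```python
import numpy as np

def ground_scale_per_row(cam_height_m, pitch_down_rad, f_px, img_h):
    """Approximate metres-per-pixel on a flat ground plane for each image row.

    Assumes a pinhole camera mounted cam_height_m above the ground, tilted
    pitch_down_rad below the horizon, with the principal point at the image
    centre. Rows whose ray never hits the ground get scale = inf.
    """
    rows = np.arange(img_h)
    y = rows - (img_h - 1) / 2                    # pixel offset from principal point
    alpha = pitch_down_rad + np.arctan(y / f_px)  # ray angle below horizontal
    hits_ground = alpha > 0
    safe_alpha = np.where(hits_ground, alpha, 1.0)  # avoid tan() of invalid rows
    dist = np.where(hits_ground, cam_height_m / np.tan(safe_alpha), np.inf)
    return dist / f_px  # horizontal metres per pixel at that row
```

Such a per-row scale lets a detector reject boxes whose pixel size is implausible for an object of known physical size at that location.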
arXiv Detail & Related papers (2020-08-05T14:45:19Z)
- Robust 6D Object Pose Estimation by Learning RGB-D Features [59.580366107770764]
We propose a novel discrete-continuous formulation for rotation regression to resolve the local-optimum problem that arises in direct rotation regression.
We uniformly sample rotation anchors in SO(3), and predict a constrained deviation from each anchor to the target, as well as uncertainty scores for selecting the best prediction.
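The anchor-based scheme can be sketched as follows, assuming unit quaternions in (x, y, z, w) order, Shoemake's subgroup algorithm for uniform SO(3) sampling, and network outputs stubbed out as arrays; this is a schematic illustration, not the paper's network:

```python
import numpy as np

def sample_rotation_anchors(n, seed=0):
    """Uniformly sample n rotations in SO(3) as unit quaternions (x, y, z, w),
    using Shoemake's subgroup algorithm."""
    rng = np.random.default_rng(seed)
    u1, u2, u3 = rng.random(n), rng.random(n), rng.random(n)
    return np.stack([
        np.sqrt(1 - u1) * np.sin(2 * np.pi * u2),
        np.sqrt(1 - u1) * np.cos(2 * np.pi * u2),
        np.sqrt(u1) * np.sin(2 * np.pi * u3),
        np.sqrt(u1) * np.cos(2 * np.pi * u3),
    ], axis=-1)  # shape (n, 4), unit norm

def quat_mul(a, b):
    """Hamilton product of two quaternions in (x, y, z, w) order."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return np.array([
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    ])

def select_rotation(anchors, deviations, scores):
    """Pick the anchor with the highest uncertainty/confidence score and
    apply its predicted deviation quaternion: q = dq_i * q_i."""
    i = int(np.argmax(scores))
    q = quat_mul(deviations[i], anchors[i])
    return q / np.linalg.norm(q)
```

Because each anchor only has to explain a small neighbourhood of SO(3), the per-anchor deviation regression avoids the local optima that plague a single global rotation regressor.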
Experiments on two benchmarks: LINEMOD and YCB-Video, show that the proposed method outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2020-02-29T06:24:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.