Towards Confidence-guided Shape Completion for Robotic Applications
- URL: http://arxiv.org/abs/2209.04300v1
- Date: Fri, 9 Sep 2022 13:48:24 GMT
- Title: Towards Confidence-guided Shape Completion for Robotic Applications
- Authors: Andrea Rosasco, Stefano Berti, Fabrizio Bottarel, Michele
Colledanchise and Lorenzo Natale
- Abstract summary: Deep learning has been gaining traction as an effective means of inferring a complete 3D object representation from partial visual data.
We propose an object shape completion method based on an implicit 3D representation providing a confidence value for each reconstructed point.
We experimentally validate our approach by comparing reconstructed shapes with ground truths, and by deploying our shape completion algorithm in a robotic grasping pipeline.
- Score: 6.940242990198
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many robotic tasks involving some form of 3D visual perception greatly
benefit from a complete knowledge of the working environment. However, robots
often have to tackle unstructured environments and their onboard visual sensors
can only provide incomplete information due to limited workspaces, clutter or
object self-occlusion. In recent years, deep learning architectures for shape
completion have been gaining traction as an effective means of inferring a
complete 3D object representation from partial visual data. Nevertheless, most
of the existing state-of-the-art approaches provide a fixed output resolution
in the form of voxel grids, strictly related to the size of the neural network
output stage. While this is enough for some tasks, e.g. obstacle avoidance in
navigation, grasping and manipulation require finer resolutions and simply
scaling up the neural network outputs is computationally expensive. In this
paper, we address this limitation by proposing an object shape completion
method based on an implicit 3D representation providing a confidence value for
each reconstructed point. As a second contribution, we propose a gradient-based
method for efficiently sampling such an implicit function at an arbitrary
resolution, tunable at inference time. We experimentally validate our approach
by comparing reconstructed shapes with ground truths, and by deploying our
shape completion algorithm in a robotic grasping pipeline. In both cases, we
compare results with a state-of-the-art shape completion approach.
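To illustrate the gradient-based sampling idea, here is a minimal NumPy sketch: a toy confidence function stands in for the paper's learned implicit network, and query points initialized near the object ascend the confidence gradient until they settle on the high-confidence surface. The Gaussian-bump `confidence` function, the finite-difference gradient, and all step sizes are illustrative assumptions, not the authors' implementation; the number of sampled points plays the role of the resolution tunable at inference time.

```python
import numpy as np

def confidence(p):
    # Toy implicit function (assumption, not the paper's network):
    # confidence peaks on the surface of a unit sphere.
    d = np.linalg.norm(p, axis=-1) - 1.0
    return np.exp(-(d ** 2) / 0.2)

def grad_confidence(p, eps=1e-4):
    # Finite-difference gradient of the confidence w.r.t. point
    # coordinates; with a neural implicit function this would be
    # computed by automatic differentiation instead.
    g = np.zeros_like(p)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        g[..., i] = (confidence(p + dp) - confidence(p - dp)) / (2 * eps)
    return g

def sample_surface(n_points, n_steps=60, lr=0.05, seed=0):
    # Gradient ascent: points scattered near the object converge onto
    # the high-confidence region. n_points sets the output resolution
    # and can be chosen freely at inference time.
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_points, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    p = dirs * rng.uniform(0.7, 1.3, size=(n_points, 1))
    for _ in range(n_steps):
        p += lr * grad_confidence(p)
    return p

pts = sample_surface(256)
```

Because the sampler queries a continuous function rather than filling a fixed voxel grid, doubling the resolution just means doubling `n_points`, with cost linear in the number of samples.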
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
We propose a synthetic target representation for robust 3D tracking, composed of dense and complete point clouds that depict the target shape precisely via shape completion.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- PaintNet: Unstructured Multi-Path Learning from 3D Point Clouds for Robotic Spray Painting [13.182797149468204]
Industrial robotic problems such as spray painting and welding require planning of multiple trajectories to solve the task.
Existing solutions make strong assumptions on the form of input surfaces and the nature of output paths.
By leveraging recent advances in 3D deep learning, we introduce a novel framework capable of dealing with arbitrary 3D surfaces.
arXiv Detail & Related papers (2022-11-13T15:41:50Z)
- Uncertainty Guided Policy for Active Robotic 3D Reconstruction using Neural Radiance Fields [82.21033337949757]
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance fields-based representations.
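The ray-based entropy idea can be sketched in a few lines of NumPy: given per-sample densities along a ray, compute the standard volume-rendering weights, normalize them into a distribution, and take its Shannon entropy. The weight formula follows the usual NeRF compositing; the helper name and the toy inputs are assumptions for illustration.

```python
import numpy as np

def ray_weight_entropy(sigmas, deltas):
    # Standard volume-rendering weights along one ray:
    #   w_i = T_i * (1 - exp(-sigma_i * delta_i)), with transmittance
    #   T_i = exp(-sum_{j<i} sigma_j * delta_j).
    alphas = 1.0 - np.exp(-sigmas * deltas)
    accum = np.concatenate([[0.0], np.cumsum(sigmas[:-1] * deltas[:-1])])
    weights = np.exp(-accum) * alphas
    # Normalize into a probability distribution and take its Shannon
    # entropy: a single sharp peak (a confident surface hit) yields low
    # entropy, while a diffuse distribution yields high entropy.
    p = weights / (weights.sum() + 1e-12)
    return float(-np.sum(p * np.log(p + 1e-12)))

deltas = np.full(64, 0.1)
peaked = np.zeros(64)
peaked[20] = 80.0              # one confident surface hit along the ray
diffuse = np.full(64, 1.0)     # density smeared along the whole ray
```

A next-best-view policy can then favor views whose rays have high entropy, since those are the regions where the implicit geometry is least certain.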
arXiv Detail & Related papers (2022-09-17T21:28:57Z)
- Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
However, they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in the 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, changing only one 3D parameter in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
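As a toy illustration of the per-parameter refinement loop, the sketch below uses a greedy stand-in for the learned RL policy: each iteration tries a fixed-size move on every box parameter and keeps the single move that most reduces the error. The error-to-target "reward", the step size, and the function name are illustrative assumptions; the actual method learns the policy without access to the ground truth at inference time.

```python
import numpy as np

def refine_box(pred, target, step=0.1, n_iters=100):
    # Greedy stand-in for the learned policy: per iteration, change
    # exactly one box parameter by +/-step, keeping the move that most
    # reduces the squared error to the target (the "reward" signal).
    pred = np.asarray(pred, dtype=float).copy()
    for _ in range(n_iters):
        best_move = None
        best_err = np.sum((pred - target) ** 2)
        for i in range(len(pred)):
            for s in (step, -step):
                cand = pred.copy()
                cand[i] += s
                err = np.sum((cand - target) ** 2)
                if err < best_err:
                    best_move, best_err = (i, s), err
        if best_move is None:
            break  # no single-parameter move improves the reward
        pred[best_move[0]] += best_move[1]
    return pred
```

Restricting each step to a single parameter keeps the action space small, which is what makes the reinforcement-learning formulation tractable.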
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- Extending DeepSDF for automatic 3D shape retrieval and similarity transform estimation [3.8213230386700614]
Recent advances in computer graphics and computer vision have seen successful applications of deep neural network models to 3D shapes.
We present a formulation to overcome this issue by jointly estimating shape and similarity transform parameters.
arXiv Detail & Related papers (2020-04-20T04:28:45Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method outperforms the state of the art by 5% on object detection in ScanNet scenes and by 3.4% on the Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.