Cuboids Revisited: Learning Robust 3D Shape Fitting to Single RGB Images
- URL: http://arxiv.org/abs/2105.02047v1
- Date: Wed, 5 May 2021 13:36:00 GMT
- Title: Cuboids Revisited: Learning Robust 3D Shape Fitting to Single RGB Images
- Authors: Florian Kluger, Hanno Ackermann, Eric Brachmann, Michael Ying Yang,
Bodo Rosenhahn
- Abstract summary: In particular, man-made environments commonly consist of volumetric primitives such as cuboids or cylinders.
Previous approaches directly estimate shape parameters from a 2D or 3D input, and are only able to reproduce simple objects.
We propose a robust estimator for primitive fitting, which can meaningfully abstract real-world environments using cuboids.
- Score: 44.223070672713455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans perceive and construct the surrounding world as an arrangement of
simple parametric models. In particular, man-made environments commonly consist
of volumetric primitives such as cuboids or cylinders. Inferring these
primitives is an important step to attain high-level, abstract scene
descriptions. Previous approaches directly estimate shape parameters from a 2D
or 3D input, and are only able to reproduce simple objects, yet unable to
accurately parse more complex 3D scenes. In contrast, we propose a robust
estimator for primitive fitting, which can meaningfully abstract real-world
environments using cuboids. A RANSAC estimator guided by a neural network fits
these primitives to 3D features, such as a depth map. We condition the network
on previously detected parts of the scene, thus parsing it one-by-one. To
obtain 3D features from a single RGB image, we additionally optimise a feature
extraction CNN in an end-to-end manner. However, naively minimising
point-to-primitive distances leads to large or spurious cuboids occluding parts
of the scene behind. We thus propose an occlusion-aware distance metric
correctly handling opaque scenes. The proposed algorithm does not require
labour-intensive labels, such as cuboid annotations, for training. Results on
the challenging NYU Depth v2 dataset demonstrate that the proposed algorithm
successfully abstracts cluttered real-world 3D scene layouts.
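The two mechanisms described above are concrete enough to sketch. Below is a minimal, self-contained Python sketch of (1) an occlusion-aware point-to-cuboid distance and (2) the greedy sequential fitting loop that conditions each round on the still-unexplained points. The axis-aligned cuboids, the uniform seed sampling standing in for the CNN-predicted sampling weights, and all thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of occlusion-aware sequential cuboid fitting.
import numpy as np


def point_to_box_distance(points, center, half_size):
    """Unsigned distance from 3D points to the surface of an axis-aligned
    cuboid (the paper fits oriented cuboids; axis-aligned keeps this short)."""
    q = np.abs(points - center) - half_size
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=1)
    inside = np.minimum(q.max(axis=1), 0.0)  # negative iff inside the box
    return np.abs(outside + inside)


def ray_box_entry_depth(dirs, center, half_size):
    """Depth of the first ray/cuboid intersection (camera at the origin),
    np.inf where a ray misses. Standard slab method."""
    d = np.where(np.abs(dirs) < 1e-12, 1e-12, dirs)
    t1 = (center - half_size) / d
    t2 = (center + half_size) / d
    t_near = np.minimum(t1, t2).max(axis=1)
    t_far = np.maximum(t1, t2).min(axis=1)
    hit = t_far >= np.maximum(t_near, 0.0)
    return np.where(hit, np.maximum(t_near, 0.0), np.inf)


def occlusion_aware_distance(points, center, half_size):
    """Surface distance plus an occlusion penalty: if the cuboid is hit
    in front of an observed point along its viewing ray, an opaque cuboid
    would hide that observation, contradicting the depth map.
    NB: this naive version also penalises points on the cuboid's own back
    faces; the paper handles such self-occlusion properly."""
    depth = np.linalg.norm(points, axis=1)
    t_entry = ray_box_entry_depth(points / depth[:, None], center, half_size)
    occlusion = np.maximum(depth - t_entry, 0.0)  # 0 for unoccluded points
    return point_to_box_distance(points, center, half_size) + occlusion


def fit_cuboids_sequentially(points, num_cuboids=4, iters=256, rng=0):
    """Greedy sequential RANSAC: fit one cuboid at a time, remove its
    inliers, and condition the next round on what remains. The paper
    instead samples hypotheses with weights predicted by a CNN."""
    rng = np.random.default_rng(rng)
    remaining, cuboids = points.copy(), []
    for _ in range(num_cuboids):
        best, best_score, best_inliers = None, np.inf, None
        for _ in range(iters):
            # Hypothesis from a random local neighbourhood (a crude
            # stand-in for a proper minimal-set cuboid solver).
            seed = remaining[rng.integers(len(remaining))]
            near = remaining[np.linalg.norm(remaining - seed, axis=1) < 0.5]
            if len(near) < 10:
                continue
            center = near.mean(axis=0)
            half = np.maximum(near.std(axis=0), 1e-3)
            dist = occlusion_aware_distance(remaining, center, half)
            # Truncated robust loss: occluded points raise the score, so
            # oversized cuboids that block the scene behind them lose.
            score = np.minimum(dist, 0.2).mean()
            if score < best_score:
                best, best_score = (center, half), score
                best_inliers = dist < 0.05
        if best is None or best_inliers.sum() < 10:
            break
        cuboids.append(best)
        remaining = remaining[~best_inliers]
        if len(remaining) < 10:
            break
    return cuboids
```

With a depth map predicted from the RGB image (the paper trains the feature-extraction CNN end-to-end for exactly this), `points` would be the back-projected pixels, and replacing the uniform seed choice with learned sampling weights recovers the network-guided RANSAC described above.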
Related papers
- AutoInst: Automatic Instance-Based Segmentation of LiDAR 3D Scans [41.17467024268349]
Making sense of 3D environments requires fine-grained scene understanding.
We propose to predict instance segmentations for 3D scenes in an unsupervised way.
Our approach attains 13.3% higher Average Precision and 9.1% higher F1 score compared to the best-performing baseline.
arXiv Detail & Related papers (2024-03-24T22:53:16Z)
- Robust Shape Fitting for 3D Scene Abstraction [33.84212609361491]
In particular, we can describe man-made environments using volumetric primitives such as cuboids or cylinders.
We propose a robust estimator for primitive fitting, which meaningfully abstracts complex real-world environments using cuboids.
Results on the NYU Depth v2 dataset demonstrate that the proposed algorithm successfully abstracts cluttered real-world 3D scene layouts.
arXiv Detail & Related papers (2024-03-15T16:37:43Z)
- Sampling is Matter: Point-guided 3D Human Mesh Reconstruction [0.0]
This paper presents a simple yet powerful method for 3D human mesh reconstruction from a single RGB image.
Experimental results on benchmark datasets show that the proposed method efficiently improves the performance of 3D human mesh reconstruction.
arXiv Detail & Related papers (2023-04-19T08:45:26Z)
- SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ the global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details, so the results are not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z)
- ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization [40.36229450208817]
We present ShAPO, a method for joint multi-object detection, 3D textured reconstruction, 6D object pose and size estimation.
Key to ShAPO is a single-shot pipeline to regress shape, appearance and pose latent codes along with the masks of each object instance.
Our method significantly outperforms all baselines on the NOCS dataset, with an 8% absolute improvement in mAP for 6D pose estimation.
arXiv Detail & Related papers (2022-07-27T17:59:31Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- Exploring Deep 3D Spatial Encodings for Large-Scale 3D Scene Understanding [19.134536179555102]
We propose an alternative that overcomes the limitations of CNN-based approaches by encoding the spatial features of raw 3D point clouds into undirected graph models.
The proposed method achieves accuracy on par with the state of the art, with improved training time and model stability, indicating strong potential for further research.
arXiv Detail & Related papers (2020-11-29T12:56:19Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of drawing effective samples in 3D space is relatively small.
We propose to start with an initial prediction and refine it gradually towards the ground truth, changing only one 3D parameter in each step (see the sketch after this list).
This requires designing a policy which gets a reward only after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- 3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior [50.73148041205675]
The goal of the Semantic Scene Completion (SSC) task is to simultaneously predict a completed 3D voxel representation of volumetric occupancy and semantic labels of objects in the scene from a single-view observation.
We devise a new geometry-based strategy to embed depth information into a low-resolution voxel representation.
Our proposed geometric embedding works better than the depth feature learning of conventional SSC frameworks.
arXiv Detail & Related papers (2020-03-31T09:33:46Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction on ShapeNet and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
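The single-parameter refinement idea from the Reinforced Axial Refinement Network entry above is concrete enough to sketch. Below is a hypothetical greedy variant of the one-parameter-per-step loop; the paper instead learns which parameter to change with a reinforcement-learned policy, since the reward only arrives after several steps. The 7-DoF box parametrisation and the `score_fn` are assumptions for illustration.

```python
# Hedged sketch of one-parameter-per-step 3D box refinement. The paper
# learns WHICH parameter to change with an RL policy rewarded after
# several steps; this greedy variant just keeps any move that helps.

PARAMS = ("x", "y", "z", "w", "h", "l", "yaw")  # assumed 7-DoF 3D box


def refine_box(box, score_fn, step=0.05, max_steps=20):
    """Refine a 3D box by changing exactly one parameter per move."""
    box, best = dict(box), score_fn(box)
    for _ in range(max_steps):
        improved = False
        for name in PARAMS:
            for delta in (step, -step):
                cand = dict(box)
                cand[name] += delta  # the single-parameter move
                s = score_fn(cand)
                if s > best:
                    box, best, improved = cand, s, True
        if not improved:
            break  # local optimum at this step size
    return box


# Toy usage: pull a box towards a known target (a stand-in for a learned
# 3D confidence score; the real reward would come from the detector).
target = {"x": 1.0, "y": 0.0, "z": 8.0, "w": 1.6, "h": 1.5, "l": 3.9, "yaw": 0.3}
score = lambda b: -sum((b[k] - target[k]) ** 2 for k in PARAMS)
print(refine_box({k: 0.0 for k in PARAMS}, score, max_steps=200))
```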