Cut-and-Approximate: 3D Shape Reconstruction from Planar Cross-sections
with Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2210.12509v1
- Date: Sat, 22 Oct 2022 17:48:12 GMT
- Title: Cut-and-Approximate: 3D Shape Reconstruction from Planar Cross-sections
with Deep Reinforcement Learning
- Authors: Azimkhon Ostonov
- Abstract summary: We present, to the best of our knowledge, the first 3D shape reconstruction network to solve this task.
Our method is based on applying a Reinforcement Learning algorithm to learn how to effectively parse the shape.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current methods for 3D object reconstruction from a set of planar
cross-sections still struggle to capture detailed topology or require a
considerable number of cross-sections. In this paper, we present, to the best
of our knowledge, the first 3D shape reconstruction network to solve this task,
which additionally uses orthographic projections of the shape. Our method
applies a Reinforcement Learning algorithm to learn how to effectively parse
the shape through a trial-and-error scheme relying on scalar rewards. At each
step, the method cuts off a part of the 3D shape, which is then approximated
as a polygon mesh. The agent aims to maximize a reward that depends on the
accuracy of surface reconstruction for the approximated parts. We also
consider pre-training the network for faster learning, using demonstrations
generated by a heuristic approach. Experiments show that our training
algorithm, which benefits from both imitation learning and self-exploration,
learns efficient policies faster and enables the agent to produce visually
compelling results.
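The cut-and-approximate loop in the abstract can be sketched as a toy example. Everything below (the voxel-sphere ground truth, axis-aligned cut heights as "actions", a bounding-box stand-in for the polygon-mesh approximation, and a per-part IoU reward) is an illustrative assumption for exposition, not the paper's actual environment, mesh fitter, or API:

```python
import numpy as np

def make_shape(n=16):
    """Toy occupancy grid: a solid sphere as the ground-truth shape."""
    idx = np.indices((n, n, n)) - n / 2
    return (np.sum(idx**2, axis=0) <= (n / 3) ** 2).astype(float)

def approximate_part(part):
    """Crude stand-in for mesh fitting: approximate a part by its bounding box."""
    approx = np.zeros_like(part)
    occ = np.argwhere(part > 0)
    if len(occ) == 0:
        return approx
    lo, hi = occ.min(axis=0), occ.max(axis=0)
    approx[lo[0]:hi[0] + 1, lo[1]:hi[1] + 1, lo[2]:hi[2] + 1] = 1.0
    return approx

def iou(a, b):
    """Reconstruction-accuracy reward for one part: intersection over union."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def episode(shape, cut_heights):
    """Cut the shape along z at the chosen heights; score each slab's approximation."""
    scores, prev = [], 0
    for z in sorted(cut_heights) + [shape.shape[2]]:
        slab = shape[:, :, prev:z]          # the part "cut off" at this step
        scores.append(iou(approximate_part(slab), slab))
        prev = z
    return float(np.mean(scores))

shape = make_shape()
# Thinner slabs are fit more tightly by the per-part approximation,
# so a policy that places more (well-chosen) cuts earns a higher mean reward.
few_cuts = episode(shape, [8])
many_cuts = episode(shape, [4, 8, 12])
```

In the paper's setting an RL agent would learn where to cut from scalar rewards like these (optionally warm-started from heuristic demonstrations); here the cut heights are simply hand-picked to show the reward structure.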
Related papers
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
We propose a synthetic target representation composed of dense and complete point clouds depicting the target shape precisely by shape completion for robust 3D tracking.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- Cross-Dimensional Refined Learning for Real-Time 3D Visual Perception from Monocular Video [2.2299983745857896]
We present a novel real-time capable learning method that jointly perceives a 3D scene's geometric structure and semantic labels.
We propose an end-to-end cross-dimensional refinement neural network (CDRNet) to extract both 3D mesh and 3D semantic labeling in real time.
arXiv Detail & Related papers (2023-03-16T11:53:29Z)
- Semi-Supervised Single-View 3D Reconstruction via Prototype Shape Priors [79.80916315953374]
We propose SSP3D, a semi-supervised framework for 3D reconstruction.
We introduce an attention-guided prototype shape prior module for guiding realistic object reconstruction.
Our approach also performs well when transferring to real-world Pix3D datasets under labeling ratios of 10%.
arXiv Detail & Related papers (2022-09-30T11:19:25Z)
- SurFit: Learning to Fit Surfaces Improves Few Shot Learning on Point Clouds [48.61222927399794]
SurFit is a simple approach for label efficient learning of 3D shape segmentation networks.
It is based on a self-supervised task of decomposing the surface of a 3D shape into geometric primitives.
arXiv Detail & Related papers (2021-12-27T23:55:36Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Point Discriminative Learning for Unsupervised Representation Learning on 3D Point Clouds [54.31515001741987]
We propose a point discriminative learning method for unsupervised representation learning on 3D point clouds.
We achieve this by imposing a novel point discrimination loss on the middle level and global level point features.
Our method learns powerful representations and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-08-04T15:11:48Z)
- Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound Reconstruction [61.62191904755521]
3D freehand US reconstruction is promising for addressing this problem by providing broad-range and freeform scanning.
Existing deep learning based methods only focus on basic cases of skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction that considers complex skill sequences.
arXiv Detail & Related papers (2021-07-31T16:06:50Z)
- Translational Symmetry-Aware Facade Parsing for 3D Building Reconstruction [11.263458202880038]
In this paper, we present a novel translational symmetry-based approach to improving deep neural networks for facade parsing.
We propose a novel scheme that fuses anchor-free detection into a single-stage network, enabling efficient training and better convergence.
We employ an off-the-shelf rendering engine like Blender to reconstruct the realistic high-quality 3D models using procedural modeling.
arXiv Detail & Related papers (2021-06-02T03:10:51Z)
- Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization [52.17872739634213]
We propose a novel 3D shape representation for 3D shape reconstruction from a single image.
We train a network to generate a training set which will be fed into another learning algorithm to define the shape.
arXiv Detail & Related papers (2020-10-16T09:52:13Z)
- Deep Geometric Functional Maps: Robust Feature Learning for Shape Correspondence [31.840880075039944]
We present a novel learning-based approach for computing correspondences between non-rigid 3D shapes.
Key to our method is a feature-extraction network that learns directly from raw shape geometry.
arXiv Detail & Related papers (2020-03-31T15:20:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.