GFPNet: A Deep Network for Learning Shape Completion in Generic Fitted Primitives
- URL: http://arxiv.org/abs/2006.02098v1
- Date: Wed, 3 Jun 2020 08:29:27 GMT
- Title: GFPNet: A Deep Network for Learning Shape Completion in Generic Fitted Primitives
- Authors: Tiberiu Cocias, Alexandru Razvant and Sorin Grigorescu
- Abstract summary: We propose an object reconstruction apparatus that uses the so-called Generic Primitives (GP) to complete shapes.
We show that GFPNet competes with state-of-the-art shape completion methods by providing performance results on the ModelNet and KITTI benchmarking datasets.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an object reconstruction apparatus that uses the so-called Generic Primitives (GP) to complete shapes. A GP is a 3D point cloud depicting a generalized shape of a class of objects. To reconstruct the objects in a scene, we first fit a GP onto each occluded object to obtain an initial raw structure. Second, we use a model-based deformation technique to fold the surface of the GP over the occluded object. The deformation model is encoded within the layers of a Deep Neural Network (DNN), coined GFPNet. The objective of the network is to transfer the particularities of the object from the scene to the raw volume represented by the GP. We show that GFPNet competes with state-of-the-art shape completion methods by providing performance results on the ModelNet and KITTI benchmarking datasets.
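As an illustration of the fit-then-deform pipeline described in the abstract, the sketch below rigidly aligns a generic-primitive point cloud to a partial observation with a crude centroid-and-scale alignment, then applies a small point-wise MLP that predicts per-point offsets. This is a hypothetical PyTorch sketch, not the authors' GFPNet implementation: `fit_primitive` and `DeformNet` are invented names, and the paper's actual fitting step and network architecture are more elaborate.

```python
# Hypothetical sketch of the GP-fit-then-deform idea from the abstract;
# NOT the authors' GFPNet implementation or architecture.
import torch
import torch.nn as nn

def fit_primitive(gp: torch.Tensor, partial: torch.Tensor) -> torch.Tensor:
    """Crude stand-in for the GP fitting step: match centroid and scale."""
    gp_c = gp - gp.mean(dim=0)
    pc_c = partial - partial.mean(dim=0)
    scale = pc_c.norm(dim=1).mean() / gp_c.norm(dim=1).mean()
    return gp_c * scale + partial.mean(dim=0)

class DeformNet(nn.Module):
    """Point-wise MLP predicting per-point offsets; a stand-in for the
    deformation model encoded in GFPNet's layers."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # Add predicted per-point offsets (untrained here; training would
        # drive the offsets to fold the GP surface onto the observed object).
        return pts + self.mlp(pts)

# Usage: complete an occluded object from a fitted generic primitive.
gp = torch.rand(1024, 3)      # generic primitive for the object class
partial = torch.rand(300, 3)  # occluded observation from the scene
raw = fit_primitive(gp, partial)
completed = DeformNet()(raw)  # (1024, 3) completed shape
```

In the paper the deformation is driven by the observed scene points; for brevity this sketch omits training and any conditioning of the network on the partial cloud.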
Related papers
- KP-RED: Exploiting Semantic Keypoints for Joint 3D Shape Retrieval and Deformation [87.23575166061413]
KP-RED is a unified KeyPoint-driven REtrieval and Deformation framework.
It takes object scans as input and jointly retrieves and deforms the most geometrically similar CAD models.
arXiv Detail & Related papers (2024-03-15T08:44:56Z)
- Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks [4.424836140281846]
We present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input.
Our method maintains the polygonal mesh format throughout the inpainting process.
We demonstrate that our method outperforms traditional dataset-independent approaches.
arXiv Detail & Related papers (2023-05-01T02:51:38Z)
- ARO-Net: Learning Implicit Fields from Anchored Radial Observations [25.703496065476067]
We introduce anchored radial observations (ARO), a novel shape encoding for learning implicit field representation of 3D shapes.
We develop a general and unified shape representation by employing a fixed set of anchors obtained via Fibonacci sampling and designing a coordinate-based deep neural network (see the anchor-sampling sketch after this list).
We demonstrate the quality and generality of our network, coined ARO-Net, on surface reconstruction from sparse point clouds.
arXiv Detail & Related papers (2022-12-19T16:29:20Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- GASCN: Graph Attention Shape Completion Network [4.307812758854162]
Shape completion is the problem of inferring the complete geometry of an object given a partial point cloud.
This paper proposes the Graph Attention Shape Completion Network (GASCN), a novel neural network model that solves this problem.
For each completed point, our model infers the extent of the local surface patch, which is used to produce dense yet precise shape completions.
arXiv Detail & Related papers (2022-01-20T01:03:00Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- PIG-Net: Inception based Deep Learning Architecture for 3D Point Cloud Segmentation [0.9137554315375922]
We propose an inception-based deep network architecture called PIG-Net that effectively characterizes the local and global geometric details of point clouds.
We perform an exhaustive experimental analysis of the PIG-Net architecture on two state-of-the-art datasets.
arXiv Detail & Related papers (2021-01-28T13:27:55Z)
- 3D Object Classification on Partial Point Clouds: A Practical Perspective [91.81377258830703]
A point cloud is a popular shape representation adopted in 3D object classification.
This paper introduces a practical setting for classifying partial point clouds of object instances under arbitrary poses.
A novel algorithm that operates in an alignment-classification manner is proposed.
arXiv Detail & Related papers (2020-12-18T04:00:56Z)
- Geometry Constrained Weakly Supervised Object Localization [55.17224813345206]
We propose a geometry constrained network, termed GC-Net, for weakly supervised object localization.
The detector predicts the object location defined by a set of coefficients describing a geometric shape.
The generator takes the resulting masked images as input and performs two complementary classification tasks for the object and background.
In contrast to previous approaches, GC-Net is trained end-to-end and predicts the object location without any post-processing.
arXiv Detail & Related papers (2020-07-19T17:33:42Z)
- Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z)
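The ARO-Net entry above mentions anchors fixed via Fibonacci sampling. Below is a minimal NumPy sketch of the standard Fibonacci-sphere construction for placing near-uniform anchors on the unit sphere; the function name and the anchor count of 64 are illustrative assumptions, and the paper's exact anchor setup may differ.

```python
# Illustrative Fibonacci sampling on the unit sphere, the kind of fixed
# anchor placement referenced by the ARO-Net summary above; the paper's
# exact anchor construction may differ.
import numpy as np

def fibonacci_sphere(n: int) -> np.ndarray:
    """Return n near-uniformly distributed points on the unit sphere."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2       # golden ratio
    theta = 2 * np.pi * i / golden    # longitude advances by golden angle
    z = 1 - (2 * i + 1) / n           # latitude from equal-area slices
    r = np.sqrt(1 - z * z)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

anchors = fibonacci_sphere(64)        # e.g., 64 fixed anchors, shape (64, 3)
```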