PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape
Representations
- URL: http://arxiv.org/abs/2008.01639v2
- Date: Fri, 5 Feb 2021 13:32:02 GMT
- Title: PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape
Representations
- Authors: Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer,
Carsten Stoll, Christian Theobalt
- Abstract summary: We present a new mid-level patch-based surface representation for object-agnostic training.
We show several applications of our new representation, including shape interpolation and partial point cloud completion.
- Score: 75.42959184226702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit surface representations, such as signed-distance functions, combined
with deep learning have led to impressive models which can represent detailed
shapes of objects with arbitrary topology. Since a continuous function is
learned, the reconstructions can also be extracted at any arbitrary resolution.
However, large datasets such as ShapeNet are required to train such models. In
this paper, we present a new mid-level patch-based surface representation. At
the level of patches, objects across different categories share similarities,
which leads to more generalizable models. We then introduce a novel method to
learn this patch-based representation in a canonical space, such that it is as
object-agnostic as possible. We show that our representation, trained on a
single category of ShapeNet objects, also represents detailed shapes from any
other category well. In addition, it can be trained with far fewer shapes than
existing approaches require. We show several applications of our new
representation, including shape interpolation and partial point cloud
completion. Due to explicit control over positions, orientations and scales of
patches, our representation is also more controllable compared to object-level
representations, which enables us to deform encoded shapes non-rigidly.
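The abstract describes evaluating each patch's signed-distance function in a canonical frame defined by an explicit position, orientation, and scale, then blending the per-patch values into one global SDF. The sketch below illustrates that blending idea only; the learned patch decoder is replaced by a toy sphere SDF, and all sizes, names, and the Gaussian weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_sdf(local_pts):
    # Stand-in for the shared patch decoder: a sphere of radius 0.5 in
    # canonical patch coordinates (the real method uses a learned MLP
    # conditioned on a per-patch latent code).
    return np.linalg.norm(local_pts, axis=-1) - 0.5

num_patches = 4
centers = rng.uniform(-1.0, 1.0, size=(num_patches, 3))   # patch positions
scales = rng.uniform(0.3, 0.6, size=num_patches)          # patch scales
rotations = np.stack([np.eye(3)] * num_patches)           # patch orientations

def global_sdf(x):
    """Blend per-patch SDF values with Gaussian weights centred on each patch."""
    # Map the query point into each patch's canonical frame.
    local = np.einsum('pij,pj->pi', rotations, x - centers) / scales[:, None]
    # Per-patch signed distance, rescaled back to world units.
    d = patch_sdf(local) * scales
    # Patches whose centers are close to x dominate the blend.
    w = np.exp(-np.sum((x - centers) ** 2, axis=-1) / (2.0 * scales ** 2))
    w /= w.sum() + 1e-9
    return float(np.sum(w * d))

# A point far outside every patch gets a positive (outside) distance.
print(global_sdf(np.array([2.0, 2.0, 2.0])) > 0.0)  # True
```

Because the extrinsics are explicit variables rather than network weights, moving, rotating, or rescaling a patch (editing `centers`, `rotations`, `scales`) deforms the encoded shape directly, which is the controllability the abstract refers to.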
Related papers
- WrappingNet: Mesh Autoencoder via Deep Sphere Deformation [10.934595072086324]
WrappingNet is the first mesh autoencoder enabling general mesh unsupervised learning over heterogeneous objects.
It introduces a novel base graph in the bottleneck dedicated to representing mesh connectivity.
It is shown to facilitate learning a shared latent space representing object shape.
arXiv Detail & Related papers (2023-08-29T16:13:04Z)
- Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects [26.102490905989338]
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method requires no skeleton or skinning-weight priors, only a collection of shapes represented as signed distance fields.
Our model can represent shapes with large deformations and supports typical applications such as texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z)
- Category-level Shape Estimation for Densely Cluttered Objects [94.64287790278887]
We propose a category-level shape estimation method for densely cluttered objects.
Our framework partitions each object in the clutter via multi-view visual information fusion.
Experiments in the simulated environment and real world show that our method achieves high shape estimation accuracy.
arXiv Detail & Related papers (2023-02-23T13:00:17Z)
- PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation [59.70430570779819]
We introduce a data-driven shape completion approach that focuses on completing geometric details of missing regions of 3D shapes.
Our key insight is to copy and deform patches from the partial input to complete missing regions.
We leverage repeating patterns by retrieving patches from the partial input, and learn global structural priors by using a neural network to guide the retrieval and deformation steps.
arXiv Detail & Related papers (2022-07-24T18:59:09Z)
- Latent Partition Implicit with Surface Codes for 3D Representation [54.966603013209685]
We introduce a novel implicit representation to represent a single 3D shape as a set of parts in the latent space.
We name our method Latent Partition Implicit (LPI) for its ability to cast global shape modeling into multiple local part models.
arXiv Detail & Related papers (2022-07-18T14:24:46Z)
- CIGMO: Categorical invariant representations in a deep generative framework [4.111899441919164]
We introduce a novel deep generative model, called CIGMO, that can learn to represent category, shape, and view factors from image data.
By empirical investigation, we show that our model can effectively discover categories of object shapes despite large view variation.
arXiv Detail & Related papers (2022-05-27T04:21:22Z)
- Representing Shape Collections with Alignment-Aware Linear Models [17.635846912560627]
We revisit the classical representation of 3D point clouds as linear shape models.
Our key insight is to leverage deep learning to represent a collection of shapes as affine transformations.
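As a rough illustration of the linear-shape-model idea in the entry above, the sketch below blends basis point clouds linearly and then applies an affine alignment. Every name, size, and the random "learned" parameters are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
num_points, num_basis = 128, 5
mean_shape = rng.normal(size=(num_points, 3))        # stand-in learned template
basis = rng.normal(size=(num_basis, num_points, 3))  # stand-in deformation basis

def reconstruct(coeffs, A, t):
    """coeffs: (num_basis,) blend weights; A: (3, 3) linear part; t: (3,) translation."""
    # Linear shape model: template plus a weighted sum of basis clouds.
    shape = mean_shape + np.tensordot(coeffs, basis, axes=1)
    # Affine alignment maps the modeled shape into the input's pose.
    return shape @ A.T + t

# With zero coefficients and an identity alignment we recover the template.
pts = reconstruct(np.zeros(num_basis), np.eye(3), np.zeros(3))
print(np.allclose(pts, mean_shape))  # True
```

Separating the alignment `(A, t)` from the shape coefficients is what makes such a model "alignment-aware": pose variation is absorbed by the affine part instead of polluting the shape basis.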
arXiv Detail & Related papers (2021-09-03T16:28:34Z)
- 3D Object Classification on Partial Point Clouds: A Practical Perspective [91.81377258830703]
A point cloud is a popular shape representation adopted in 3D object classification.
This paper introduces a practical setting to classify partial point clouds of object instances under any poses.
This paper proposes a novel algorithm that works in an alignment-then-classification manner.
arXiv Detail & Related papers (2020-12-18T04:00:56Z)
- Learning Generative Models of Shape Handles [43.41382075567803]
We present a generative model to synthesize 3D shapes as sets of handles.
Our model can generate handle sets with varying cardinality and different types of handles.
We show that the resulting shape representations are intuitive and achieve higher quality than the previous state of the art.
arXiv Detail & Related papers (2020-04-06T22:35:55Z)
- DualSDF: Semantic Shape Manipulation using a Two-Level Representation [54.62411904952258]
We propose DualSDF, a representation expressing shapes at two levels of granularity, one capturing fine details and the other representing an abstracted proxy shape.
Our two-level model gives rise to a new shape manipulation technique in which a user can interactively manipulate the coarse proxy shape and see the changes instantly mirrored in the high-resolution shape.
arXiv Detail & Related papers (2020-04-06T17:59:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.