DualSDF: Semantic Shape Manipulation using a Two-Level Representation
- URL: http://arxiv.org/abs/2004.02869v1
- Date: Mon, 6 Apr 2020 17:59:15 GMT
- Title: DualSDF: Semantic Shape Manipulation using a Two-Level Representation
- Authors: Zekun Hao, Hadar Averbuch-Elor, Noah Snavely, Serge Belongie
- Abstract summary: We propose DualSDF, a representation expressing shapes at two levels of granularity, one capturing fine details and the other representing an abstracted proxy shape.
Our two-level model gives rise to a new shape manipulation technique in which a user can interactively manipulate the coarse proxy shape and see the changes instantly mirrored in the high-resolution shape.
- Score: 54.62411904952258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are seeing a Cambrian explosion of 3D shape representations for use in
machine learning. Some representations seek high expressive power in capturing
high-resolution detail. Other approaches seek to represent shapes as
compositions of simple parts, which are intuitive for people to understand and
easy to edit and manipulate. However, it is difficult to achieve both fidelity
and interpretability in the same representation. We propose DualSDF, a
representation expressing shapes at two levels of granularity, one capturing
fine details and the other representing an abstracted proxy shape using simple
and semantically consistent shape primitives. To achieve a tight coupling
between the two representations, we use a variational objective over a shared
latent space. Our two-level model gives rise to a new shape manipulation
technique in which a user can interactively manipulate the coarse proxy shape
and see the changes instantly mirrored in the high-resolution shape. Moreover,
our model actively augments and guides the manipulation towards producing
semantically meaningful shapes, making complex manipulations possible with
minimal user input.
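To make the two-level idea concrete, here is a minimal PyTorch sketch of how a coarse primitive-based SDF and a high-resolution SDF network can share a single latent code, and how editing the proxy reduces to optimizing that code. This is an illustration rather than the authors' implementation: the network sizes, sphere count, loss weights, and optimizer settings are assumptions (the paper's coarse level does use simple primitives such as spheres, and the variational objective motivates the Gaussian prior term below).

```python
import torch
import torch.nn as nn

LATENT_DIM = 128   # assumed size of the shared latent code
NUM_SPHERES = 256  # assumed number of coarse primitives

class CoarseDecoder(nn.Module):
    """Decodes the shared latent code into sphere primitives.
    The coarse SDF is the union (pointwise min) of the sphere SDFs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, NUM_SPHERES * 4))  # (cx, cy, cz, log_radius)

    def spheres(self, z):
        p = self.net(z).view(NUM_SPHERES, 4)
        return p[:, :3], p[:, 3].exp()  # centers (K, 3), radii (K,)

    def forward(self, z, x):
        # Sphere SDF: ||x - c|| - r; union over spheres via min.
        centers, radii = self.spheres(z)
        return (torch.cdist(x, centers) - radii).min(dim=1).values

class FineDecoder(nn.Module):
    """High-resolution SDF network conditioned on the same latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1))

    def forward(self, z, x):
        h = torch.cat([z.expand(x.shape[0], -1), x], dim=1)
        return self.net(h).squeeze(-1)

def manipulate(coarse, z0, sphere_idx, target_center, steps=200, lr=1e-2):
    """Drag one proxy sphere toward a target; optimizing the shared code
    propagates the edit to the fine shape. Loss weights are assumptions."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        centers, _ = coarse.spheres(z)
        edit_loss = ((centers[sphere_idx] - target_center) ** 2).sum()
        prior_loss = 1e-3 * (z ** 2).sum()  # stay near the Gaussian prior
        loss = edit_loss + prior_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return z.detach()
```

Because both decoders read the same code, any z returned by manipulate can be passed straight to FineDecoder to obtain the high-resolution shape consistent with the edited proxy.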
Related papers
- DeFormer: Integrating Transformers with Deformable Models for 3D Shape Abstraction from a Single Image [31.154786931081087]
We propose a novel bi-channel Transformer architecture, integrated with parameterized deformable models, to simultaneously estimate the global and local deformations of primitives.
DeFormer achieves better reconstruction accuracy than the state of the art and produces consistent semantic correspondences for improved interpretability.
arXiv Detail & Related papers (2023-09-22T02:46:43Z)
- Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects [26.102490905989338]
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method does not require skeleton or skinning-weight priors; it only requires a collection of shapes represented as signed distance fields.
Our model can represent shapes with large deformations and supports two typical applications: texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z)
- SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation [89.47132156950194]
We present a novel framework built to simplify 3D asset generation for amateur users.
Our method supports a variety of input modalities that can be easily provided by a human.
Our model combines all of these tasks into a single Swiss-army-knife tool.
arXiv Detail & Related papers (2022-12-08T18:59:05Z)
- NeuForm: Adaptive Overfitting for Neural Shape Editing [67.16151288720677]
We propose NeuForm, which combines the advantages of both overfitted and generalizable representations by adaptively using whichever is most appropriate for each shape region.
We demonstrate edits that successfully reconfigure parts of human-designed shapes, such as chairs, tables, and lamps.
We compare with two state-of-the-art competitors and demonstrate clear improvements in the plausibility and fidelity of the resulting edits.
arXiv Detail & Related papers (2022-07-18T19:00:14Z)
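As a rough illustration of the adaptive strategy summarized above, one can blend an overfitted and a generalizable SDF per query point. The blending weight here is a hypothetical stand-in for NeuForm's learned, region-dependent mechanism:

```python
import torch

def blended_sdf(x, f_overfit, f_general, alpha):
    """Hypothetical per-point blend of two SDF networks.
    alpha(x) returns (N,) weights in [0, 1]: near edited regions, lean on
    the generalizable network; elsewhere, keep the overfitted fidelity."""
    a = alpha(x)
    return a * f_general(x) + (1.0 - a) * f_overfit(x)
```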
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate deep implicit functions (DIFs) as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
arXiv Detail & Related papers (2020-11-30T06:01:49Z)
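The template-deformation formulation above admits a compact sketch: a per-shape warp field carries query points into a shared template's coordinate frame, where a single template SDF is evaluated. The networks below are simplified stand-ins for the paper's architecture, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

class TemplateSDF(nn.Module):
    """One implicit template shared by the whole shape collection."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

class WarpField(nn.Module):
    """Shape-conditioned deformation into the template's frame."""
    def __init__(self, zdim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + zdim, 256), nn.ReLU(),
            nn.Linear(256, 3))

    def forward(self, x, z):
        offset = self.net(torch.cat([x, z.expand(x.shape[0], -1)], dim=1))
        return x + offset

def shape_sdf(x, z, warp, template):
    # Dense correspondence falls out of the construction: points on
    # different shapes that warp to the same template location correspond.
    return template(warp(x, z))
```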
- Learning to Caricature via Semantic Shape Transform [95.25116681761142]
We propose an algorithm based on a semantic shape transform to produce shape exaggerations.
We show that the proposed framework renders visually pleasing shape exaggerations while preserving the underlying facial structure.
arXiv Detail & Related papers (2020-08-12T03:41:49Z)
- PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations [75.42959184226702]
We present a new mid-level patch-based surface representation for object-agnostic training.
We show several applications of our new representation, including shape and partial point cloud completion.
arXiv Detail & Related papers (2020-08-04T15:34:46Z)
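A hedged sketch of the patch-based idea: one small SDF network, shared across patches, is evaluated in each patch's local frame given per-patch codes and extrinsics, and the global SDF is a distance-weighted blend of the per-patch predictions. The Gaussian weighting and every name below are simplifying assumptions:

```python
import torch
import torch.nn as nn

class PatchSDF(nn.Module):
    """A tiny SDF network evaluated in a patch's local frame."""
    def __init__(self, code_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, x_local, code):
        h = torch.cat([x_local, code.expand(x_local.shape[0], -1)], dim=1)
        return self.net(h).squeeze(-1)

def blended_patch_sdf(x, patch_net, codes, centers, scales, sigma=0.1):
    """Blend per-patch SDFs with Gaussian weights around patch centers.
    x: (N, 3); codes: (K, code_dim); centers: (K, 3); scales: (K,)."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K)
    w = torch.exp(-d2 / (2 * sigma ** 2))
    w = w / (w.sum(dim=1, keepdim=True) + 1e-8)                # normalize
    sdfs = []
    for k in range(centers.shape[0]):
        x_local = (x - centers[k]) / scales[k]   # into the patch frame
        sdfs.append(patch_net(x_local, codes[k]) * scales[k])
    return (w * torch.stack(sdfs, dim=1)).sum(dim=1)           # (N,)
```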
- Learning Generative Models of Shape Handles [43.41382075567803]
We present a generative model to synthesize 3D shapes as sets of handles.
Our model can generate handle sets with varying cardinality and different types of handles.
We show that the resulting shape representations are intuitive and achieve higher quality than the previous state of the art.
arXiv Detail & Related papers (2020-04-06T22:35:55Z)
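To illustrate one way a generator can emit handle sets of varying cardinality, the hypothetical decoder below predicts a fixed maximum number of handles together with per-handle existence probabilities and keeps only the likely ones. This is a generic construction, not the paper's architecture:

```python
import torch
import torch.nn as nn

MAX_HANDLES = 32   # assumed upper bound on set size
HANDLE_DIM = 6     # e.g., a cuboid handle: center (3) + extents (3)

class HandleSetDecoder(nn.Module):
    """Decodes a latent code into up to MAX_HANDLES handles, each with
    parameters and an existence logit; thresholding the logits yields
    handle sets of varying cardinality."""
    def __init__(self, zdim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(zdim, 256), nn.ReLU(),
            nn.Linear(256, MAX_HANDLES * (HANDLE_DIM + 1)))

    def forward(self, z):
        out = self.net(z).view(MAX_HANDLES, HANDLE_DIM + 1)
        params, exist_logit = out[:, :HANDLE_DIM], out[:, HANDLE_DIM]
        keep = torch.sigmoid(exist_logit) > 0.5
        return params[keep]  # (num_kept, HANDLE_DIM)
```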
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.