Learning to Infer Semantic Parameters for 3D Shape Editing
- URL: http://arxiv.org/abs/2011.04755v1
- Date: Mon, 9 Nov 2020 20:58:49 GMT
- Title: Learning to Infer Semantic Parameters for 3D Shape Editing
- Authors: Fangyin Wei, Elena Sizikova, Avneesh Sud, Szymon Rusinkiewicz, Thomas Funkhouser
- Abstract summary: We learn a deep network that infers the semantic parameters of an input shape and then allows the user to manipulate those parameters.
The network is trained jointly on shapes from an auxiliary synthetic template and unlabeled realistic models.
Experiments with datasets of chairs, airplanes, and human bodies demonstrate that our method produces more natural edits than prior work.
- Score: 14.902766305317202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many applications in 3D shape design and augmentation require the ability to
make specific edits to an object's semantic parameters (e.g., the pose of a
person's arm or the length of an airplane's wing) while preserving as much
existing detail as possible. We propose to learn a deep network that infers
the semantic parameters of an input shape and then allows the user to
manipulate those parameters. The network is trained jointly on shapes from an
auxiliary synthetic template and unlabeled realistic models, ensuring
robustness to shape variability while removing the need to label realistic
exemplars. At test time, edits within the parameter space drive deformations
to be applied to the original shape, which provides semantically-meaningful
manipulation while preserving the details. This is in contrast to prior methods
that either use autoencoders with a limited latent-space dimensionality,
failing to preserve arbitrary detail, or drive deformations with
purely-geometric controls, such as cages, losing the ability to update local
part regions. Experiments with datasets of chairs, airplanes, and human bodies
demonstrate that our method produces more natural edits than prior work.
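The abstract describes an infer-then-edit workflow: a network maps a shape to semantic parameters, the user changes a parameter, and a deformation of the original geometry realizes the change. The sketch below illustrates that workflow only; the function names and the random linear maps are hypothetical stand-ins for the paper's learned encoder and deformation network, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the learned networks: an encoder
# f(shape) -> semantic parameters, and a deformation decoder
# g(parameter delta) -> per-vertex offsets applied to the ORIGINAL shape.
N_VERTS, N_PARAMS = 100, 4
W_enc = rng.normal(size=(N_PARAMS, N_VERTS * 3))
W_dec = rng.normal(scale=0.01, size=(N_VERTS * 3, N_PARAMS))

def infer_params(verts):
    """Infer semantic parameters (e.g. wing length) from a shape (V, 3)."""
    return W_enc @ verts.reshape(-1)

def apply_edit(verts, delta_params):
    """Deform the original vertices by a parameter delta, so that
    unedited regions keep their existing detail."""
    return verts + (W_dec @ delta_params).reshape(-1, 3)

verts = rng.normal(size=(N_VERTS, 3))
params = infer_params(verts)
edited = params.copy()
edited[0] += 0.5                              # user nudges one semantic parameter
new_verts = apply_edit(verts, edited - params)  # edit drives a deformation
```

Because the edit is expressed as an offset from the original vertices rather than a decode from a low-dimensional latent code, a zero parameter delta reproduces the input exactly, which mirrors the detail-preservation argument in the abstract.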
Related papers
- Level-Set Parameters: Novel Representation for 3D Shape Analysis [70.23417107911567]
Recent developments in neural fields introduce level-set parameters from signed distance functions as a novel, continuous, and numerical representation of 3D shapes.
We establish correlations across different shapes by formulating them as a pseudo-normal distribution, and learn this distribution prior on the respective dataset.
We demonstrate the promise of the novel representations through applications in shape classification, retrieval, and 6D object pose estimation.
arXiv Detail & Related papers (2024-12-18T04:50:19Z)
- ShapeFusion: A 3D diffusion model for localized shape editing [37.82690898932135]
We propose an effective diffusion masking training strategy that, by design, facilitates localized manipulation of any shape region.
Compared to the current state of the art, our method leads to more interpretable shape manipulations than methods relying on latent codes.
arXiv Detail & Related papers (2024-03-28T18:50:19Z)
- Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects [26.102490905989338]
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method requires neither skeleton nor skinning-weight priors, only a collection of shapes represented as signed distance fields.
Our model can represent shapes with large deformations and supports two typical applications: texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z)
- ShapeShift: Superquadric-based Object Pose Estimation for Robotic Grasping [85.38689479346276]
Current techniques heavily rely on a reference 3D object, limiting their generalizability and making it expensive to expand to new object categories.
This paper proposes ShapeShift, a superquadric-based framework for object pose estimation that predicts the object's pose relative to a primitive shape which is fitted to the object.
arXiv Detail & Related papers (2023-04-10T20:55:41Z)
- 3DLatNav: Navigating Generative Latent Spaces for Semantic-Aware 3D Object Manipulation [2.8661021832561757]
3D generative models have been recently successful in generating realistic 3D objects in the form of point clouds.
However, most models offer no control over the shape semantics of component object parts without extensive semantic labels or other reference point clouds.
We propose 3DLatNav, a novel approach to navigating pretrained generative latent spaces that enables controlled part-level semantic manipulation of 3D objects.
arXiv Detail & Related papers (2022-11-17T18:47:56Z)
- 3D Neural Sculpting (3DNS): Editing Neural Signed Distance Functions [34.39282814876276]
In this work, we propose the first method for efficient interactive editing of signed distance functions expressed through neural networks.
Inspired by 3D sculpting software for meshes, we use a brush-based framework that is intuitive and can in the future be used by sculptors and digital artists.
arXiv Detail & Related papers (2022-09-28T10:05:16Z)
- Learning Visual Shape Control of Novel 3D Deformable Objects from Partial-View Point Clouds [7.1659268120093635]
Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the object being manipulated and a point cloud of the goal shape to learn a low-dimensional representation of the object shape.
arXiv Detail & Related papers (2021-10-10T02:34:57Z)
- Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
arXiv Detail & Related papers (2021-05-06T05:58:13Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- Unsupervised Shape and Pose Disentanglement for 3D Meshes [49.431680543840706]
We present a simple yet effective approach to learn disentangled shape and pose representations in an unsupervised setting.
We use a combination of self-consistency and cross-consistency constraints to learn pose and shape space from registered meshes.
We demonstrate the usefulness of learned representations through a number of tasks including pose transfer and shape retrieval.
arXiv Detail & Related papers (2020-07-22T11:00:27Z)
- NiLBS: Neural Inverse Linear Blend Skinning [59.22647012489496]
We introduce a method to invert the deformations undergone via traditional skinning techniques via a neural network parameterized by pose.
The ability to invert these deformations allows values (e.g., distance function, signed distance function, occupancy) to be pre-computed at rest pose, and then efficiently queried when the character is deformed.
arXiv Detail & Related papers (2020-04-06T20:46:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.