NeuForm: Adaptive Overfitting for Neural Shape Editing
- URL: http://arxiv.org/abs/2207.08890v1
- Date: Mon, 18 Jul 2022 19:00:14 GMT
- Title: NeuForm: Adaptive Overfitting for Neural Shape Editing
- Authors: Connor Z. Lin, Niloy J. Mitra, Gordon Wetzstein, Leonidas Guibas, Paul
Guerrero
- Abstract summary: We propose NEUFORM to combine the advantages of both overfitted and generalizable representations by adaptively using the one most appropriate for each shape region.
We demonstrate edits that successfully reconfigure parts of human-designed shapes, such as chairs, tables, and lamps.
We compare with two state-of-the-art competitors and demonstrate clear improvements in terms of plausibility and fidelity of the resultant edits.
- Score: 67.16151288720677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural representations are popular for representing shapes, as they can be
learned from sensor data and used for data cleanup, model completion, shape
editing, and shape synthesis. Current neural representations can be categorized
as either overfitting to a single object instance, or representing a collection
of objects. However, neither allows accurate editing of neural scene
representations: on the one hand, methods that overfit objects achieve highly
accurate reconstructions, but do not generalize to unseen object configurations
and thus cannot support editing; on the other hand, methods that represent a
family of objects with variations do generalize but produce only approximate
reconstructions. We propose NEUFORM to combine the advantages of both
overfitted and generalizable representations by adaptively using the one most
appropriate for each shape region: the overfitted representation where reliable
data is available, and the generalizable representation everywhere else. We
achieve this with a carefully designed architecture and an approach that blends
the network weights of the two representations, avoiding seams and other
artifacts. We demonstrate edits that successfully reconfigure parts of
human-designed shapes, such as chairs, tables, and lamps, while preserving
semantic integrity and the accuracy of an overfitted shape representation. We
compare with two state-of-the-art competitors and demonstrate clear
improvements in terms of plausibility and fidelity of the resultant edits.
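The abstract's core mechanism, blending the network weights of an overfitted and a generalizable representation per region, can be illustrated with a minimal sketch. Everything below (layer sizes, the scalar `alpha`, the toy reliability heuristic) is an illustrative placeholder, not the actual NeuForm architecture: two small MLPs with identical shapes have their weights linearly interpolated by a reliability score before evaluation.

```python
import numpy as np

def mlp_forward(x, weights):
    """Evaluate a small MLP given as a list of (W, b) pairs, ReLU hidden layers."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)
    return h

def blend_weights(w_overfit, w_general, alpha):
    """Linearly interpolate the two weight sets; alpha=1 means fully overfitted."""
    return [(alpha * Wo + (1 - alpha) * Wg, alpha * bo + (1 - alpha) * bg)
            for (Wo, bo), (Wg, bg) in zip(w_overfit, w_general)]

rng = np.random.default_rng(0)
dims = [3, 16, 1]  # 3D point -> scalar shape value (e.g. an SDF sample)
make = lambda: [(rng.normal(size=(dims[i], dims[i + 1])) * 0.1,
                 np.zeros(dims[i + 1])) for i in range(len(dims) - 1)]
w_overfit, w_general = make(), make()  # random stand-ins for trained networks

pts = rng.normal(size=(5, 3))
# Placeholder reliability score in [0, 1]; NeuForm derives this per region
# from where reliable data is available, here it is just a toy heuristic.
alpha = float(np.clip(1.0 - np.abs(pts[:, 0]).mean(), 0.0, 1.0))
sdf = mlp_forward(pts, blend_weights(w_overfit, w_general, alpha))
```

Blending the weights (rather than the two networks' outputs) is what the abstract credits with avoiding seams: at `alpha = 1` the blended network is exactly the overfitted one, at `alpha = 0` exactly the generalizable one, and intermediate values interpolate smoothly.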
Related papers
- DeFormer: Integrating Transformers with Deformable Models for 3D Shape
Abstraction from a Single Image [31.154786931081087]
We propose a novel bi-channel Transformer architecture, integrated with parameterized deformable models, to simultaneously estimate the global and local deformations of primitives.
DeFormer achieves better reconstruction accuracy over the state-of-the-art, and visualizes with consistent semantic correspondences for improved interpretability.
arXiv Detail & Related papers (2023-09-22T02:46:43Z)
- Self-supervised Learning of Implicit Shape Representation with Dense
Correspondence for Deformable Objects [26.102490905989338]
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method does not require the priors of skeleton and skinning weight, and only requires a collection of shapes represented in signed distance fields.
Our model can represent shapes with large deformations and supports two typical applications: texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z)
- ANISE: Assembly-based Neural Implicit Surface rEconstruction [12.745433575962842]
We present ANISE, a method that reconstructs a 3D shape from partial observations (images or sparse point clouds).
The shape is formulated as an assembly of neural implicit functions, each representing a different part instance.
We demonstrate that, when performing reconstruction by decoding part representations into implicit functions, our method achieves state-of-the-art part-aware reconstruction results from both images and sparse point clouds.
arXiv Detail & Related papers (2022-05-27T00:01:40Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
- PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape
Representations [75.42959184226702]
We present a new mid-level patch-based surface representation for object-agnostic training.
We show several applications of our new representation, including shape and partial point cloud completion.
arXiv Detail & Related papers (2020-08-04T15:34:46Z)
- DualSDF: Semantic Shape Manipulation using a Two-Level Representation [54.62411904952258]
We propose DualSDF, a representation expressing shapes at two levels of granularity, one capturing fine details and the other representing an abstracted proxy shape.
Our two-level model gives rise to a new shape manipulation technique in which a user can interactively manipulate the coarse proxy shape and see the changes instantly mirrored in the high-resolution shape.
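The two-level manipulation described above can be sketched with a toy shared-latent setup. The linear "decoders" below are hypothetical stand-ins for DualSDF's coarse and fine networks; the point is only the mechanism: a user edits the coarse proxy, the shared latent code is re-fit to match that edit, and the fine shape changes automatically because it decodes from the same latent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical *linear* decoders sharing one latent code z (not real DualSDF).
D_coarse = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [0.5, 0.0],
                     [0.0, 0.5]])      # latent (4) -> coarse proxy params (2)
D_fine = rng.normal(size=(4, 8))       # latent (4) -> fine shape features (8)

z = rng.normal(size=4)
target = D_coarse.T @ z
target[0] += 1.0  # the user drags one coarse proxy parameter

fine_before = D_fine.T @ z
# Re-fit the shared latent to the edited proxy by gradient descent on
# ||D_coarse^T z - target||^2; the fine output follows automatically.
for _ in range(200):
    z = z - 0.05 * 2.0 * D_coarse @ (D_coarse.T @ z - target)
fine_after = D_fine.T @ z  # differs from fine_before: the edit propagated
```

In DualSDF the decoders are neural networks and the coarse level is a set of primitives, but the coupling through a single shared latent code is the same design choice.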
arXiv Detail & Related papers (2020-04-06T17:59:15Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable
Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body part segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
- Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
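The encoder/decoder split described in the Convolutional Occupancy Networks summary can be sketched as follows. All weights here are random placeholders rather than a trained model, and the box filter stands in for a learned convolutional encoder: points are scattered into a feature grid, the grid is convolved, and occupancy at arbitrary query locations is decoded from the locally interpolated feature.

```python
import numpy as np

rng = np.random.default_rng(0)
RES = 8  # resolution of the toy feature grid

def encode(points):
    """Scatter points in [-1, 1]^3 into a count grid, then smooth with a
    separable 3-tap box filter as a stand-in for a conv encoder."""
    grid = np.zeros((RES, RES, RES))
    idx = np.clip(((points + 1.0) / 2.0 * RES).astype(int), 0, RES - 1)
    for i, j, k in idx:
        grid[i, j, k] += 1.0
    for axis in range(3):  # one "conv layer" per axis
        grid = (np.roll(grid, 1, axis) + grid + np.roll(grid, -1, axis)) / 3.0
    return grid

def decode(grid, queries, W1, W2):
    """Look up each query's local grid feature and run a tiny MLP on
    (query coords, feature) to predict occupancy in (0, 1)."""
    idx = np.clip(((queries + 1.0) / 2.0 * RES).astype(int), 0, RES - 1)
    feats = grid[idx[:, 0], idx[:, 1], idx[:, 2]][:, None]
    x = np.concatenate([queries, feats], axis=1)
    h = np.maximum(x @ W1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))

cloud = rng.uniform(-0.5, 0.5, size=(200, 3))   # observed input points
queries = rng.uniform(-1.0, 1.0, size=(10, 3))  # arbitrary 3D query locations
W1, W2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 1))
occ = decode(encode(cloud), queries, W1, W2)
```

Conditioning the decoder on features from a spatial grid, rather than on a single global latent vector, is what lets this family of models scale from single objects to large scenes.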
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.