SPAGHETTI: Editing Implicit Shapes Through Part Aware Generation
- URL: http://arxiv.org/abs/2201.13168v1
- Date: Mon, 31 Jan 2022 12:31:41 GMT
- Title: SPAGHETTI: Editing Implicit Shapes Through Part Aware Generation
- Authors: Amir Hertz, Or Perel, Raja Giryes, Olga Sorkine-Hornung and Daniel
Cohen-Or
- Abstract summary: We introduce a method for $\mathbf{E}$diting $\mathbf{I}$mplicit $\mathbf{S}$hapes $\mathbf{T}$hrough $\mathbf{P}$art $\mathbf{A}$ware $\mathbf{G}$enera$\mathbf{T}$ion, permuted in short as SPAGHETTI.
Our architecture allows for manipulation of implicit shapes by means of transforming, interpolating and combining shape segments together.
- Score: 85.09014441196692
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural implicit fields are quickly emerging as an attractive representation
for learning based techniques. However, adopting them for 3D shape modeling and
editing is challenging. We introduce a method for $\mathbf{E}$diting
$\mathbf{I}$mplicit $\mathbf{S}$hapes $\mathbf{T}$hrough $\mathbf{P}$art
$\mathbf{A}$ware $\mathbf{G}$enera$\mathbf{T}$ion, permuted in short as
SPAGHETTI. Our architecture allows for manipulation of implicit shapes by means
of transforming, interpolating and combining shape segments together, without
requiring explicit part supervision. SPAGHETTI disentangles shape part
representation into extrinsic and intrinsic geometric information. This
characteristic enables a generative framework with part-level control. The
modeling capabilities of SPAGHETTI are demonstrated using an interactive
graphical interface, where users can directly edit neural implicit shapes.
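To make the part-level control described in the abstract concrete, below is a minimal, schematic sketch (not the authors' implementation) of a part-aware occupancy decoder: each part carries an extrinsic code (here simply a translation and a per-axis scale) and an intrinsic latent code, and the shape is the union of the per-part fields, so one part can be moved without touching the others. The layer sizes and the affine-only extrinsic parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PartAwareOccupancy(nn.Module):
    """Schematic part-aware occupancy field (illustration only, not SPAGHETTI itself).

    Each part has an extrinsic code (translation + log-scale) and an intrinsic
    latent code; the full shape is the union (max) of the per-part fields.
    """
    def __init__(self, n_parts=8, intrinsic_dim=64, hidden=128):
        super().__init__()
        self.translation = nn.Parameter(torch.zeros(n_parts, 3))        # extrinsic
        self.log_scale = nn.Parameter(torch.zeros(n_parts, 3))          # extrinsic
        self.intrinsic = nn.Parameter(0.01 * torch.randn(n_parts, intrinsic_dim))
        self.mlp = nn.Sequential(
            nn.Linear(3 + intrinsic_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):                       # xyz: (N, 3) query points
        # Express each query point in every part's local frame (extrinsic information).
        local = (xyz[:, None, :] - self.translation) * torch.exp(-self.log_scale)
        codes = self.intrinsic.expand(xyz.shape[0], -1, -1)              # (N, P, D)
        logits = self.mlp(torch.cat([local, codes], dim=-1)).squeeze(-1)  # (N, P)
        return logits.max(dim=1).values           # union over parts

model = PartAwareOccupancy()
points = 2 * torch.rand(1024, 3) - 1
occupancy = torch.sigmoid(model(points))          # values in (0, 1)

# Part-level edit: translate part 0 while leaving its intrinsic geometry intact.
with torch.no_grad():
    model.translation[0] += torch.tensor([0.2, 0.0, 0.0])
```

In this toy setup, transforming a part amounts to editing its extrinsic parameters, while mixing shapes would amount to exchanging intrinsic codes between models; the disentanglement is what makes both edits local to a single part.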
Related papers
- Zero-Shot 3D Shape Correspondence [67.18775201037732]
We propose a novel zero-shot approach to computing correspondences between 3D shapes.
We exploit the exceptional reasoning capabilities of recent foundation models in language and vision.
Our approach produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes.
arXiv Detail & Related papers (2023-06-05T21:14:23Z) - NAISR: A 3D Neural Additive Model for Interpretable Shape Representation [10.284366517948929]
We propose a 3D Neural Additive Model for Interpretable Shape Representation ($\texttt{NAISR}$) for scientific shape discovery.
Our approach captures shape population trends and allows for patient-specific predictions through shape transfer.
Our experiments demonstrate that $\textit{Starman}$ achieves excellent shape reconstruction performance while retaining interpretability.
arXiv Detail & Related papers (2023-03-16T11:18:04Z) - Unsupervised Discovery of Semantic Latent Directions in Diffusion Models [6.107812768939554]
We present an unsupervised method to discover interpretable editing directions for the latent variables $\mathbf{x}_t \in \mathcal{X}$ of DMs.
The discovered semantic latent directions mostly yield disentangled attribute changes, and they are globally consistent across different samples.
arXiv Detail & Related papers (2023-02-24T05:54:34Z) - NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z) - Latent Partition Implicit with Surface Codes for 3D Representation [54.966603013209685]
We introduce a novel implicit representation to represent a single 3D shape as a set of parts in the latent space.
We name our method Latent Partition Implicit (LPI) for its ability to cast global shape modeling into multiple local part modeling problems.
arXiv Detail & Related papers (2022-07-18T14:24:46Z) - Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048]
We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
Compared with prior Lipschitz-regularized networks, ours is computationally fast and can be implemented in four lines of code (a generic sketch of such a penalty appears after this list).
arXiv Detail & Related papers (2022-02-16T21:24:54Z) - DeepCurrents: Learning Implicit Representations of Shapes with
Boundaries [25.317812435426216]
We propose a hybrid shape representation that combines explicit boundary curves with implicit learned interiors.
We further demonstrate learning families of shapes jointly parameterized by boundary curves and latent codes.
arXiv Detail & Related papers (2021-11-17T20:34:20Z) - Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D
Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z) - Learners' languages [0.0]
The authors show that the fundamental elements of deep learning -- gradient descent and backpropagation -- can be conceptualized as a strong monoidal functor.
We show that a map $A \to B$ in $\mathbf{Para}(\mathbf{SLens})$ has a natural interpretation in terms of dynamical systems (a minimal lens-composition sketch follows this list).
arXiv Detail & Related papers (2021-03-01T18:34:00Z)
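For the Lipschitz-regularization entry above, here is a minimal sketch of one generic way to penalize a coordinate network's smoothness: regularize an upper bound on its Lipschitz constant, taken as the product of per-layer operator norms of the weight matrices. The architecture, the choice of infinity-operator norm, and the weight 1e-6 are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

def lipschitz_bound(mlp: nn.Sequential) -> torch.Tensor:
    """Upper bound on the network's Lipschitz constant: product of per-layer
    infinity-operator norms (max absolute row sum of each weight matrix).
    ReLU is 1-Lipschitz, so only the linear layers contribute."""
    bound = torch.ones(())
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            bound = bound * layer.weight.abs().sum(dim=1).max()
    return bound

field = nn.Sequential(nn.Linear(3, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 1))

xyz = torch.rand(4096, 3)
task_loss = field(xyz).pow(2).mean()              # stand-in for the actual fitting loss
loss = task_loss + 1e-6 * lipschitz_bound(field)  # smoothness penalty
loss.backward()
```

Because the penalty only touches the weight matrices, it adds a negligible cost per training step, which is consistent with the entry's claim of computational efficiency.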
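For the "Learners' languages" entry, the following toy sketch illustrates the lens view of backpropagation under the assumption that a parametric lens can be modeled as a forward map paired with a reverse map sending output gradients to parameter and input gradients; composing two such lenses then reproduces the chain rule. This is an illustration of the general idea, not the paper's categorical construction.

```python
import numpy as np

class Lens:
    """A parametric lens: forward(params, x) -> y and
    backward(params, x, dy) -> (dparams, dx)."""
    def __init__(self, forward, backward):
        self.forward = forward
        self.backward = backward

    def compose(self, other):
        """Sequential composition: self first, then other."""
        def forward(params, x):
            p1, p2 = params
            return other.forward(p2, self.forward(p1, x))
        def backward(params, x, dy):
            p1, p2 = params
            h = self.forward(p1, x)            # replay the intermediate value
            dp2, dh = other.backward(p2, h, dy)
            dp1, dx = self.backward(p1, x, dh)
            return (dp1, dp2), dx
        return Lens(forward, backward)

# A linear layer y = W @ x as a lens.
linear = Lens(
    forward=lambda W, x: W @ x,
    backward=lambda W, x, dy: (np.outer(dy, x), W.T @ dy),
)

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(2, 3))
x, dy = rng.normal(size=4), rng.normal(size=2)

net = linear.compose(linear)                   # two-layer map W2 @ W1 @ x
y = net.forward((W1, W2), x)
(dW1, dW2), dx = net.backward((W1, W2), x, dy)
assert np.allclose(dx, W1.T @ (W2.T @ dy))     # matches the analytic chain rule
```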
This list is automatically generated from the titles and abstracts of the papers on this site.