Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane
- URL: http://arxiv.org/abs/2307.01957v1
- Date: Tue, 4 Jul 2023 23:28:01 GMT
- Title: Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane
- Authors: Kun Han, Shanlin Sun, Xiaohui Xie
- Abstract summary: HNDF is a method that implicitly learns the underlying representation and decomposes intricate dense correspondences into explicitly axis-aligned triplane features.
Unlike conventional approaches that directly generate new 3D shapes, we explore the idea of shape generation with deformed template shape via diffeomorphic flows.
- Score: 16.684276798449115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Implicit Functions (DIFs) have gained popularity in 3D computer vision
due to their compactness and continuous representation capabilities. However,
addressing dense correspondences and semantic relationships across DIF-encoded
shapes remains a critical challenge, limiting their applications in texture
transfer and shape analysis. Moreover, recent endeavors in 3D shape generation
using DIFs often neglect correspondence and topology preservation. This paper
presents HNDF (Hybrid Neural Diffeomorphic Flow), a method that implicitly
learns the underlying representation and decomposes intricate dense
correspondences into explicitly axis-aligned triplane features. To avoid
suboptimal representations trapped in local minima, we propose hybrid
supervision that captures both local and global correspondences. Unlike
conventional approaches that directly generate new 3D shapes, we further
explore the idea of shape generation with deformed template shape via
diffeomorphic flows, where the deformation is encoded by the generated triplane
features. Leveraging a pre-existing 2D diffusion model, we produce high-quality
and diverse 3D diffeomorphic flows through the generated triplane features,
ensuring topological consistency with the template shape. Extensive experiments
on medical image organ segmentation datasets evaluate the effectiveness of HNDF
in 3D shape representation and generation.
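The two mechanisms the abstract names — querying explicitly axis-aligned triplane features and deforming a template shape through a diffeomorphic flow — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the feature resolution, the linear velocity decoder `W`, and the forward-Euler integrator are all illustrative stand-ins.

```python
import numpy as np

def sample_plane(plane, u, v):
    """Bilinearly sample a (C, R, R) feature plane at normalized coords in [-1, 1]."""
    C, R, _ = plane.shape
    x = (u + 1.0) * 0.5 * (R - 1)  # map [-1, 1] -> [0, R-1]
    y = (v + 1.0) * 0.5 * (R - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[:, y0, x0]
            + wx * (1 - wy) * plane[:, y0, x1]
            + (1 - wx) * wy * plane[:, y1, x0]
            + wx * wy * plane[:, y1, x1])

def triplane_features(planes, p):
    """Project point p onto the three axis-aligned planes and sum the features."""
    x, y, z = p
    return (sample_plane(planes["xy"], x, y)
            + sample_plane(planes["xz"], x, z)
            + sample_plane(planes["yz"], y, z))

def velocity(planes, W, p):
    """Toy decoder: a linear map from triplane features to a 3D velocity."""
    return W @ triplane_features(planes, p)

def deform(planes, W, p, steps=16, dt=1.0 / 16):
    """Integrate the flow ODE dp/dt = v(p) with forward Euler.
    Small steps of a smooth velocity field keep the map invertible
    (diffeomorphic): integrating with -dt approximately recovers the input."""
    p = np.asarray(p, dtype=float)
    for _ in range(steps):
        p = p + dt * velocity(planes, W, p)
    return p

rng = np.random.default_rng(0)
C, R = 8, 32
planes = {k: 0.05 * rng.standard_normal((C, R, R)) for k in ("xy", "xz", "yz")}
W = 0.1 * rng.standard_normal((3, C))

p0 = np.array([0.2, -0.1, 0.3])  # a template-surface point
p1 = deform(planes, W, p0)       # its position on the deformed shape
```

Because the deformation is a flow rather than a direct prediction of the new shape, every template point maps to a unique deformed point, which is what preserves the dense correspondence and topology the abstract emphasizes.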
Related papers
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Topology-Aware Latent Diffusion for 3D Shape Generation [20.358373670117537]
We introduce a new generative model that combines latent diffusion with persistent homology to create 3D shapes with high diversity.
Our method involves representing 3D shapes as implicit fields, then employing persistent homology to extract topological features.
arXiv Detail & Related papers (2024-01-31T05:13:53Z)
- Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We further jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
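The coarse/detail wavelet split described in this summary (and the preceding related paper) can be illustrated with a single-level 3D Haar transform of a truncated signed distance volume. Haar is a simple stand-in for the multi-scale biorthogonal wavelets the papers actually use; the grid size and sphere TSDF below are illustrative.

```python
import numpy as np

def fwd(a, axis):
    """One Haar analysis step along `axis`: average/difference of even-odd pairs."""
    ev = [slice(None)] * a.ndim
    od = [slice(None)] * a.ndim
    ev[axis], od[axis] = slice(0, None, 2), slice(1, None, 2)
    e, o = a[tuple(ev)], a[tuple(od)]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def inv(lo, hi, axis):
    """Exact inverse of `fwd`: re-interleave the even/odd samples."""
    shape = list(lo.shape)
    shape[axis] *= 2
    out = np.empty(shape)
    ev = [slice(None)] * lo.ndim
    od = [slice(None)] * lo.ndim
    ev[axis], od[axis] = slice(0, None, 2), slice(1, None, 2)
    out[tuple(ev)] = (lo + hi) / np.sqrt(2)
    out[tuple(od)] = (lo - hi) / np.sqrt(2)
    return out

def wavelet_decompose(vol):
    """Split a (2N)^3 TSDF volume into a coarse N^3 volume + 7 detail subbands."""
    l, h = fwd(vol, 0)
    bands = {}
    for n0, b0 in (("l", l), ("h", h)):
        lo1, hi1 = fwd(b0, 1)
        for n1, b1 in ((n0 + "l", lo1), (n0 + "h", hi1)):
            lo2, hi2 = fwd(b1, 2)
            bands[n1 + "l"], bands[n1 + "h"] = lo2, hi2
    return bands.pop("lll"), bands  # coarse approximation, detail coefficients

def wavelet_reconstruct(coarse, details):
    """Invert the decomposition axis by axis (2, then 1, then 0)."""
    bands = dict(details, lll=coarse)
    lvl1 = {a + b: inv(bands[a + b + "l"], bands[a + b + "h"], 2)
            for a in ("l", "h") for b in ("l", "h")}
    lvl0 = {a: inv(lvl1[a + "l"], lvl1[a + "h"], 1) for a in ("l", "h")}
    return inv(lvl0["l"], lvl0["h"], 0)

# Toy TSDF of a sphere on a 16^3 grid, truncated to [-0.1, 0.1].
g = (np.indices((16, 16, 16)) - 7.5) / 8.0
tsdf = np.clip(np.sqrt((g ** 2).sum(0)) - 0.6, -0.1, 0.1)
coarse, details = wavelet_decompose(tsdf)
recon = wavelet_reconstruct(coarse, details)
```

Generating in this domain means a diffusion model only has to produce the compact coarse (and optionally detail) coefficient volumes, from which the full-resolution implicit field is reconstructed.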
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- Topology-Preserving Shape Reconstruction and Registration via Neural Diffeomorphic Flow [22.1959666473906]
Deep Implicit Functions (DIFs) represent 3D geometry with continuous signed distance functions learned through deep neural nets.
We propose a new model called Neural Diffeomorphic Flow (NDF) to learn deep implicit shape templates.
NDF achieves consistently state-of-the-art organ shape reconstruction and registration results in both accuracy and quality.
arXiv Detail & Related papers (2022-03-16T14:39:11Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate DIFs as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
arXiv Detail & Related papers (2020-11-30T06:01:49Z)
- Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence [30.849927968528238]
We propose a novel Deformed Implicit Field representation for modeling 3D shapes of a category.
Our neural network, dubbed DIF-Net, jointly learns a shape latent space and these fields for 3D objects belonging to a category.
Experiments show that DIF-Net not only produces high-fidelity 3D shapes but also builds high-quality dense correspondences across different shapes.
arXiv Detail & Related papers (2020-11-27T10:45:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.