Point-Based Shape Representation Generation with a Correspondence-Preserving Diffusion Model
- URL: http://arxiv.org/abs/2508.03925v1
- Date: Tue, 05 Aug 2025 21:36:26 GMT
- Title: Point-Based Shape Representation Generation with a Correspondence-Preserving Diffusion Model
- Authors: Shen Zhu, Yinzhu Jin, Ifrah Zawar, P. Thomas Fletcher
- Abstract summary: We propose a diffusion model designed to generate point-based shape representations with correspondences. Using shape representation data with correspondences from Open Access Series of Imaging Studies 3 (OASIS-3), we demonstrate that our correspondence-preserving model effectively generates point-based hippocampal shape representations.
- Score: 1.5624421399300303
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a diffusion model designed to generate point-based shape representations with correspondences. Traditional statistical shape models have considered point correspondences extensively, but current deep learning methods do not take them into account, focusing instead on unordered point clouds; in particular, existing deep generative models for point clouds do not produce point correspondences between generated shapes. This work formulates a diffusion model capable of generating realistic point-based shape representations that preserve the point correspondences present in the training data. Using shape representation data with correspondences derived from the Open Access Series of Imaging Studies 3 (OASIS-3), we demonstrate that our correspondence-preserving model generates point-based hippocampal shape representations that are more realistic than those produced by existing methods. We further demonstrate applications of our generative model in downstream tasks, such as conditional generation of healthy and Alzheimer's disease (AD) subjects and prediction of morphological changes over disease progression via counterfactual generation.
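To make the core idea concrete, here is a minimal, hypothetical sketch of a DDPM-style diffusion model trained directly on ordered point sets, so that point index k refers to the same anatomical location in every generated shape. The flat MLP denoiser, linear noise schedule, and tensor sizes are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch: DDPM-style diffusion over ORDERED point sets (N, 3).
# Because each shape is treated as a fixed-length ordered vector rather than
# an unordered set, index k keeps its meaning across all generated shapes,
# which is what preserves correspondences. Hypothetical sizes/architecture.
import torch
import torch.nn as nn

N_POINTS, T = 1024, 1000                       # points per shape, diffusion steps
betas = torch.linspace(1e-4, 0.02, T)          # standard linear beta schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class Denoiser(nn.Module):
    """Predicts the noise added to a flattened, ordered point set."""
    def __init__(self, n_points: int):
        super().__init__()
        d = n_points * 3
        self.net = nn.Sequential(
            nn.Linear(d + 1, 2048), nn.SiLU(),
            nn.Linear(2048, 2048), nn.SiLU(),
            nn.Linear(2048, d),
        )
    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x_t: (B, N*3), t: (B,) integer timesteps
        t_emb = (t.float() / T).unsqueeze(-1)
        return self.net(torch.cat([x_t, t_emb], dim=-1))

def training_step(model, x0, optimizer):
    """One denoising step on a batch of ordered shapes x0 of shape (B, N, 3)."""
    B = x0.shape[0]
    x0 = x0.reshape(B, -1)
    t = torch.randint(0, T, (B,))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion
    loss = ((model(x_t, t) - noise) ** 2).mean()           # predict the noise
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

model = Denoiser(N_POINTS)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
batch = torch.randn(8, N_POINTS, 3)   # stand-in for shapes with correspondences
training_step(model, batch, opt)
```

In practice a point-wise architecture with positional (index) embeddings would replace the flat MLP; the essential choice is that the model never symmetrizes over point order, which is what keeps correspondences intact across generated samples.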
Related papers
- An End-to-End Deep Learning Generative Framework for Refinable Shape Matching and Generation [45.820901263103806]
Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs).
We develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space.
We extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability.
arXiv Detail & Related papers (2024-03-10T21:33:53Z)
- Make-A-Shape: a Ten-Million-scale 3D Shape Model [52.701745578415796]
This paper introduces Make-A-Shape, a new 3D generative model designed for efficient training on a vast scale.
We first introduce a wavelet-tree representation that compactly encodes shapes via a subband coefficient filtering scheme.
We then derive a subband-adaptive training strategy so that the model effectively learns to generate both coarse and detail wavelet coefficients.
arXiv Detail & Related papers (2024-01-20T00:21:58Z)
- Surf-D: Generating High-Quality Surfaces of Arbitrary Topologies Using Diffusion Models [83.35835521670955]
Surf-D is a novel method for generating high-quality 3D shapes as Surfaces with arbitrary topologies.
We use the Unsigned Distance Field (UDF) as our surface representation to accommodate arbitrary topologies.
We also propose a new pipeline that employs a point-based AutoEncoder to learn a compact and continuous latent space for accurately encoding the UDF.
arXiv Detail & Related papers (2023-11-28T18:56:01Z)
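As a side note on the Surf-D entry above: an unsigned distance field simply stores the distance to the nearest surface point, with no inside/outside sign, which is what lets it represent open surfaces and arbitrary topologies. A minimal sketch, approximating the UDF from a dense surface sampling with a k-d tree (the sphere example and sampling density are illustrative assumptions):

```python
# Sketch: approximate an Unsigned Distance Field (UDF) from a dense surface
# sampling. Unlike a signed distance function, no inside/outside test is
# needed, so open surfaces and arbitrary topologies are handled naturally.
import numpy as np
from scipy.spatial import cKDTree

def udf_from_surface_samples(surface_pts: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """surface_pts: (M, 3) points sampled on the surface; queries: (Q, 3)."""
    tree = cKDTree(surface_pts)
    dist, _ = tree.query(queries, k=1)    # distance to nearest surface sample
    return dist                           # UDF is non-negative by construction

# Illustrative usage: a unit sphere sampled densely (not area-uniform, which
# is fine for this sketch), UDF queried at a few random points.
theta = np.random.uniform(0, np.pi, 20000)
phi = np.random.uniform(0, 2 * np.pi, 20000)
sphere = np.stack([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)], axis=-1)
queries = np.random.uniform(-1.5, 1.5, size=(5, 3))
print(udf_from_surface_samples(sphere, queries))  # roughly |norm(x) - 1|
```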
- Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane [16.684276798449115]
HNDF is a method that implicitly learns the underlying representation and decomposes intricate dense correspondences into explicit, axis-aligned triplane features.
Unlike conventional approaches that directly generate new 3D shapes, we explore the idea of shape generation with deformed template shape via diffeomorphic flows.
arXiv Detail & Related papers (2023-07-04T23:28:01Z)
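The axis-aligned triplane features mentioned in the HNDF entry above are typically queried by projecting a 3D point onto three 2D feature planes and bilinearly sampling each one. A rough sketch of that lookup, assuming summed features and arbitrary channel count and resolution (not the paper's exact design):

```python
# Sketch: querying axis-aligned triplane features for 3D points.
# Each of the XY, XZ and YZ planes stores a (C, H, W) feature grid; a 3D query
# point is projected onto each plane, bilinearly sampled, and the three
# features are summed. Channel count and resolution are illustrative.
import torch
import torch.nn.functional as F

def sample_triplane(planes: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
    """planes: (3, C, H, W) feature planes; xyz: (P, 3) points in [-1, 1]^3.
    Returns per-point features of shape (P, C)."""
    # 2D coordinates of each point on the XY, XZ and YZ planes.
    coords = torch.stack([xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]])  # (3, P, 2)
    grid = coords.unsqueeze(2)                                  # (3, P, 1, 2)
    feats = F.grid_sample(planes, grid,
                          mode="bilinear", align_corners=True)  # (3, C, P, 1)
    return feats.squeeze(-1).sum(dim=0).transpose(0, 1)         # (P, C)

planes = torch.randn(3, 32, 64, 64)      # learned in practice; random here
points = torch.rand(10, 3) * 2 - 1       # queries in [-1, 1]^3
print(sample_triplane(planes, points).shape)  # torch.Size([10, 32])
```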
- Zero-Shot 3D Shape Correspondence [67.18775201037732]
We propose a novel zero-shot approach to computing correspondences between 3D shapes.
We exploit the exceptional reasoning capabilities of recent foundation models in language and vision.
Our approach produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes.
arXiv Detail & Related papers (2023-06-05T21:14:23Z)
- NAISR: A 3D Neural Additive Model for Interpretable Shape Representation [10.284366517948929]
We propose a 3D Neural Additive Model for Interpretable Shape Representation ($\texttt{NAISR}$) for scientific shape discovery.
Our approach captures shape population trends and allows for patient-specific predictions through shape transfer.
Our experiments demonstrate that $\textit{Starman}$ achieves excellent shape reconstruction performance while retaining interpretability.
arXiv Detail & Related papers (2023-03-16T11:18:04Z)
- Controllable Mesh Generation Through Sparse Latent Point Diffusion Models [105.83595545314334]
We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
arXiv Detail & Related papers (2023-03-14T14:25:29Z)
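Sparse latent points such as those in the entry above are commonly obtained by subsampling a dense point cloud, and farthest point sampling is the standard choice. A short sketch under that assumption (the paper's exact encoder may differ):

```python
# Sketch: farthest point sampling (FPS), the usual way to pick a small set of
# well-spread "latent" points from a dense point cloud before attaching
# features to them.
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """points: (N, 3). Returns indices of k points chosen greedily so each new
    point maximizes its squared distance to the already-selected set."""
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)
    min_dist = np.full(n, np.inf)
    selected[0] = np.random.randint(n)                # arbitrary seed point
    for i in range(1, k):
        diff = points - points[selected[i - 1]]
        min_dist = np.minimum(min_dist, np.einsum("ij,ij->i", diff, diff))
        selected[i] = int(np.argmax(min_dist))        # farthest from current set
    return selected

dense = np.random.randn(2048, 3)
latent_idx = farthest_point_sampling(dense, 16)       # e.g. 16 sparse latent points
sparse_latents = dense[latent_idx]                    # (16, 3)
```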
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in the wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
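Both wavelet-based entries above (Make-A-Shape and Neural Wavelet-domain Diffusion) rest on the same preprocessing idea: truncate a signed distance volume and decompose it with a multi-scale biorthogonal wavelet, keeping a coarse volume plus selected detail coefficients. A minimal PyWavelets sketch, with the wavelet choice, truncation value, and thresholding rule as illustrative assumptions:

```python
# Sketch: compact wavelet-domain encoding of a truncated signed distance
# function (TSDF) volume. The coarse subband plus (optionally thresholded)
# detail subbands form the representation a generative model would learn.
import numpy as np
import pywt

def tsdf_to_wavelet_coeffs(sdf: np.ndarray, trunc: float = 0.1, levels: int = 3):
    """sdf: (D, D, D) signed distance volume. Returns PyWavelets coefficients
    [coarse array, {detail subbands}, ...] ordered from coarse to fine."""
    tsdf = np.clip(sdf, -trunc, trunc)            # truncate far-field values
    return pywt.wavedecn(tsdf, wavelet="bior6.8", level=levels)

def filter_details(coeffs, keep_ratio: float = 0.05):
    """Keep only the largest-magnitude detail coefficients (a simple stand-in
    for a subband coefficient filtering scheme)."""
    filtered = [coeffs[0]]
    for level in coeffs[1:]:
        new_level = {}
        for key, band in level.items():
            thresh = np.quantile(np.abs(band), 1.0 - keep_ratio)
            new_level[key] = np.where(np.abs(band) >= thresh, band, 0.0)
        filtered.append(new_level)
    return filtered

# Round trip: decompose, filter, reconstruct an approximate TSDF.
grid = np.linspace(-1, 1, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5           # sphere of radius 0.5
coeffs = filter_details(tsdf_to_wavelet_coeffs(sdf))
recon = pywt.waverecn(coeffs, wavelet="bior6.8")  # may be slightly padded
```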
- Landmark-free Statistical Shape Modeling via Neural Flow Deformations [0.5897108307012394]
We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondence between training instances.
Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior for the distal femur and the liver.
arXiv Detail & Related papers (2022-09-14T18:17:19Z)
- Autoregressive 3D Shape Generation via Canonical Mapping [92.91282602339398]
Transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation.
In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation.
Our model can be easily extended to multi-modal shape completion as an application for conditional shape generation.
arXiv Detail & Related papers (2022-04-05T03:12:29Z)
- Functional additive regression on shape and form manifolds of planar curves [0.0]
We define shape and form as equivalence classes under translation, rotation and -- for shapes -- also scale.
We extend generalized additive regression to models for the shape/form of planar curves or landmark configurations.
arXiv Detail & Related papers (2021-09-06T17:43:32Z)
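To make the shape-versus-form distinction in the entry above concrete, one standard formalization (a sketch consistent with the summary, not necessarily the paper's exact notation) is:

```latex
% Form and shape of a planar configuration y \in \mathbb{R}^{k \times 2}
% (k landmarks or sampled curve points), written as equivalence classes.
\[
  [y]_{\mathrm{form}}  = \bigl\{\, y R + \mathbf{1}_k \gamma^\top \;:\;
      R \in SO(2),\ \gamma \in \mathbb{R}^{2} \,\bigr\},
\qquad
  [y]_{\mathrm{shape}} = \bigl\{\, \lambda\, y R + \mathbf{1}_k \gamma^\top \;:\;
      \lambda > 0,\ R \in SO(2),\ \gamma \in \mathbb{R}^{2} \,\bigr\}.
\]
% Form quotients out translation and rotation only; shape additionally
% quotients out scale, matching the definition in the summary above.
```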
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate deep implicit functions (DIFs) as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
arXiv Detail & Related papers (2020-11-30T06:01:49Z)
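The template-plus-deformation idea in the Deep Implicit Templates entry above can be written compactly as f(x, z) = T(warp(x, z)): a shared implicit template T queried at coordinates warped by a latent-conditioned deformation, so points from different shapes that map to the same template coordinate are in correspondence. A minimal sketch with illustrative network sizes and latent dimension:

```python
# Sketch: a deformed-template implicit function f(x, z) = T(warp(x, z)).
# T is a shared template SDF; warp is conditioned on a per-shape latent code z.
# Points of different shapes that warp to the same template coordinate are in
# correspondence, without any correspondence supervision.
import torch
import torch.nn as nn

class DeformedTemplateSDF(nn.Module):
    def __init__(self, latent_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.warp = nn.Sequential(            # (x, z) -> displacement of x
            nn.Linear(3 + latent_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 3),
        )
        self.template = nn.Sequential(        # shared template SDF T
            nn.Linear(3, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        """x: (B, P, 3) query points, z: (B, latent_dim) shape codes."""
        z_tiled = z.unsqueeze(1).expand(-1, x.shape[1], -1)
        x_template = x + self.warp(torch.cat([x, z_tiled], dim=-1))
        return self.template(x_template).squeeze(-1)   # predicted SDF values

model = DeformedTemplateSDF()
sdf = model(torch.rand(2, 512, 3) * 2 - 1, torch.randn(2, 128))  # (2, 512)
```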
- Discrete Point Flow Networks for Efficient Point Cloud Generation [36.03093265136374]
Generative models have proven effective at modeling 3D shapes and their statistical variations.
We introduce a latent variable model that builds on normalizing flows to generate 3D point clouds of an arbitrary size.
For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods.
arXiv Detail & Related papers (2020-07-20T14:48:00Z)
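A minimal illustration of the normalizing-flow building block behind point-cloud flow models such as the one above: an affine coupling layer applied independently to each point's coordinates, which keeps the map invertible (so exact likelihoods are available) and agnostic to the number of points. The conditioning split and layer sizes are assumptions, not the paper's architecture:

```python
# Sketch: one RealNVP-style affine coupling layer acting on 3D point coordinates.
# One coordinate is left unchanged and used to predict a scale and shift for the
# other two, giving an invertible map with a simple log-determinant.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # Condition on the x-coordinate, transform (y, z).
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(),
            nn.Linear(hidden, 4),          # 2 log-scales + 2 shifts
        )

    def forward(self, pts: torch.Tensor):
        """pts: (..., 3). Returns transformed points and log|det J| per point."""
        x, yz = pts[..., :1], pts[..., 1:]
        log_s, t = self.net(x).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)                      # keep scales well-behaved
        yz_new = yz * torch.exp(log_s) + t
        return torch.cat([x, yz_new], dim=-1), log_s.sum(dim=-1)

    def inverse(self, pts: torch.Tensor):
        x, yz = pts[..., :1], pts[..., 1:]
        log_s, t = self.net(x).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        return torch.cat([x, (yz - t) * torch.exp(-log_s)], dim=-1)

layer = AffineCoupling()
pts = torch.randn(1024, 3)                             # a point cloud of any size
out, logdet = layer(pts)
assert torch.allclose(layer.inverse(out), pts, atol=1e-5)
```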
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences.