Implicit Shape Modeling for Anatomical Structure Refinement of
Volumetric Medical Images
- URL: http://arxiv.org/abs/2312.06164v2
- Date: Sat, 6 Jan 2024 13:35:07 GMT
- Title: Implicit Shape Modeling for Anatomical Structure Refinement of
Volumetric Medical Images
- Authors: Minghui Zhang, Hanxiao Zhang, Xin You, Guang-Zhong Yang, Yun Gu
- Abstract summary: We propose a unified framework for 3D shape modelling and segmentation refinement based on implicit neural networks.
For improved shape representation, implicit shape constraints are used for both instances and latent templates.
Experiments on validation datasets involving liver, pancreas and lung segmentation demonstrate the superiority of our approach.
- Score: 29.894934602946567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shape modeling of volumetric data is essential for medical image analysis and
computer-aided intervention. In practice, automated shape reconstruction cannot
always achieve satisfactory results due to limited image resolution and a lack
of sufficiently detailed shape priors used as constraints. In this paper, a
unified framework is proposed for 3D shape modelling and segmentation
refinement based on implicit neural networks. To learn a sharable shape prior
from different instances within the same category during training, physical
details of volumetric data are first used to construct the Physical-Informed
Continuous Coordinate Transform (PICCT) for implicit shape modeling. For
improved shape representation, implicit shape constraints based on Signed
Distance Function (SDF) are used for both instances and latent templates. For
inference, a Template Interaction Module (TIM) is proposed to refine 3D shapes
produced by Convolutional Neural Networks (CNNs) via deforming deep implicit
templates with latent codes. Experimental results on validation datasets
involving liver, pancreas and lung segmentation demonstrate the superiority of
our approach in shape refinement and reconstruction. The Chamfer Distance/Earth
Mover's Distance achieved by the proposed method are 0.232/0.087 for the Liver
dataset, 0.128/0.069 for the Pancreas dataset, and 0.417/0.100 for the Lung
Lobe dataset, respectively.
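The abstract's Template Interaction Module (TIM) refines CNN outputs by deforming a learned deep implicit template with per-instance latent codes, and results are reported as Chamfer Distance / Earth Mover's Distance. The sketch below illustrates only the general template-deformation pattern and a standard Chamfer Distance, not the paper's actual networks: every architecture choice (layer widths, latent size) and helper name is a hypothetical stand-in, and the evaluation protocol (point sampling, normalisation) may differ from the one behind the reported numbers.

```python
import torch
import torch.nn as nn

class DeformNet(nn.Module):
    """Warps query points conditioned on a per-instance latent code.

    Hypothetical stand-in for the latent-code-driven deformation that adapts a
    shared implicit template to one subject; the paper's TIM/PICCT details are
    not reproduced here.
    """
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz, z):
        z = z.expand(xyz.shape[0], -1)                      # one latent code per shape
        return xyz + self.net(torch.cat([xyz, z], dim=-1))  # displaced coordinates

class TemplateSDF(nn.Module):
    """Shared template implicit function: 3D point -> signed distance."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        return self.net(xyz)

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    Standard definition (mean nearest-neighbour distance in both directions);
    the paper's exact evaluation protocol may differ.
    """
    d = torch.cdist(p, q)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Instance SDF = template evaluated at deformed coordinates:
#   sdf_instance(x | z) = template(deform(x, z))
deform, template = DeformNet(), TemplateSDF()
queries = torch.rand(2048, 3) * 2 - 1   # query points in [-1, 1]^3
latent = torch.zeros(1, 64)             # per-shape code, fitted at inference time
sdf_values = template(deform(queries, latent))

# Toy evaluation: Chamfer Distance between two random surface point samples.
print(chamfer_distance(torch.rand(500, 3), torch.rand(500, 3)).item())
```

At inference, fitting the latent code so the composed SDF agrees with the CNN's coarse segmentation is broadly the role the abstract assigns to TIM, though the actual optimisation and interaction scheme are specific to the paper.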
Related papers
- Canonical Pose Reconstruction from Single Depth Image for 3D Non-rigid Pose Recovery on Limited Datasets [55.84702107871358]
3D reconstruction from 2D inputs, especially for non-rigid objects like humans, presents unique challenges.
Traditional methods often struggle with non-rigid shapes, which require extensive training data to cover the entire deformation space.
This study proposes a canonical pose reconstruction model that transforms single-view depth images of deformable shapes into a canonical form.
arXiv Detail & Related papers (2025-05-23T14:58:34Z) - Attention-based Shape-Deformation Networks for Artifact-Free Geometry Reconstruction of Lumbar Spine from MR Images [1.4249943098958722]
We present UNet-DeformSA and TransDeformer: novel attention-based deep neural networks that reconstruct the geometry of the lumbar spine with high spatial accuracy and mesh correspondence across patients.
Experimental results show that our networks generate artifact-free geometry outputs, and the variant of TransDeformer can predict the errors of a reconstructed geometry.
arXiv Detail & Related papers (2024-03-30T03:23:52Z) - Deep Medial Voxels: Learned Medial Axis Approximations for Anatomical Shape Modeling [5.584193645582203]
We introduce deep medial voxels, a semi-implicit representation that faithfully approximates the topological skeleton from imaging volumes.
Our reconstruction technique shows potential for both visualization and computer simulations.
arXiv Detail & Related papers (2024-03-18T13:47:18Z) - An End-to-End Deep Learning Generative Framework for Refinable Shape
Matching and Generation [45.820901263103806]
Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs).
We develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space.
We extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability.
arXiv Detail & Related papers (2024-03-10T21:33:53Z) - Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy [0.0]
We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes.
Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection.
arXiv Detail & Related papers (2023-05-13T00:03:59Z) - A Generative Shape Compositional Framework to Synthesise Populations of
Virtual Chimaeras [52.33206865588584]
We introduce a generative shape model for complex anatomical structures, learnable from unpaired datasets.
We build virtual chimaeras from databases of whole-heart shape assemblies that each contribute samples for heart substructures.
Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity.
arXiv Detail & Related papers (2022-10-04T13:36:52Z) - Neural Template: Topology-aware Reconstruction and Disentangled
Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z) - Topology-Preserving Shape Reconstruction and Registration via Neural
Diffeomorphic Flow [22.1959666473906]
Deep Implicit Functions (DIFs) represent 3D geometry with continuous signed distance functions learned through deep neural nets.
We propose a new model called Neural Diffeomorphic Flow (NDF) to learn deep implicit shape templates.
NDF achieves consistently state-of-the-art organ shape reconstruction and registration results in both accuracy and quality; a minimal sketch of the flow-based warping idea follows after this list.
arXiv Detail & Related papers (2022-03-16T14:39:11Z) - Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate DIFs as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
arXiv Detail & Related papers (2020-11-30T06:01:49Z) - Discriminative and Generative Models for Anatomical Shape Analysis on
Point Clouds with Deep Neural Networks [3.7814216736076434]
We introduce deep neural networks for the analysis of anatomical shapes that learn a low-dimensional shape representation from the given task.
Our framework is modular and consists of several computing blocks that perform fundamental shape processing tasks.
We propose a discriminative model for disease classification and age regression, as well as a generative model for the accurate reconstruction of shapes.
arXiv Detail & Related papers (2020-10-02T07:37:40Z) - Monocular Human Pose and Shape Reconstruction using Part Differentiable
Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods, which estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
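As referenced in the Neural Diffeomorphic Flow entry above, flow-based models preserve topology by composing many small, smooth displacements. A minimal, hypothetical sketch of that idea follows; the network and the explicit Euler integration below are generic assumptions, not NDF's actual architecture or training scheme.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Hypothetical stationary velocity field v(x); NDF's real network differs."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x):
        return self.net(x)

def integrate_flow(velocity, x, steps=16):
    """Integrate dx/dt = v(x) with small explicit Euler steps.

    Composing many small, smooth displacements keeps the overall map close to
    a diffeomorphism, which is how flow-based templates preserve topology.
    """
    dt = 1.0 / steps
    for _ in range(steps):
        x = x + dt * velocity(x)
    return x

# Warp template-space query points into instance space before evaluating a
# shared template SDF, so every reconstructed instance inherits the
# template's topology.
points = torch.rand(2048, 3) * 2 - 1
warped = integrate_flow(VelocityField(), points)
```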