Point Cloud Diffusion Models for Automatic Implant Generation
- URL: http://arxiv.org/abs/2303.08061v2
- Date: Mon, 10 Jul 2023 16:30:19 GMT
- Title: Point Cloud Diffusion Models for Automatic Implant Generation
- Authors: Paul Friedrich, Julia Wolleb, Florentin Bieder, Florian M. Thieringer
and Philippe C. Cattin
- Abstract summary: We propose a novel approach for implant generation based on a combination of 3D point cloud diffusion models and voxelization networks.
We evaluate our method on the SkullBreak and SkullFix datasets, generating high-quality implants and achieving competitive evaluation scores.
- Score: 0.4499833362998487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in 3D printing of biocompatible materials make patient-specific
implants increasingly popular. The design of these implants is, however, still
a tedious and largely manual process. Existing approaches to automate implant
generation are mainly based on 3D U-Net architectures on downsampled or
patch-wise data, which can result in a loss of detail or contextual
information. Following the recent success of Diffusion Probabilistic Models, we
propose a novel approach for implant generation based on a combination of 3D
point cloud diffusion models and voxelization networks. Due to the stochastic
sampling process in our diffusion model, we can propose an ensemble of
different implants per defect, from which the physicians can choose the most
suitable one. We evaluate our method on the SkullBreak and SkullFix datasets,
generating high-quality implants and achieving competitive evaluation scores.
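The ensemble idea in the abstract — stochastic reverse-diffusion sampling yields a different candidate implant per run — can be sketched with a toy sampler. Everything below is illustrative: the "denoiser" is a hypothetical stand-in that pulls points toward a unit sphere, not the trained point cloud diffusion network from the paper, and the step schedule is made up.

```python
import numpy as np

def sample_implant_points(n_points=256, steps=50, rng=None):
    """Toy reverse-diffusion sampler: start from Gaussian noise and
    iteratively denoise toward a point cloud. The 'denoiser' here is a
    hypothetical stand-in (projection onto the unit sphere), not the
    trained network from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal((n_points, 3))  # x_T ~ N(0, I)
    for t in range(steps, 0, -1):
        radii = np.linalg.norm(x, axis=1, keepdims=True)
        x_hat = x / np.maximum(radii, 1e-8)  # "predicted" clean points
        alpha = t / steps
        x = alpha * x + (1 - alpha) * x_hat  # blend toward the prediction
        if t > 1:  # stochastic term: each run yields a different sample
            x = x + 0.02 * np.sqrt(alpha) * rng.standard_normal(x.shape)
    return x

# Ensemble: different seeds give different candidate implants for the
# same defect, from which a physician could pick the most suitable one.
ensemble = [sample_implant_points(rng=np.random.default_rng(s)) for s in range(3)]
```

In the paper's pipeline a voxelization network would then convert each sampled point cloud back to a voxel/mesh representation; that stage is omitted here.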
Related papers
- Improving Deep Learning-based Automatic Cranial Defect Reconstruction by Heavy Data Augmentation: From Image Registration to Latent Diffusion Models [0.2911706166691895]
The work is a considerable contribution to the field of artificial intelligence in the automatic modeling of personalized cranial implants.
We show that the use of heavy data augmentation significantly increases both the quantitative and qualitative outcomes.
We also show that the synthetically augmented network successfully reconstructs real clinical defects.
arXiv Detail & Related papers (2024-06-10T15:34:23Z)
- Zero123-6D: Zero-shot Novel View Synthesis for RGB Category-level 6D Pose Estimation [66.3814684757376]
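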
This work presents Zero123-6D, the first work to demonstrate the utility of Diffusion Model-based novel-view-synthesizers in enhancing RGB 6D pose estimation at category-level.
The outlined method reduces data requirements, removes the need for depth information in the zero-shot category-level 6D pose estimation task, and improves performance, as demonstrated quantitatively through experiments on the CO3D dataset.
arXiv Detail & Related papers (2024-03-21T10:38:18Z)
- Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, referred to as the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z)
- Bayesian Diffusion Models for 3D Shape Reconstruction [54.69889488052155]
We present a prediction algorithm that performs effective Bayesian inference by tightly coupling the top-down (prior) information with the bottom-up (data-driven) procedure.
We show the effectiveness of BDM on the 3D shape reconstruction task.
arXiv Detail & Related papers (2024-03-11T17:55:53Z)
- 3DTopia: Large Text-to-3D Generation Model with Hybrid Diffusion Priors [85.11117452560882]
We present a two-stage text-to-3D generation system, namely 3DTopia, which generates high-quality general 3D assets within 5 minutes using hybrid diffusion priors.
The first stage samples from a 3D diffusion prior directly learned from 3D data. Specifically, it is powered by a text-conditioned tri-plane latent diffusion model, which quickly generates coarse 3D samples for fast prototyping.
The second stage utilizes 2D diffusion priors to further refine the texture of coarse 3D models from the first stage. The refinement consists of both latent and pixel space optimization for high-quality texture generation.
arXiv Detail & Related papers (2024-03-04T17:26:28Z)
- Learn to Optimize Denoising Scores for 3D Generation: A Unified and Improved Diffusion Prior on NeRF and 3D Gaussian Splatting [60.393072253444934]
We propose a unified framework aimed at enhancing the diffusion priors for 3D generation tasks.
We identify a divergence between the diffusion priors and the training procedures of diffusion models that substantially impairs the quality of 3D generation.
arXiv Detail & Related papers (2023-12-08T03:55:34Z)
- TCSloT: Text Guided 3D Context and Slope Aware Triple Network for Dental Implant Position Prediction [27.020346431680355]
In implant prosthesis treatment, a surgical guide is used to ensure accurate implant placement.
Deep neural networks have been proposed to assist dentists in locating the implant position.
In this paper, we design a Text Guided 3D Context and Slope Aware Triple Network (TCSloT).
arXiv Detail & Related papers (2023-08-10T05:51:21Z)
- ImplantFormer: Vision Transformer based Implant Position Regression Using Dental CBCT Data [27.020346431680355]
Implant prosthesis is the most appropriate treatment for dentition defects or dentition loss, and it usually involves designing a surgical guide to determine the implant position.
In this paper, a transformer-based Implant Position Regression Network, ImplantFormer, is proposed to automatically predict the implant position based on the oral CBCT data.
We creatively propose to predict the implant position using the 2D axial view of the tooth crown area and fit a centerline of the implant to obtain the actual implant position at the tooth root.
arXiv Detail & Related papers (2022-10-29T02:31:27Z)
- Deep Learning-based Framework for Automatic Cranial Defect Reconstruction and Implant Modeling [0.2020478014317493]
The goal of this work is to propose a robust, fast, and fully automatic method for personalized cranial defect reconstruction and implant modeling.
We propose a two-step deep learning-based method using a modified U-Net architecture to perform the defect reconstruction.
We then propose a dedicated iterative procedure to improve the implant geometry, followed by automatic generation of models ready for 3-D printing.
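A minimal voxel sketch of the implant-modeling idea, assuming the implant is obtained by subtracting the defective skull from the reconstructed complete one. The geometry below is synthetic, and the ground-truth shell stands in for the U-Net's reconstruction output; the paper's iterative geometry refinement and 3-D printing export are not modeled here.

```python
import numpy as np

# Toy voxel grids: a spherical shell stands in for a skull.
shape = (32, 32, 32)
zz, yy, xx = np.indices(shape)
r = np.sqrt((xx - 16)**2 + (yy - 16)**2 + (zz - 16)**2)
complete = (r >= 10) & (r <= 13)       # "complete skull" shell
defect_region = zz > 24                # carve out a synthetic cranial defect
defective = complete & ~defect_region  # patient scan with missing bone
reconstructed = complete               # stand-in for the U-Net prediction
implant = reconstructed & ~defective   # implant fills exactly the defect
```

The boolean difference yields only the missing bone, which is why the reconstruction step must predict the full skull accurately before the implant geometry can be refined.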
arXiv Detail & Related papers (2022-04-13T11:33:26Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- An Online Platform for Automatic Skull Defect Restoration and Cranial Implant Design [0.5551220224568872]
The system automatically restores the missing part of a skull and generates the desired implant.
The generated implant can be downloaded in the STereoLithography (.stl) format directly via the browser interface of the system.
The implant model can then be sent to a 3D printer for on-site implant manufacturing.
arXiv Detail & Related papers (2020-06-01T14:41:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.