Deep Learning-based Framework for Automatic Cranial Defect
Reconstruction and Implant Modeling
- URL: http://arxiv.org/abs/2204.06310v1
- Date: Wed, 13 Apr 2022 11:33:26 GMT
- Title: Deep Learning-based Framework for Automatic Cranial Defect
Reconstruction and Implant Modeling
- Authors: Marek Wodzinski, Mateusz Daniol, Miroslaw Socha, Daria Hemmerling,
Maciej Stanuch, Andrzej Skalski
- Abstract summary: The goal of this work is to propose a robust, fast, and fully automatic method for personalized cranial defect reconstruction and implant modeling.
We propose a two-step deep learning-based method using a modified U-Net architecture to perform the defect reconstruction.
We then propose a dedicated iterative procedure to improve the implant geometry, followed by automatic generation of models ready for 3-D printing.
- Score: 0.2020478014317493
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The goal of this work is to propose a robust, fast, and fully automatic
method for personalized cranial defect reconstruction and implant modeling.
We propose a two-step deep learning-based method using a modified U-Net
architecture to perform the defect reconstruction, and a dedicated iterative
procedure to improve the implant geometry, followed by automatic generation of
models ready for 3-D printing. We propose a cross-case augmentation based on
imperfect image registration combining cases from different datasets. We
perform ablation studies regarding different augmentation strategies and
compare them to other state-of-the-art methods.
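The cross-case augmentation idea can be sketched roughly as follows: transfer a defect mask from one case onto the complete skull of another to synthesize a new defective/ground-truth pair. This is a minimal NumPy/SciPy sketch, not the authors' implementation; a random affine-like perturbation stands in for the imperfect image registration, and all names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def cross_case_augment(complete_skull, defect_mask, rng=None):
    """Synthesize a training pair by transferring a defect mask from one
    case onto another case's complete skull.

    In the paper's pipeline the transfer uses (imperfect) image
    registration between cases; here a small random rotation and shift
    stand in for that transform (a simplification for illustration).
    """
    rng = np.random.default_rng(rng)
    # small random in-plane rotation of the defect mask (nearest-neighbor)
    angle = rng.uniform(-10, 10)
    rotated = ndimage.rotate(defect_mask.astype(float), angle,
                             axes=(0, 1), reshape=False, order=0) > 0.5
    # small random integer shift
    shift = rng.integers(-2, 3, size=3)
    moved = ndimage.shift(rotated.astype(float), shift, order=0) > 0.5
    defective = complete_skull & ~moved   # input: skull with defect removed
    target = complete_skull & moved       # ground truth: the missing part
    return defective, target
```

By construction the defective skull and the reconstruction target partition the complete skull, so any number of such synthetic pairs can be generated from a pool of complete skulls and defect shapes.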
We evaluate the method on three datasets introduced during the AutoImplant
2021 challenge, organized jointly with the MICCAI conference. We perform the
quantitative evaluation using the Dice and boundary Dice coefficients, and the
Hausdorff distance. The average Dice coefficient, boundary Dice coefficient,
and the 95th percentile of the Hausdorff distance are 0.91, 0.94, and 1.53 mm,
respectively. We perform an additional qualitative evaluation by 3-D printing
and visualization in mixed reality to confirm the implant's usefulness.
We propose a complete pipeline that enables one to create a cranial implant
model ready for 3-D printing. The described method is a greatly extended
version of the method that scored 1st place in all AutoImplant 2021 challenge
tasks. We freely release the source code, which, together with the open
datasets, makes the results fully reproducible. The automatic reconstruction of cranial
defects may enable manufacturing personalized implants in a significantly
shorter time, possibly allowing one to perform the 3-D printing process
directly during a given intervention. Moreover, we show the usability of the
defect reconstruction in mixed reality that may further reduce the surgery
time.
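For concreteness, the Dice coefficient and the 95th percentile of the Hausdorff distance reported above can be computed on binary volumes roughly as follows. This is a minimal NumPy/SciPy sketch assuming masks with a known voxel spacing in mm; it is not the challenge's official evaluation code, and the function names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary volumes."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hd95(pred: np.ndarray, gt: np.ndarray,
         spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface distances (mm for mm spacing)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # distance from every voxel to the nearest foreground voxel of the other mask
    dt_gt = ndimage.distance_transform_edt(~gt, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred, sampling=spacing)
    # surfaces: foreground voxels that erosion removes (have a background neighbor)
    surf_pred = pred & ~ndimage.binary_erosion(pred)
    surf_gt = gt & ~ndimage.binary_erosion(gt)
    dists = np.concatenate([dt_gt[surf_pred], dt_pred[surf_gt]])
    return float(np.percentile(dists, 95))
```

The boundary Dice coefficient used in the paper restricts the overlap computation to a band around the defect boundary, which the plain Dice sketch above does not capture.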
Related papers
- Automatic Cranial Defect Reconstruction with Self-Supervised Deep Deformable Masked Autoencoders [0.12301374769426145]
Thousands of people suffer from cranial injuries every year. They require personalized implants that need to be designed and manufactured before the reconstruction surgery.
The problem can be formulated as volumetric shape completion and solved by deep neural networks dedicated to supervised image segmentation.
In our work, we propose an alternative and simple approach to use a self-supervised masked autoencoder to solve the problem.
arXiv Detail & Related papers (2024-04-19T14:43:43Z)
- Binarized 3D Whole-body Human Mesh Recovery [104.13364878565737]
We propose a Binarized Dual Residual Network (BiDRN) to estimate the 3D human body, face, and hands parameters efficiently.
BiDRN achieves comparable performance with full-precision method Hand4Whole while using just 22.1% parameters and 14.8% operations.
arXiv Detail & Related papers (2023-11-24T07:51:50Z)
- Advancing Wound Filling Extraction on 3D Faces: Auto-Segmentation and Wound Face Regeneration Approach [0.0]
We propose an efficient approach for automating 3D facial wound segmentation using a two-stream graph convolutional network.
Based on the segmentation model, we propose an improved approach for extracting 3D facial wound fillers.
Our method achieved an accuracy of 0.9999986% on the test suite, surpassing the performance of the previous method.
arXiv Detail & Related papers (2023-07-04T17:46:02Z)
- Point Cloud Diffusion Models for Automatic Implant Generation [0.4499833362998487]
We propose a novel approach for implant generation based on a combination of 3D point cloud diffusion models and voxelization networks.
We evaluate our method on the SkullBreak and SkullFix datasets, generating high-quality implants and achieving competitive evaluation scores.
arXiv Detail & Related papers (2023-03-14T16:54:59Z)
- NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction [64.36535692191343]
Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems.
This paper addresses two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data that can generalize to different scenes instead of hand-crafting one.
Our method demonstrates significant improvements on various metrics for the rendered image quality and the geometry quality of the reconstructed 3D models when compared with variants using TSDF or reconstruction without view planning.
arXiv Detail & Related papers (2022-07-22T10:05:36Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- Generating 3D Bio-Printable Patches Using Wound Segmentation and Reconstruction to Treat Diabetic Foot Ulcers [3.9601033501810576]
AiD Regen is a system that generates 3D wound models combining 2D semantic segmentation with 3D reconstruction.
AiD Regen seamlessly binds the full pipeline, which includes RGB-D image capturing, semantic segmentation, boundary-guided point-cloud processing, 3D model reconstruction, and 3D printable G-code generation.
arXiv Detail & Related papers (2022-03-08T02:29:32Z)
- A Proof-of-Concept Study of Artificial Intelligence Assisted Contour Revision [4.195764918318819]
We present a novel concept called artificial intelligence assisted contour revision (AIACR).
The concept uses deep learning (DL) models to assist clinicians in revising contours in an efficient and effective way.
We demonstrated its feasibility by using 2D axial CT images from three head-and-neck cancer datasets.
arXiv Detail & Related papers (2021-07-28T16:18:29Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- PerMO: Perceiving More at Once from a Single Image for Autonomous Driving [76.35684439949094]
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image.
Our approach combines the strengths of deep learning and the elegance of traditional techniques.
We have integrated these algorithms with an autonomous driving system.
arXiv Detail & Related papers (2020-07-16T05:02:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.