Generating 3D Bio-Printable Patches Using Wound Segmentation and
Reconstruction to Treat Diabetic Foot Ulcers
- URL: http://arxiv.org/abs/2203.03814v1
- Date: Tue, 8 Mar 2022 02:29:32 GMT
- Title: Generating 3D Bio-Printable Patches Using Wound Segmentation and
Reconstruction to Treat Diabetic Foot Ulcers
- Authors: Han Joo Chae, Seunghwan Lee, Hyewon Son, Seungyeob Han, Taebin Lim
- Abstract summary: AiD Regen is a system that generates 3D wound models combining 2D semantic segmentation with 3D reconstruction.
AiD Regen seamlessly binds the full pipeline, which includes RGB-D image capturing, semantic segmentation, boundary-guided point-cloud processing, 3D model reconstruction, and 3D printable G-code generation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce AiD Regen, a novel system that generates 3D wound models
combining 2D semantic segmentation with 3D reconstruction so that they can be
printed via 3D bio-printers during the surgery to treat diabetic foot ulcers
(DFUs). AiD Regen seamlessly binds the full pipeline, which includes RGB-D
image capturing, semantic segmentation, boundary-guided point-cloud processing,
3D model reconstruction, and 3D printable G-code generation, into a single
system that can be used out of the box. We developed a multi-stage data
preprocessing method to handle small and unbalanced DFU image datasets. AiD
Regen's human-in-the-loop machine learning interface enables clinicians to not
only create 3D regenerative patches with just a few touch interactions but also
customize and confirm wound boundaries. As evidenced by our experiments, our
model outperforms prior wound segmentation models and our reconstruction
algorithm is capable of generating 3D wound models with compelling accuracy. We
further conducted a case study on a real DFU patient and demonstrated the
effectiveness of AiD Regen in treating DFU wounds.
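The abstract describes a five-stage pipeline (RGB-D capture, semantic segmentation, boundary-guided point-cloud processing, 3D reconstruction, G-code generation). The following is a minimal, self-contained sketch of how such stages could chain together; every function name and the toy thresholding logic are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an AiD Regen-style pipeline. The segmentation is a
# toy colour threshold and the G-code is a naive point-visiting toolpath;
# both stand in for the paper's learned model and slicing step.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class RGBDPixel:
    rgb: Tuple[int, int, int]  # 8-bit colour
    depth: float               # distance from camera, in mm


def segment_wound(image: List[List[RGBDPixel]]) -> List[List[bool]]:
    """Toy 'semantic segmentation': mark strongly reddish pixels as wound."""
    return [[p.rgb[0] > 150 and p.rgb[1] < 100 for p in row] for row in image]


def boundary_points(mask: List[List[bool]],
                    image: List[List[RGBDPixel]]) -> List[Tuple[float, float, float]]:
    """Keep wound pixels that touch a non-wound neighbour (4-connectivity)."""
    h, w = len(mask), len(mask[0])
    pts = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]
                   for ny, nx in nbrs):
                pts.append((float(x), float(y), image[y][x].depth))
    return pts


def to_gcode(points: List[Tuple[float, float, float]], feed: int = 1200) -> List[str]:
    """Emit naive G-code moves visiting each boundary point in order."""
    lines = ["G21 ; millimetre units", "G90 ; absolute positioning"]
    for x, y, z in points:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} Z{z:.2f} F{feed}")
    return lines


# Usage: a 3x3 RGB-D frame with one reddish "wound" pixel in the centre.
img = [[RGBDPixel((50, 50, 50), 12.0) for _ in range(3)] for _ in range(3)]
img[1][1] = RGBDPixel((200, 50, 50), 10.0)
gcode = to_gcode(boundary_points(segment_wound(img), img))
```

In the real system the boundary points would be lifted into a dense point cloud and meshed before slicing; here they feed the toolpath directly to keep the sketch short.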
Related papers
- Creating a Digital Twin of Spinal Surgery: A Proof of Concept [68.37190859183663]
Surgery digitalization is the process of creating a virtual replica of real-world surgery.
We present a proof of concept (PoC) for surgery digitalization that is applied to an ex-vivo spinal surgery.
We employ five RGB-D cameras for dynamic 3D reconstruction of the surgeon, a high-end camera for 3D reconstruction of the anatomy, an infrared stereo camera for surgical instrument tracking, and a laser scanner for 3D reconstruction of the operating room and data fusion.
arXiv Detail & Related papers (2024-03-25T13:09:40Z)
- Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, denoted the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z)
- Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion [67.71624118802411]
We present Farm3D, a method for learning category-specific 3D reconstructors for articulated objects.
We propose a framework that uses an image generator, such as Stable Diffusion, to generate synthetic training data.
Our network can be used for analysis, including monocular reconstruction, or for synthesis, generating articulated assets for real-time applications such as video games.
arXiv Detail & Related papers (2023-04-20T17:59:34Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model using Pixel-aligned Reconstruction Priors [56.192682114114724]
Get3DHuman is a novel 3D human framework that can significantly boost the realism and diversity of the generated outcomes.
Our key observation is that the 3D generator can benefit from human-related priors learned through 2D human generators and 3D reconstructors.
arXiv Detail & Related papers (2023-02-02T15:37:46Z)
- Improved $\alpha$-GAN architecture for generating 3D connected volumes with an application to radiosurgery treatment planning [0.5156484100374059]
We propose an improved version of 3D $\alpha$-GAN for generating connected 3D volumes.
Our model can successfully generate high-quality 3D tumor volumes and associated treatment specifications.
The capability of the improved 3D $\alpha$-GAN makes it a valuable source for generating synthetic medical image data.
arXiv Detail & Related papers (2022-07-13T16:39:47Z)
- Deep Learning-based Framework for Automatic Cranial Defect Reconstruction and Implant Modeling [0.2020478014317493]
The goal of this work is to propose a robust, fast, and fully automatic method for personalized cranial defect reconstruction and implant modeling.
We propose a two-step deep learning-based method using a modified U-Net architecture to perform the defect reconstruction.
We then propose a dedicated iterative procedure to improve the implant geometry, followed by automatic generation of models ready for 3-D printing.
arXiv Detail & Related papers (2022-04-13T11:33:26Z)
- A Point Cloud Generative Model via Tree-Structured Graph Convolutions for 3D Brain Shape Reconstruction [31.436531681473753]
It is almost impossible to obtain intraoperative 3D shape information using physical methods such as sensor scanning.
In this paper, a general generative adversarial network (GAN) architecture is proposed to reconstruct the 3D point clouds (PCs) of brains by using one single 2D image.
arXiv Detail & Related papers (2021-07-21T07:57:37Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.