Advancing Wound Filling Extraction on 3D Faces: Auto-Segmentation and Wound Face Regeneration Approach
- URL: http://arxiv.org/abs/2307.01844v3
- Date: Thu, 13 Jul 2023 01:35:18 GMT
- Title: Advancing Wound Filling Extraction on 3D Faces: Auto-Segmentation and Wound Face Regeneration Approach
- Authors: Duong Q. Nguyen and Thinh D. Le and Phuong D. Nguyen and Nga T.K. Le and H. Nguyen-Xuan
- Abstract summary: We propose an efficient approach for automating 3D facial wound segmentation using a two-stream graph convolutional network.
Based on the segmentation model, we propose an improved approach for extracting 3D facial wound fillers.
Our method achieved a remarkable accuracy of 0.9999986% on the test suite, surpassing the performance of the previous method.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial wound segmentation plays a crucial role in preoperative planning and
optimizing patient outcomes in various medical applications. In this paper, we
propose an efficient approach for automating 3D facial wound segmentation using
a two-stream graph convolutional network. Our method leverages the Cir3D-FaIR
dataset and addresses the challenge of data imbalance through extensive
experimentation with different loss functions. To achieve accurate
segmentation, we conducted thorough experiments and selected a high-performing
model from the trained models. The selected model demonstrates exceptional
segmentation performance for complex 3D facial wounds. Furthermore, based on
the segmentation model, we propose an improved approach for extracting 3D
facial wound fillers and compare it to the results of the previous study. Our
method achieved a remarkable accuracy of 0.9999986% on the test suite,
surpassing the performance of the previous method. From this result, we use 3D
printing technology to illustrate the shape of the wound filling. The outcomes
of this study have significant implications for physicians involved in
preoperative planning and intervention design. By automating facial wound
segmentation and improving the accuracy of wound-filling extraction, our
approach can assist in carefully assessing and optimizing interventions,
leading to enhanced patient outcomes. Additionally, it contributes to advancing
facial reconstruction techniques by utilizing machine learning and 3D
bioprinting for printing skin tissue implants. Our source code is available at
https://github.com/SIMOGroup/WoundFilling3D.
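The abstract notes that the class imbalance in the Cir3D-FaIR data was addressed by experimenting with different loss functions. One loss commonly used for imbalanced segmentation, including per-vertex mesh segmentation, is the soft Dice loss. The sketch below is an illustrative NumPy version under assumed shapes (flat per-vertex probabilities and binary labels), not the authors' implementation:

```python
import numpy as np

def dice_loss(probs, targets, eps=1e-6):
    """Soft Dice loss for per-vertex binary segmentation.

    probs:   predicted wound probabilities in [0, 1], shape (n_vertices,)
    targets: ground-truth labels in {0, 1},           shape (n_vertices,)

    Because the loss is a ratio of overlap to total mass, it is far less
    dominated by the majority (non-wound) class than plain cross-entropy.
    """
    intersection = np.sum(probs * targets)
    denom = np.sum(probs) + np.sum(targets)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

# A perfect prediction drives the loss toward 0; predicting all-background
# on a mesh that contains a wound region drives it toward 1.
perfect = dice_loss(np.array([1.0, 1.0, 0.0]), np.array([1, 1, 0]))
missed = dice_loss(np.array([0.0, 0.0, 0.0]), np.array([1, 1, 0]))
```

In practice this would be computed on the network's sigmoid outputs per mini-batch, often combined with a cross-entropy term; the paper itself compares several such candidates before selecting its final model.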
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- Deep Learning-Based Brain Image Segmentation for Automated Tumour Detection [0.0]
The objective is to leverage state-of-the-art convolutional neural networks (CNNs) on a large dataset of brain MRI scans for segmentation.
The proposed methodology applies pre-processing techniques for enhanced performance and generalizability.
arXiv Detail & Related papers (2024-04-06T15:09:49Z)
- Creating a Digital Twin of Spinal Surgery: A Proof of Concept [68.37190859183663]
Surgery digitalization is the process of creating a virtual replica of real-world surgery.
We present a proof of concept (PoC) for surgery digitalization that is applied to an ex-vivo spinal surgery.
We employ five RGB-D cameras for dynamic 3D reconstruction of the surgeon, a high-end camera for 3D reconstruction of the anatomy, an infrared stereo camera for surgical instrument tracking, and a laser scanner for 3D reconstruction of the operating room and data fusion.
arXiv Detail & Related papers (2024-03-25T13:09:40Z)
- Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- Application of Self-Supervised Learning to MICA Model for Reconstructing Imperfect 3D Facial Structures [0.05999777817331315]
We present an innovative method for regenerating flawed facial structures, yielding 3D printable outputs.
Our results highlight the model's capacity for concealing scars and achieving comprehensive facial reconstructions without discernible scarring.
arXiv Detail & Related papers (2023-04-08T16:13:30Z)
- 3D Facial Imperfection Regeneration: Deep learning approach and 3D printing prototypes [0.0]
This study explores the potential of a fully convolutional mesh autoencoder model for regenerating natural 3D faces in the presence of imperfect areas.
We utilize deep learning approaches in graph processing and analysis to investigate the model's capability to recreate a filling part for facial scars.
arXiv Detail & Related papers (2023-03-25T07:12:33Z)
- Generating 3D Bio-Printable Patches Using Wound Segmentation and Reconstruction to Treat Diabetic Foot Ulcers [3.9601033501810576]
AiD Regen is a system that generates 3D wound models combining 2D semantic segmentation with 3D reconstruction.
AiD Regen seamlessly binds the full pipeline, which includes RGB-D image capturing, semantic segmentation, boundary-guided point-cloud processing, 3D model reconstruction, and 3D printable G-code generation.
arXiv Detail & Related papers (2022-03-08T02:29:32Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial Learning [54.24887282693925]
We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms the competitive baseline and existing arts by a large margin.
arXiv Detail & Related papers (2020-09-30T17:12:35Z)
- Multi-Scale Supervised 3D U-Net for Kidneys and Kidney Tumor Segmentation [0.8397730500554047]
We present a multi-scale supervised 3D U-Net, MSS U-Net, to automatically segment kidneys and kidney tumors from CT images.
Our architecture combines deep supervision with exponential logarithmic loss to increase the 3D U-Net training efficiency.
This architecture shows superior performance compared to state-of-the-art works using data from KiTS19 public dataset.
arXiv Detail & Related papers (2020-04-17T08:25:43Z)
- Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss.
We extensively evaluated our method on CT kidney tumor segmentation, which validated its promising results.
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.