AI-enabled Automatic Multimodal Fusion of Cone-Beam CT and Intraoral
Scans for Intelligent 3D Tooth-Bone Reconstruction and Clinical Applications
- URL: http://arxiv.org/abs/2203.05784v1
- Date: Fri, 11 Mar 2022 07:50:15 GMT
- Title: AI-enabled Automatic Multimodal Fusion of Cone-Beam CT and Intraoral
Scans for Intelligent 3D Tooth-Bone Reconstruction and Clinical Applications
- Authors: Jin Hao, Jiaxiang Liu, Jin Li, Wei Pan, Ruizhe Chen, Huimin Xiong,
Kaiwei Sun, Hangzheng Lin, Wanlu Liu, Wanghui Ding, Jianfei Yang, Haoji Hu,
Yueling Zhang, Yang Feng, Zeyu Zhao, Huikai Wu, Youyi Zheng, Bing Fang,
Zuozhu Liu, Zhihe Zhao
- Abstract summary: A critical step in virtual dental treatment planning is to accurately delineate all tooth-bone structures from CBCT.
Previous studies have established several methods for CBCT segmentation using deep learning.
Here, we present a Deep Dental Multimodal Analysis framework consisting of a CBCT segmentation model, an intraoral scan (IOS) segmentation model, and a fusion model to generate 3D fused crown-root-bone structures.
- Score: 29.065668174732014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A critical step in virtual dental treatment planning is to accurately
delineate all tooth-bone structures from CBCT with high fidelity and accurate
anatomical information. Previous studies have established several methods for
CBCT segmentation using deep learning. However, the inherent resolution
discrepancy of CBCT and the loss of occlusal and dentition information largely
limited its clinical applicability. Here, we present a Deep Dental Multimodal
Analysis (DDMA) framework consisting of a CBCT segmentation model, an intraoral
scan (IOS) segmentation model (the most accurate digital dental model), and a
fusion model to generate 3D fused crown-root-bone structures with high fidelity
and accurate occlusal and dentition information. Our model was trained with a
large-scale dataset with 503 CBCT and 28,559 IOS meshes manually annotated by
experienced human experts. For CBCT segmentation, we use five-fold
cross-validation with 50 CBCT scans per fold, and our model achieves an average
Dice coefficient and IoU of 93.99% and 88.68%, respectively, significantly
outperforming the baselines. For IOS segmentation, our model achieves an mIoU
of 93.07% and 95.70% on the maxilla and mandible, respectively, on a test set of
200 IOS meshes, which is 1.77% and 3.52% higher than the state-of-the-art
method. Our DDMA
framework takes about 20 to 25 minutes to generate the fused 3D mesh model
following the sequential processing order, compared to over 5 hours by human
experts. Notably, our framework has been incorporated into software by a
clear aligner manufacturer, and real-world clinical cases demonstrate that our
model can visualize crown-root-bone structures during the entire orthodontic
treatment and can predict risks like dehiscence and fenestration. These
findings demonstrate the potential of multi-modal deep learning to improve the
quality of digital dental models and help dentists make better clinical
decisions.
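As a concrete reference for the overlap metrics reported above, the following is a minimal sketch of the Dice coefficient and IoU on binary segmentation masks. This is illustrative only, not the authors' evaluation code; `dice_and_iou` is a hypothetical helper, and the masks stand in for CBCT voxel labels.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Dice coefficient and IoU for binary masks (illustrative helper)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return float(dice), float(iou)

# Toy 2D masks standing in for a CBCT voxel segmentation
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 1, 0], [0, 0, 1]])
dice, iou = dice_and_iou(pred, gt)  # dice = 2*2/(3+3) ≈ 0.667, iou = 2/4 = 0.5
```

The paper's mIoU figures are per-class IoU values of this kind averaged over tooth classes.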
Related papers
- Instance Segmentation and Teeth Classification in Panoramic X-rays [35.8246552579468]
Teeth segmentation and recognition are critical in various dental applications and dental diagnosis.
This article offers a pipeline combining two deep learning models, U-Net and YOLOv8, resulting in BB-UNet.
We improve the quality and reliability of teeth segmentation by utilising the capabilities of YOLOv8 and U-Net.
arXiv Detail & Related papers (2024-06-06T04:57:29Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- TSegFormer: 3D Tooth Segmentation in Intraoral Scans with Geometry Guided Transformer [47.18526074157094]
Optical Intraoral Scanners (IOSs) are widely used in digital dentistry to provide detailed 3D information of dental crowns and the gingiva.
Previous methods are error-prone at complicated boundaries and exhibit unsatisfactory results across patients.
We propose TSegFormer which captures both local and global dependencies among different teeth and the gingiva in the IOS point clouds with a multi-task 3D transformer architecture.
arXiv Detail & Related papers (2023-11-22T08:45:01Z)
- A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays [1.7366868394060984]
We present a comprehensive approach to teeth segmentation and orientation from panoramic X-ray images, leveraging deep learning techniques.
We build our model based on FUSegNet, a popular model originally developed for wound segmentation.
We introduce oriented bounding box (OBB) generation through principal component analysis (PCA) for precise tooth orientation estimation.
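The PCA-based oriented bounding box step described above can be sketched as follows. This is a minimal 2D illustration under assumed conventions, not the paper's implementation; `oriented_bbox_2d` is a hypothetical helper.

```python
import numpy as np

def oriented_bbox_2d(points: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """Fit an oriented bounding box to 2D points via PCA.

    Returns (center, axes as rows with the major axis first, half-extents).
    Illustrative only.
    """
    center = points.mean(axis=0)
    centered = points - center
    # Principal axes = eigenvectors of the covariance matrix
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    axes = eigvecs.T[::-1]                  # reverse so the major axis is row 0
    proj = centered @ axes.T                # point coordinates in the PCA frame
    half_extents = (proj.max(axis=0) - proj.min(axis=0)) / 2.0
    return center, axes, half_extents
```

For a tooth mask, the major axis of the box approximates the tooth's long-axis orientation, which is the quantity the orientation-estimation step needs.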
arXiv Detail & Related papers (2023-10-26T06:01:25Z)
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- TFormer: 3D Tooth Segmentation in Mesh Scans with Geometry Guided Transformer [37.47317212620463]
Optical Intra-oral Scanners (IOS) are widely used in digital dentistry, providing 3-Dimensional (3D) and high-resolution geometrical information of dental crowns and the gingiva.
Previous methods are error-prone in complicated tooth-tooth or tooth-gingiva boundaries, and usually exhibit unsatisfactory results across various patients.
We propose a novel method based on 3D transformer architectures that is evaluated with large-scale and high-resolution 3D IOS datasets.
arXiv Detail & Related papers (2022-10-29T15:20:54Z)
- CNN-based fully automatic wrist cartilage volume quantification in MR Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z)
- Advancing COVID-19 Diagnosis with Privacy-Preserving Collaboration in Artificial Intelligence [79.038671794961]
We launch the Unified CT-COVID AI Diagnostic Initiative (UCADI), where the AI model can be distributedly trained and independently executed at each host institution.
Our study is based on 9,573 chest computed tomography scans (CTs) from 3,336 patients collected from 23 hospitals located in China and the UK.
arXiv Detail & Related papers (2021-11-18T00:43:41Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.953±0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623±0.718 mm in distances between the prediction and ground truth for 44 landmarks, which is superior to other networks for landmark detection.
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
- A fully automated method for 3D individual tooth identification and segmentation in dental CBCT [1.567576360103422]
This paper proposes a fully automated method of identifying and segmenting 3D individual teeth from dental CBCT images.
The proposed method addresses the aforementioned difficulty by developing a deep learning-based hierarchical multi-step model.
Experimental results showed that the proposed method achieved an F1-score of 93.35% for tooth identification and a Dice similarity coefficient of 94.79% for individual 3D tooth segmentation.
arXiv Detail & Related papers (2021-02-11T15:07:23Z)
- Fully Automated and Standardized Segmentation of Adipose Tissue Compartments by Deep Learning in Three-dimensional Whole-body MRI of Epidemiological Cohort Studies [11.706960468832301]
Quantification and localization of different adipose tissue compartments from whole-body MR images is of high interest to examine metabolic conditions.
We propose a 3D convolutional neural network (DCNet) to provide a robust and objective segmentation.
Fast (5-7 seconds) and reliable adipose tissue segmentation can be obtained with high Dice overlap.
arXiv Detail & Related papers (2020-08-05T17:30:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.