AI-Dentify: Deep learning for proximal caries detection on bitewing x-ray -- HUNT4 Oral Health Study
- URL: http://arxiv.org/abs/2310.00354v3
- Date: Fri, 22 Mar 2024 10:36:47 GMT
- Title: AI-Dentify: Deep learning for proximal caries detection on bitewing x-ray -- HUNT4 Oral Health Study
- Authors: Javier Pérez de Frutos, Ragnhild Holden Helland, Shreya Desai, Line Cathrine Nymoen, Thomas Langø, Theodor Remman, Abhijit Sen
- Abstract summary: The use of artificial intelligence has the potential to aid in the diagnosis by providing a quick and informative analysis of the bitewing images.
A dataset of 13,887 bitewings from the HUNT4 Oral Health Study was annotated individually by six different experts.
A consensus dataset of 197 images, annotated jointly by the same six dentists, was used for evaluation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: Dental caries diagnosis requires the manual inspection of diagnostic bitewing images of the patient, followed by a visual inspection and probing of the identified teeth with potential lesions. Yet the use of artificial intelligence, and in particular deep learning, has the potential to aid in the diagnosis by providing a quick and informative analysis of the bitewing images. Methods: A dataset of 13,887 bitewings from the HUNT4 Oral Health Study was annotated individually by six different experts and used to train three different object detection deep-learning architectures: RetinaNet (ResNet50), YOLOv5 (M size), and EfficientDet (D0 and D1 sizes). A consensus dataset of 197 images, annotated jointly by the same six dentists, was used for evaluation. A five-fold cross-validation scheme was used to evaluate the performance of the AI models. Results: The trained models show an increase in average precision and F1-score, and a decrease in false negative rate, with respect to the dental clinicians. The YOLOv5 model shows the largest improvement over the clinicians, reporting 0.647 mean average precision, 0.548 mean F1-score, and 0.149 mean false negative rate, whereas the best annotator on each of these metrics reported 0.299, 0.495, and 0.164, respectively. Conclusion: Deep-learning models have shown the potential to assist dental professionals in the diagnosis of caries. Yet, the task remains challenging due to the artifacts inherent to bitewing images.
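The evaluation described in the abstract (matching each model's detections against the consensus annotations, then reporting F1-score and false negative rate per fold) can be sketched as below. This is a minimal illustration, not the paper's actual pipeline: the 0.5 IoU threshold, the greedy matching strategy, and all function names are assumptions.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def fold_metrics(predictions, ground_truth, iou_thr=0.5):
    """Greedy one-to-one matching of detected boxes to consensus boxes.

    Returns (f1_score, false_negative_rate) for one evaluation fold.
    """
    matched = set()
    tp = 0
    for pred in predictions:
        # Best still-unmatched ground-truth box for this detection.
        candidates = [(iou(pred, gt), i) for i, gt in enumerate(ground_truth)
                      if i not in matched]
        if candidates:
            best_iou, best_i = max(candidates)
            if best_iou >= iou_thr:
                matched.add(best_i)
                tp += 1
    fp = len(predictions) - tp   # unmatched detections
    fn = len(ground_truth) - tp  # missed lesions
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fnr = fn / (fn + tp) if fn + tp else 0.0
    return f1, fnr
```

Under the paper's five-fold scheme, the reported means would simply be the per-fold values averaged over the five folds, e.g. `mean_f1 = sum(fold_f1s) / len(fold_f1s)`.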
Related papers
- DentalX: Context-Aware Dental Disease Detection with Radiographs [44.3806898357896]
Diagnosing dental diseases from radiographs is time-consuming and challenging due to the subtle nature of diagnostic evidence.
Existing methods, which rely on object detection models, struggle to detect dental diseases that present with far less visual support.
We propose DentalX, a novel context-aware dental disease detection approach.
arXiv Detail & Related papers (2026-01-13T18:32:28Z) - An Explainable Hybrid AI Framework for Enhanced Tuberculosis and Symptom Detection [55.35661671061754]
Tuberculosis remains a critical global health issue, particularly in resource-limited and remote areas.
We propose a framework which enhances disease and symptom detection on chest X-rays by integrating two supervised heads and a self-supervised head.
Our model achieves an accuracy of 98.85% for distinguishing between COVID-19, tuberculosis, and normal cases, and a macro-F1 score of 90.09% for multilabel symptom detection.
arXiv Detail & Related papers (2025-10-21T17:18:55Z) - DentVLM: A Multimodal Vision-Language Model for Comprehensive Dental Diagnosis and Enhanced Clinical Practice [71.62725911420627]
We introduce DentVLM, a vision-language model engineered for expert-level oral disease diagnosis.
The model is capable of interpreting seven 2D oral imaging modalities across 36 diagnostic tasks.
It surpassed the diagnostic performance of 13 junior dentists on 21 of 36 tasks and exceeded that of 12 senior dentists on 12 of 36 tasks.
arXiv Detail & Related papers (2025-09-27T14:47:37Z) - Adapting Foundation Model for Dental Caries Detection with Dual-View Co-Training [53.77904429789069]
We present Attention-TNet, a novel Dual-View Co-Training network for accurate dental caries detection.
Attention-TNet starts by employing automated tooth detection to establish two complementary views: a global view from panoramic X-ray images and a local view from cropped tooth images.
To effectively integrate information from both views, we introduce a Gated Cross-View module.
arXiv Detail & Related papers (2025-08-28T14:13:26Z) - Advanced Deep Learning Techniques for Classifying Dental Conditions Using Panoramic X-Ray Images [0.0]
This study investigates deep learning methods for automated classification of dental conditions in panoramic X-ray images.
Three approaches were evaluated: a custom convolutional neural network (CNN), hybrid models combining CNN feature extraction with traditional classifiers, and fine-tuned pre-trained architectures.
Results show that hybrid models improve discrimination of morphologically similar conditions and provide efficient, reliable performance.
arXiv Detail & Related papers (2025-08-27T04:52:50Z) - Segmentation of Mental Foramen in Orthopantomographs: A Deep Learning Approach [1.9193578733126382]
This study aims to accelerate dental procedures, elevating patient care and healthcare efficiency in dentistry.
This research used Deep Learning methods to accurately detect and segment the Mental Foramen from panoramic radiograph images.
arXiv Detail & Related papers (2024-08-08T21:40:06Z) - TeethDreamer: 3D Teeth Reconstruction from Five Intra-oral Photographs [45.0864129371874]
We propose a 3D teeth reconstruction framework, named TeethDreamer, to restore the shape and position of the upper and lower teeth.
Given five intra-oral photographs, our approach first leverages a large diffusion model's prior knowledge to generate novel multi-view images.
To ensure the 3D consistency across generated views, we integrate a 3D-aware feature attention mechanism in the reverse diffusion process.
arXiv Detail & Related papers (2024-07-16T06:24:32Z) - OralBBNet: Spatially Guided Dental Segmentation of Panoramic X-Rays with Bounding Box Priors [34.82692226532414]
OralBBNet is designed to improve the accuracy and robustness of tooth classification and segmentation on panoramic X-rays.
Our approach achieved a 1-3% improvement in mean average precision (mAP) for tooth detection compared to existing techniques.
Results of this study establish a foundation for the wider implementation of object detection models in dental diagnostics.
arXiv Detail & Related papers (2024-06-06T04:57:29Z) - A Sequential Framework for Detection and Classification of Abnormal Teeth in Panoramic X-rays [1.8962225869778402]
This paper describes our solution for the Dental Enumeration and Diagnosis on Panoramic X-rays Challenge at MICCAI 2023.
Our approach consists of a multi-step framework tailored to the task of detecting and classifying abnormal teeth.
arXiv Detail & Related papers (2023-08-31T13:47:01Z) - Diagnosing Human-object Interaction Detectors [42.283857276076596]
We introduce a diagnosis toolbox to provide detailed quantitative break-down analysis of HOI detection models.
We analyze eight state-of-the-art HOI detection models and provide valuable diagnosis insights to foster future research.
arXiv Detail & Related papers (2023-08-16T17:39:15Z) - Generative Adversarial Networks for Dental Patient Identity Protection in Orthodontic Educational Imaging [0.0]
This research introduces a novel area-preserving Generative Adversarial Networks (GAN) inversion technique for effectively de-identifying dental patient images.
This innovative method addresses privacy concerns while preserving key dental features, thereby generating valuable resources for dental education and research.
arXiv Detail & Related papers (2023-07-05T04:14:57Z) - Construction of unbiased dental template and parametric dental model for precision digital dentistry [46.459289444783956]
We develop an unbiased dental template by constructing an accurate dental atlas from CBCT images with guidance of teeth segmentation.
A total of 159 CBCT images of real subjects are collected to perform the constructions.
arXiv Detail & Related papers (2023-04-07T09:39:03Z) - Self-Supervised Learning with Masked Image Modeling for Teeth Numbering, Detection of Dental Restorations, and Instance Segmentation in Dental Panoramic Radiographs [8.397847537464534]
This study aims to utilize recent self-supervised learning methods like SimMIM and UM-MAE to increase the model efficiency and understanding of the limited number of dental radiographs.
To the best of our knowledge, this is the first study that applied self-supervised learning methods to Swin Transformer on dental panoramic radiographs.
arXiv Detail & Related papers (2022-10-20T16:50:07Z) - Forensic Dental Age Estimation Using Modified Deep Learning Neural Network [0.0]
This study proposed an automated approach to estimate the forensic ages of individuals ranging in age from 8 to 68 using 1,332 DPR images.
The performance metrics of the results were as follows: mean absolute error (MAE) was 3.13, root mean square error (RMSE) was 4.77, and the correlation coefficient R² was 87%.
arXiv Detail & Related papers (2022-08-21T04:06:04Z) - OdontoAI: A human-in-the-loop labeled data set and an online platform to boost research on dental panoramic radiographs [53.67409169790872]
This study addresses the construction of a public data set of dental panoramic radiographs.
We benefit from the human-in-the-loop (HITL) concept to expedite the labeling procedure.
Results demonstrate a 51% labeling time reduction using HITL, saving us more than 390 continuous working hours.
arXiv Detail & Related papers (2022-03-29T18:57:23Z) - 3D Structural Analysis of the Optic Nerve Head to Robustly Discriminate Between Papilledema and Optic Disc Drusen [44.754910718620295]
We developed a deep learning algorithm to identify major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans.
A classification algorithm was designed using 150 OCT volumes to perform 3-class classifications (1: ODD, 2: papilledema, 3: healthy) strictly from their drusen and prelamina swelling scores.
Our AI approach accurately discriminated ODD from papilledema, using a single OCT scan.
arXiv Detail & Related papers (2021-12-18T17:05:53Z) - Osteoporosis Prescreening using Panoramic Radiographs through a Deep Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged between 49 and 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z) - Systematic Clinical Evaluation of A Deep Learning Method for Medical Image Segmentation: Radiosurgery Application [48.89674088331313]
We systematically evaluate a Deep Learning (DL) method in a 3D medical image segmentation task.
Our method is integrated into the radiosurgery treatment process and directly impacts the clinical workflow.
arXiv Detail & Related papers (2021-08-21T16:15:40Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset collected to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.