OdontoAI: A human-in-the-loop labeled data set and an online platform to
boost research on dental panoramic radiographs
- URL: http://arxiv.org/abs/2203.15856v1
- Date: Tue, 29 Mar 2022 18:57:23 GMT
- Title: OdontoAI: A human-in-the-loop labeled data set and an online platform to
boost research on dental panoramic radiographs
- Authors: Bernardo Silva, Laís Pinheiro, Brenda Sobrinho, Fernanda Lima, Bruna
Sobrinho, Kalyf Abdalla, Matheus Pithon, Patrícia Cury, Luciano Oliveira
- Abstract summary: This study addresses the construction of a public data set of dental panoramic radiographs.
We benefit from the human-in-the-loop (HITL) concept to expedite the labeling procedure.
Results demonstrate a 51% labeling time reduction using HITL, saving us more than 390 continuous working hours.
- Score: 53.67409169790872
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has remarkably advanced in the last few years, supported by
large labeled data sets. These data sets are precious yet scarce because of the
time-consuming labeling procedures, discouraging researchers from producing
them. This scarcity is especially true in dentistry, where deep learning
applications are still in an embryonic stage. Motivated by this background, we
address in this study the construction of a public data set of dental panoramic
radiographs. Our objects of interest are the teeth, which are segmented and
numbered, as they are the primary targets for dentists when screening a
panoramic radiograph. We benefited from the human-in-the-loop (HITL) concept to
expedite the labeling procedure, using predictions from deep neural networks as
provisional labels, later verified by human annotators. All the gathering and
labeling procedures of this novel data set are thoroughly analyzed. The results
were consistent and behaved as expected: At each HITL iteration, the model
predictions improved. Our results demonstrated a 51% labeling time reduction
using HITL, saving us more than 390 continuous working hours. On a novel online
platform, called OdontoAI, created to serve as a central hub of tasks for this
data set, we released 4,000 images, of which 2,000 have their labels publicly
available for model fitting. The labels of the other 2,000 images are private
and used for model evaluation considering instance and semantic segmentation
and numbering. To the best of our knowledge, this is the largest-scale publicly
available data set for panoramic radiographs, and OdontoAI is the first
platform of its kind in dentistry.
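The HITL labeling loop described in the abstract can be sketched roughly as follows. This is a minimal illustration under assumptions of our own (the annotation record, the human-review stub, and the training callback are hypothetical placeholders), not the authors' actual pipeline:
```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ToothAnnotation:
    """Hypothetical per-tooth label: a segmentation polygon plus an FDI number."""
    fdi_number: int                       # e.g. 11-48 in FDI tooth notation
    polygon: List[Tuple[float, float]]    # contour points (x, y) in pixels
    verified: bool = False                # flipped to True after human review


def predict_provisional_labels(model, image) -> List[ToothAnnotation]:
    """Use the current model's predictions as provisional labels."""
    return model(image)                   # placeholder for real inference code


def human_review(provisional: List[ToothAnnotation]) -> List[ToothAnnotation]:
    """Annotators fix wrong masks or numbers and confirm the rest."""
    for annotation in provisional:
        annotation.verified = True        # stands in for the manual verification UI
    return provisional


def hitl_iteration(model, unlabeled_images, train_fn):
    """One HITL round: predict -> verify -> retrain on the enlarged label set."""
    new_labels = []
    for image in unlabeled_images:
        provisional = predict_provisional_labels(model, image)
        new_labels.append((image, human_review(provisional)))
    return train_fn(model, new_labels)    # the improved model seeds the next round
```
Each round should shorten the manual verification time, which is the mechanism behind the reported 51% reduction.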
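Since half of the labels are private, segmentation quality is presumably scored on the platform side against the hidden ground truth. As a generic reference, per-mask IoU and Dice can be computed as in the sketch below (NumPy example of the standard metrics, not the platform's actual evaluation code):
```python
import numpy as np


def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between two binary tooth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum()) / union if union else 1.0


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient for the same pair of masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * float(np.logical_and(pred, gt).sum()) / denom if denom else 1.0


# Toy example: two slightly shifted rectangular "tooth" masks.
a = np.zeros((64, 32), dtype=bool)
a[10:50, 8:24] = True
b = np.zeros((64, 32), dtype=bool)
b[12:52, 8:24] = True
print(f"IoU={mask_iou(a, b):.3f}  Dice={dice(a, b):.3f}")
```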
Related papers
- Detection Transformer for Teeth Detection, Segmentation, and Numbering
in Oral Rare Diseases: Focus on Data Augmentation and Inpainting Techniques [0.0]
In this work, we focused on deep learning image processing in the context of oral rare diseases.
We used a dataset consisting of 156 panoramic radiographs from individuals with rare oral diseases and labeled by experts.
We trained the Detection Transformer (DETR) neural network for teeth detection, segmentation, and numbering over the 52 tooth classes.
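As a rough idea of what configuring DETR for 52 tooth classes might look like, the sketch below uses the public Hugging Face DETR implementation; the checkpoint, the class-count handling, and the omission of the segmentation/inpainting parts are assumptions, not the authors' setup:
```python
from transformers import DetrForObjectDetection, DetrImageProcessor

# 52 tooth classes = 32 permanent + 20 deciduous teeth in FDI notation (assumed).
NUM_TOOTH_CLASSES = 52

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained(
    "facebook/detr-resnet-50",
    num_labels=NUM_TOOTH_CLASSES,     # swap the COCO head for a tooth-class head
    ignore_mismatched_sizes=True,     # needed because the head shape changes
)
# Fine-tuning on the 156 expert-labeled panoramic radiographs would then follow
# a standard Hugging Face training loop (augmentation/inpainting not shown).
```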
arXiv Detail & Related papers (2024-02-06T21:07:09Z)
- Diffusion Facial Forgery Detection [56.69763252655695]
This paper introduces DiFF, a comprehensive dataset dedicated to face-focused diffusion-generated images.
We conduct extensive experiments on the DiFF dataset via a human test and several representative forgery detection methods.
The results demonstrate that the binary detection accuracy of both human observers and automated detectors often falls below 30%.
arXiv Detail & Related papers (2024-01-29T03:20:19Z)
- TSegFormer: 3D Tooth Segmentation in Intraoral Scans with Geometry Guided Transformer [47.18526074157094]
Optical Intraoral Scanners (IOSs) are widely used in digital dentistry to provide detailed 3D information of dental crowns and the gingiva.
Previous methods are error-prone at complicated boundaries and exhibit unsatisfactory results across patients.
We propose TSegFormer which captures both local and global dependencies among different teeth and the gingiva in the IOS point clouds with a multi-task 3D transformer architecture.
arXiv Detail & Related papers (2023-11-22T08:45:01Z)
- YOLOrtho -- A Unified Framework for Teeth Enumeration and Dental Disease Detection [4.136033167469768]
YOLOrtho is a unified framework for teeth enumeration and dental disease detection.
We develop our model on Dentex Challenge 2023 data, which consists of three distinct types of annotated data.
To fully utilize the data and learn both teeth detection and disease identification simultaneously, we formulate diseases as attributes attached to their corresponding teeth.
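One way to picture the "diseases as attributes attached to teeth" formulation is a per-detection record carrying a tooth number plus a multi-label disease vector; the sketch below is a hypothetical illustration (field names and classes are ours), not YOLOrtho's output format:
```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ToothDetection:
    """One detected tooth; diseases are attributes of the tooth, not separate objects."""
    box: Tuple[float, float, float, float]              # x1, y1, x2, y2 in pixels
    fdi_number: int                                      # tooth enumeration, e.g. 36
    diseases: List[str] = field(default_factory=list)    # multi-label findings


# A single detection head can then predict, per box, a tooth class plus a
# multi-label disease vector, which is the gist of the attribute formulation.
example = ToothDetection(box=(412.0, 380.0, 470.0, 455.0),
                         fdi_number=36,
                         diseases=["caries", "periapical lesion"])
print(example)
```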
arXiv Detail & Related papers (2023-08-11T06:54:55Z)
- DENTEX: An Abnormal Tooth Detection with Dental Enumeration and Diagnosis Benchmark for Panoramic X-rays [0.3355353735901314]
The Dental Enumeration and Diagnosis on Panoramic X-rays Challenge (DENTEX) was organized in association with the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023.
We present the results of evaluating participant algorithms on the fully annotated data.
The provision of this annotated dataset, alongside the results of this challenge, may lay the groundwork for the creation of AI-powered tools in the field of dentistry.
arXiv Detail & Related papers (2023-05-30T15:15:50Z)
- Self-Supervised Learning with Masked Image Modeling for Teeth Numbering, Detection of Dental Restorations, and Instance Segmentation in Dental Panoramic Radiographs [8.397847537464534]
This study aims to utilize recent self-supervised learning methods like SimMIM and UM-MAE to increase model efficiency and understanding given the limited number of dental radiographs.
To the best of our knowledge, this is the first study that applied self-supervised learning methods to Swin Transformer on dental panoramic radiographs.
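Masked image modeling in the SimMIM/UM-MAE style hides a random subset of image patches and trains the backbone to reconstruct them from the visible ones; the masking step can be sketched as below (a generic NumPy example with an assumed patch size and mask ratio, not the study's code):
```python
import numpy as np


def random_patch_mask(image: np.ndarray, patch: int = 32, ratio: float = 0.6,
                      rng=None):
    """Hide a random subset of non-overlapping patches (SimMIM-style masking).

    Returns the masked image and a boolean grid marking the hidden patches."""
    rng = rng or np.random.default_rng()
    grid_h, grid_w = image.shape[0] // patch, image.shape[1] // patch
    hidden = rng.random((grid_h, grid_w)) < ratio
    masked = image.copy()
    for i in range(grid_h):
        for j in range(grid_w):
            if hidden[i, j]:
                masked[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0
    return masked, hidden


# During pre-training, the backbone (e.g. a Swin Transformer) is asked to
# reconstruct the hidden patches of the radiograph from the visible ones.
radiograph = np.random.rand(512, 1024).astype(np.float32)   # dummy panoramic image
masked_img, hidden_patches = random_patch_mask(radiograph)
print(f"masked fraction: {hidden_patches.mean():.2f}")
```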
arXiv Detail & Related papers (2022-10-20T16:50:07Z)
- Teeth3DS+: An Extended Benchmark for Intraoral 3D Scans Analysis [7.546387289692397]
This article introduces Teeth3DS+, the first comprehensive public benchmark designed to advance the field of intraoral 3D scan analysis.
The dataset includes at least 1,800 intraoral scans (containing 23,999 teeth) collected from 900 patients, covering both upper and lower jaws separately.
arXiv Detail & Related papers (2022-10-12T11:18:35Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of $0.953 \pm 0.076$, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of $0.623 \pm 0.718\,\mathrm{mm}$ in distances between the prediction and ground truth for 44 landmarks, which is superior compared with other networks for landmark detection.
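For reference, the two metrics quoted above are commonly defined as follows (standard definitions, not specific to TS-MDL):
```latex
% Dice similarity coefficient between the predicted (A) and ground-truth (B)
% segmentations, and mean absolute error over the N = 44 landmark positions.
\[
\mathrm{DSC}(A, B) = \frac{2\,\lvert A \cap B \rvert}{\lvert A \rvert + \lvert B \rvert},
\qquad
\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \bigl\lVert \hat{p}_i - p_i \bigr\rVert ,
\]
```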
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and annotation mask in 96.2%, compared to 27.0% and 34.9% respectively in control images.
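The modified Y-Net idea, a single VGG11 encoder feeding one branch for geometric orientation and one for the annotation segmentation mask, can be sketched as a generic two-headed network; the heads and layer sizes below are illustrative assumptions, not the paper's exact architecture:
```python
import torch
import torch.nn as nn
from torchvision.models import vgg11


class TwoHeadedYNet(nn.Module):
    """Shared VGG11 encoder with an orientation head and a segmentation head
    (a simplified, illustrative Y-Net-style layout)."""

    def __init__(self, num_orientations: int = 4):
        super().__init__()
        self.encoder = vgg11(weights=None).features          # shared trunk
        self.orientation_head = nn.Sequential(                # geometry branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, num_orientations),
        )
        self.segmentation_head = nn.Sequential(               # annotation-mask branch
            nn.Conv2d(512, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.orientation_head(feats), self.segmentation_head(feats)


model = TwoHeadedYNet()
dummy = torch.randn(1, 3, 224, 224)                 # a chest radiograph, resized
orientation_logits, mask_logits = model(dummy)
print(orientation_logits.shape, mask_logits.shape)  # (1, 4) and (1, 1, 224, 224)
```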
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.