Instance Segmentation and Teeth Classification in Panoramic X-rays
- URL: http://arxiv.org/abs/2406.03747v1
- Date: Thu, 6 Jun 2024 04:57:29 GMT
- Title: Instance Segmentation and Teeth Classification in Panoramic X-rays
- Authors: Devichand Budagam, Ayush Kumar, Sayan Ghosh, Anuj Shrivastav, Azamat Zhanatuly Imanbayev, Iskander Rafailovich Akhmetov, Dmitrii Kaplun, Sergey Antonov, Artem Rychenkov, Gleb Cyganov, Aleksandr Sinitca
- Abstract summary: Teeth segmentation and recognition are critical in various dental applications and dental diagnosis.
This article offers a pipeline of two deep learning models, U-Net and YOLOv8, which results in BB-UNet.
We have improved the quality and reliability of teeth segmentation by utilising the YOLOv8 and U-Net capabilities.
- Score: 35.8246552579468
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Teeth segmentation and recognition are critical in various dental applications and dental diagnosis. Automatic and accurate segmentation approaches have been made possible by integrating deep learning models. Although teeth segmentation has been studied in the past, only some techniques were able to classify and segment teeth simultaneously and effectively. This article offers a pipeline of two deep learning models, U-Net and YOLOv8, which results in BB-UNet, a new architecture for the classification and segmentation of teeth on panoramic X-rays that is efficient and reliable. We have improved the quality and reliability of teeth segmentation by utilising the YOLOv8 and U-Net capabilities. The proposed networks have been evaluated using the mean average precision (mAP) and Dice coefficient for YOLOv8 and BB-UNet, respectively. We have achieved a 3% increase in mAP score for teeth classification compared to existing methods, and a 10-15% increase in Dice coefficient for teeth segmentation compared to U-Net across different categories of teeth. A new dental dataset was created based on the UFBA-UESC dataset, with bounding-box and polygon annotations of 425 dental panoramic X-rays. The findings of this research pave the way for a wider adoption of object detection models in the field of dental diagnosis.
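As context for the segmentation metric mentioned in the abstract, the Dice coefficient between a predicted and a ground-truth binary mask can be sketched as follows (a minimal NumPy illustration, not the authors' code; the function name `dice_coefficient` is ours):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a 4-pixel mask overlapping a 6-pixel mask in 4 pixels
a = np.zeros((4, 4), dtype=np.uint8); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=np.uint8); b[1:3, 1:4] = 1
print(round(dice_coefficient(a, b), 2))  # 2*4 / (4+6) = 0.8
```

In practice the per-category Dice scores reported in the paper would be obtained by applying such a function to each tooth-category mask separately and averaging.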
Related papers
- TSegFormer: 3D Tooth Segmentation in Intraoral Scans with Geometry Guided Transformer [47.18526074157094]
Optical Intraoral Scanners (IOSs) are widely used in digital dentistry to provide detailed 3D information of dental crowns and the gingiva.
Previous methods are error-prone at complicated boundaries and exhibit unsatisfactory results across patients.
We propose TSegFormer which captures both local and global dependencies among different teeth and the gingiva in the IOS point clouds with a multi-task 3D transformer architecture.
arXiv Detail & Related papers (2023-11-22T08:45:01Z)
- Multiclass Segmentation using Teeth Attention Modules for Dental X-ray Images [8.041659727964305]
We propose a novel teeth segmentation model incorporating an M-Net-like structure with Swin Transformers and TAB.
The proposed TAB utilizes a unique attention mechanism that focuses specifically on the complex structures of teeth.
The proposed architecture effectively captures local and global contextual information, accurately defining each tooth and its surrounding structures.
arXiv Detail & Related papers (2023-11-07T06:20:34Z)
- A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays [1.7366868394060984]
We present a comprehensive approach to teeth segmentation and orientation from panoramic X-ray images, leveraging deep learning techniques.
We build our model based on FUSegNet, a popular model originally developed for wound segmentation.
We introduce oriented bounding box (OBB) generation through principal component analysis (PCA) for precise tooth orientation estimation.
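The OBB-via-PCA idea in that summary can be sketched as follows (our own minimal illustration under the assumption that each tooth is given as a binary mask; `pca_obb` is a hypothetical helper name, not the paper's implementation):

```python
import numpy as np

def pca_obb(mask: np.ndarray) -> np.ndarray:
    """Oriented bounding box of a binary mask via PCA of its foreground pixels.

    Returns the four box corners (in (row, col) coordinates), aligned with
    the principal axes of the pixel distribution.
    """
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([ys, xs]).astype(float)
    mean = pts.mean(axis=0)
    centered = pts - mean
    # Principal axes = eigenvectors of the 2x2 coordinate covariance matrix
    cov = np.cov(centered, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    # Project points onto the axes, take min/max extents, map corners back
    proj = centered @ eigvecs
    mins, maxs = proj.min(axis=0), proj.max(axis=0)
    corners_axis = np.array([[mins[0], mins[1]],
                             [mins[0], maxs[1]],
                             [maxs[0], maxs[1]],
                             [maxs[0], mins[1]]])
    return corners_axis @ eigvecs.T + mean
```

For an axis-aligned rectangular mask this recovers the rectangle itself; for a tilted tooth the box rotates to follow the tooth's long axis, which is what makes the orientation estimate possible.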
arXiv Detail & Related papers (2023-10-26T06:01:25Z)
- Construction of unbiased dental template and parametric dental model for precision digital dentistry [46.459289444783956]
We develop an unbiased dental template by constructing an accurate dental atlas from CBCT images with guidance of teeth segmentation.
A total of 159 CBCT images of real subjects are collected to perform the constructions.
arXiv Detail & Related papers (2023-04-07T09:39:03Z)
- An Implicit Parametric Morphable Dental Model [79.29420177904022]
We present the first parametric 3D morphable dental model for both teeth and gum.
It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each of such components.
Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications.
arXiv Detail & Related papers (2022-11-21T12:23:54Z)
- CTooth+: A Large-scale Dental Cone Beam Computed Tomography Dataset and Benchmark for Tooth Volume Segmentation [21.474631912695315]
Deep learning-based tooth segmentation methods have achieved satisfying performances but require a large quantity of tooth data with ground truth.
We establish a 3D dental CBCT dataset CTooth+, with 22 fully annotated volumes and 146 unlabeled volumes.
This work provides a new benchmark for the tooth volume segmentation task, and the experiment can serve as the baseline for future AI-based dental imaging research and clinical application development.
arXiv Detail & Related papers (2022-08-02T09:13:23Z)
- CTooth: A Fully Annotated 3D Dataset and Benchmark for Tooth Volume Segmentation on Cone Beam Computed Tomography Images [19.79983193894742]
3D tooth segmentation is a prerequisite for computer-aided dental diagnosis and treatment.
Deep learning-based segmentation methods produce convincing results, but they require a large quantity of ground truth for training.
In this paper, we establish a fully annotated cone beam computed tomography dataset CTooth with tooth gold standard.
arXiv Detail & Related papers (2022-06-17T13:48:35Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.953±0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623±0.718 mm in distances between the prediction and ground truth for 44 landmarks, superior to other networks for landmark detection.
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.