OralBBNet: Spatially Guided Dental Segmentation of Panoramic X-Rays with Bounding Box Priors
- URL: http://arxiv.org/abs/2406.03747v3
- Date: Wed, 02 Jul 2025 02:11:49 GMT
- Title: OralBBNet: Spatially Guided Dental Segmentation of Panoramic X-Rays with Bounding Box Priors
- Authors: Devichand Budagam, Azamat Zhanatuly Imanbayev, Iskander Rafailovich Akhmetov, Aleksandr Sinitca, Sergey Antonov, Dmitrii Kaplun
- Abstract summary: OralBBNet is designed to improve the accuracy and robustness of tooth classification and segmentation on panoramic X-rays. Our approach achieved a 1-3% improvement in mean average precision (mAP) for tooth detection compared to existing techniques. Results of this study establish a foundation for the wider implementation of object detection models in dental diagnostics.
- Score: 34.82692226532414
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Teeth segmentation and recognition play a vital role in a variety of dental applications and diagnostic procedures. The integration of deep learning models has facilitated the development of precise and automated segmentation methods. Although prior research has explored teeth segmentation, few methods have performed tooth segmentation and detection simultaneously. This study presents UFBA-425, a dental dataset derived from the UFBA-UESC dataset, featuring bounding box and polygon annotations for 425 panoramic dental X-rays. In addition, this paper presents the OralBBNet architecture, which combines the segmentation and detection strengths of U-Net and YOLOv8, respectively. OralBBNet is designed to improve the accuracy and robustness of tooth classification and segmentation on panoramic X-rays by leveraging the complementary strengths of U-Net and YOLOv8. Our approach achieved a 1-3% improvement in mean average precision (mAP) for tooth detection compared to existing techniques, a 15-20% improvement in the Dice score for teeth segmentation over state-of-the-art (SOTA) solutions across various tooth categories, and a 2-4% improvement in the Dice score compared to other SOTA segmentation architectures. The results of this study establish a foundation for the wider implementation of object detection models in dental diagnostics.
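The abstract describes fusing YOLOv8 detections with U-Net segmentation but does not spell out the fusion mechanism here. Below is a minimal sketch of one way bounding-box priors could be injected into a segmentation network, assuming the detected boxes are rasterized into an extra input channel; the function names and fusion strategy are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: encode detector bounding boxes as a spatial prior channel
# that is concatenated with the panoramic X-ray before segmentation.
import numpy as np

def boxes_to_prior(boxes, height, width):
    """Rasterize (x1, y1, x2, y2) boxes into a binary spatial prior map."""
    prior = np.zeros((height, width), dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(width, int(x2)), min(height, int(y2))
        prior[y1:y2, x1:x2] = 1.0
    return prior

# xray: a grayscale panoramic image; boxes: detections from a YOLO-style model (toy values here)
xray = np.random.rand(512, 1024).astype(np.float32)
boxes = [(100, 200, 180, 320), (200, 210, 275, 330)]
prior = boxes_to_prior(boxes, *xray.shape)
unet_input = np.stack([xray, prior], axis=0)  # 2-channel input for a U-Net-style network
```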
Related papers
- GeoT: Geometry-guided Instance-dependent Transition Matrix for Semi-supervised Tooth Point Cloud Segmentation [48.64133802117796]
GeoT is a framework that employs instance-dependent transition matrix (IDTM) to explicitly model noise in pseudo labels for semi-supervised dental segmentation.
Specifically, to handle the extensive solution space of IDTM arising from tens of thousands of dental points, we introduce tooth geometric priors.
Our method can make full use of unlabeled data to facilitate segmentation, achieving performance comparable to fully supervised methods with only 20% of the labeled data.
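As a rough illustration of how a label-noise transition matrix can relate a model's clean-class posterior to noisy pseudo labels (a generic sketch, not the GeoT/IDTM implementation; all numbers are made up):

```python
# Illustrative only: T[i, j] = P(observed pseudo label = j | true label = i) for one dental point.
import numpy as np

clean_probs = np.array([0.7, 0.2, 0.1])          # model's estimate of the true class posterior
T = np.array([[0.9, 0.05, 0.05],                 # hypothetical instance-dependent transition matrix
              [0.1, 0.8,  0.1 ],
              [0.1, 0.1,  0.8 ]])
noisy_probs = T.T @ clean_probs                  # implied distribution over observed pseudo labels
# Training against the pseudo label on noisy_probs (rather than clean_probs) lets the loss
# account for how the pseudo labels are expected to be corrupted.
print(noisy_probs)
```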
arXiv Detail & Related papers (2025-03-21T09:43:57Z) - Enhanced Pediatric Dental Segmentation Using a Custom SegUNet with VGG19 Backbone on Panoramic Radiographs [0.0]
This study proposes a custom SegUNet model with a VGG19 backbone, designed explicitly for pediatric dental segmentation. The model reached an accuracy of 97.53%, a Dice coefficient of 92.49%, and an intersection over union (IoU) of 91.46%, setting a new benchmark for this dataset.
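For reference, the two reported overlap metrics can be computed from binary masks as follows (a generic sketch; the epsilon smoothing and the random test masks are assumptions, not this paper's evaluation code):

```python
# Dice coefficient and intersection-over-union (IoU) between binary masks.
import numpy as np

def dice_and_iou(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

pred = np.random.rand(256, 256) > 0.5   # toy predicted mask
gt = np.random.rand(256, 256) > 0.5     # toy ground-truth mask
print(dice_and_iou(pred, gt))
```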
arXiv Detail & Related papers (2025-03-08T19:32:25Z) - Multi-Class Segmentation of Aortic Branches and Zones in Computed Tomography Angiography: The AortaSeg24 Challenge [55.252714550918824]
The AortaSeg24 MICCAI Challenge introduced the first dataset of 100 CTA volumes annotated for 23 clinically relevant aortic branches and zones. This paper presents the challenge design, dataset details, evaluation metrics, and an in-depth analysis of the top-performing algorithms.
arXiv Detail & Related papers (2025-02-07T21:09:05Z) - TSegFormer: 3D Tooth Segmentation in Intraoral Scans with Geometry Guided Transformer [47.18526074157094]
Optical Intraoral Scanners (IOSs) are widely used in digital dentistry to provide detailed 3D information of dental crowns and the gingiva.
Previous methods are error-prone at complicated boundaries and exhibit unsatisfactory results across patients.
We propose TSegFormer which captures both local and global dependencies among different teeth and the gingiva in the IOS point clouds with a multi-task 3D transformer architecture.
arXiv Detail & Related papers (2023-11-22T08:45:01Z) - Multiclass Segmentation using Teeth Attention Modules for Dental X-ray Images [8.041659727964305]
We propose a novel teeth segmentation model incorporating an M-Net-like structure with Swin Transformers and TAB.
The proposed TAB utilizes a unique attention mechanism that focuses specifically on the complex structures of teeth.
The proposed architecture effectively captures local and global contextual information, accurately defining each tooth and its surrounding structures.
arXiv Detail & Related papers (2023-11-07T06:20:34Z) - A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays [1.7366868394060984]
We present a comprehensive approach to teeth segmentation and orientation from panoramic X-ray images, leveraging deep learning techniques.
We build our model based on FUSegNet, a popular model originally developed for wound segmentation.
We introduce oriented bounding box (OBB) generation through principal component analysis (PCA) for precise tooth orientation estimation.
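A rough sketch of deriving an oriented bounding box from a binary tooth mask with PCA, in the spirit of the approach summarized above (the paper's exact pre- and post-processing may differ; everything below is an illustrative assumption):

```python
# PCA-based oriented bounding box (OBB) from a binary mask.
import numpy as np

def obb_from_mask(mask):
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    center = pts.mean(axis=0)
    centered = pts - center
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt are the principal axes
    local = centered @ vt.T                                   # coordinates in the principal-axis frame
    extent = local.max(axis=0) - local.min(axis=0)
    angle = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))        # orientation of the major axis
    return center, extent, angle

mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:50, 25:35] = 1                                        # toy elongated "tooth" region
print(obb_from_mask(mask))
```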
arXiv Detail & Related papers (2023-10-26T06:01:25Z) - Construction of unbiased dental template and parametric dental model for precision digital dentistry [46.459289444783956]
We develop an unbiased dental template by constructing an accurate dental atlas from CBCT images with the guidance of teeth segmentation.
A total of 159 CBCT images of real subjects are collected to perform the constructions.
arXiv Detail & Related papers (2023-04-07T09:39:03Z) - An Implicit Parametric Morphable Dental Model [79.29420177904022]
We present the first parametric 3D morphable dental model for both teeth and gum.
It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each of these components.
Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications.
arXiv Detail & Related papers (2022-11-21T12:23:54Z) - TFormer: 3D Tooth Segmentation in Mesh Scans with Geometry Guided Transformer [37.47317212620463]
Optical Intra-oral Scanners (IOS) are widely used in digital dentistry, providing 3-Dimensional (3D) and high-resolution geometrical information of dental crowns and the gingiva.
Previous methods are error-prone at complicated tooth-tooth or tooth-gingiva boundaries and usually exhibit unsatisfactory results across various patients.
We propose a novel method based on 3D transformer architectures that is evaluated with large-scale and high-resolution 3D IOS datasets.
arXiv Detail & Related papers (2022-10-29T15:20:54Z) - CTooth+: A Large-scale Dental Cone Beam Computed Tomography Dataset and Benchmark for Tooth Volume Segmentation [21.474631912695315]
Deep learning-based tooth segmentation methods have achieved satisfactory performance but require a large quantity of tooth data with ground-truth annotations.
We establish a 3D dental CBCT dataset CTooth+, with 22 fully annotated volumes and 146 unlabeled volumes.
This work provides a new benchmark for the tooth volume segmentation task, and the experiment can serve as the baseline for future AI-based dental imaging research and clinical application development.
arXiv Detail & Related papers (2022-08-02T09:13:23Z) - CTooth: A Fully Annotated 3D Dataset and Benchmark for Tooth Volume Segmentation on Cone Beam Computed Tomography Images [19.79983193894742]
3D tooth segmentation is a prerequisite for computer-aided dental diagnosis and treatment.
Deep learning-based segmentation methods produce convincing results, but they require a large quantity of ground truth for training.
In this paper, we establish CTooth, a fully annotated cone beam computed tomography dataset with gold-standard tooth annotations.
arXiv Detail & Related papers (2022-06-17T13:48:35Z) - Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.953 ± 0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623 ± 0.718 mm between predicted and ground-truth positions for 44 landmarks, outperforming other networks for landmark detection.
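To make the landmark metric concrete, the mean distance error in millimetres can be computed as below (the coordinates are synthetic; only the metric definition is illustrated):

```python
# Mean and standard deviation of per-landmark Euclidean errors (in mm).
import numpy as np

rng = np.random.default_rng(0)
gt_landmarks = rng.random((44, 3)) * 10.0                     # 44 ground-truth landmarks, in mm
pred_landmarks = gt_landmarks + rng.normal(scale=0.4, size=(44, 3))
errors = np.linalg.norm(pred_landmarks - gt_landmarks, axis=1)
print(f"MAE over {len(errors)} landmarks: {errors.mean():.3f} ± {errors.std():.3f} mm")
```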
arXiv Detail & Related papers (2021-09-24T13:00:26Z) - TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.