Enhanced Pediatric Dental Segmentation Using a Custom SegUNet with VGG19 Backbone on Panoramic Radiographs
- URL: http://arxiv.org/abs/2503.06321v1
- Date: Sat, 08 Mar 2025 19:32:25 GMT
- Title: Enhanced Pediatric Dental Segmentation Using a Custom SegUNet with VGG19 Backbone on Panoramic Radiographs
- Authors: Md Ohiduzzaman Ovi, Maliha Sanjana, Fahad Fahad, Mahjabin Runa, Zarin Tasnim Rothy, Tanmoy Sarkar Pias, A. M. Tayeful Islam, Rumman Ahmed Prodhan
- Abstract summary: This study proposes a custom SegUNet model with a VGG19 backbone, designed explicitly for pediatric dental segmentation. The model reached an accuracy of 97.53%, a Dice coefficient of 92.49%, and an intersection over union (IoU) of 91.46%, setting a new benchmark for this dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pediatric dental segmentation is critical in dental diagnostics, presenting unique challenges due to variations in dental structures and the limited number of available pediatric X-ray images. This study proposes a custom SegUNet model with a VGG19 backbone, designed explicitly for pediatric dental segmentation and applied to the Children's Dental Panoramic Radiographs dataset. The SegUNet architecture with a VGG19 backbone has been employed on this dataset for the first time, achieving state-of-the-art performance. The model reached an accuracy of 97.53%, a Dice coefficient of 92.49%, and an intersection over union (IoU) of 91.46%, setting a new benchmark for this dataset. These results demonstrate the effectiveness of the VGG19 backbone in enhancing feature extraction and improving segmentation precision. Comprehensive evaluations across metrics, including precision, recall, and specificity, indicate the robustness of this approach. The model's ability to generalize across diverse dental structures makes it a valuable tool for clinical applications in pediatric dental care, offering a reliable and efficient solution for automated dental diagnostics.
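The abstract reports accuracy, Dice, and IoU for a U-Net-style model with a VGG19 encoder but gives no implementation details here. The following is a minimal Keras sketch of such a setup, not the authors' SegUNet: the input size, skip-connection layers, decoder widths, loss, and optimizer are all assumptions.

```python
# Minimal sketch (not the authors' exact architecture): a U-Net-style decoder
# on top of an ImageNet-pretrained VGG19 encoder, with Dice and IoU metrics.
# Input size, decoder widths, and skip-connection layers are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    """Soft Dice coefficient over a batch of binary masks."""
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

def iou(y_true, y_pred, smooth=1e-6):
    """Soft intersection over union (Jaccard index)."""
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - intersection
    return (intersection + smooth) / (union + smooth)

def vgg19_unet(input_shape=(512, 512, 3)):
    """U-Net-style model with a VGG19 encoder and skip connections from each block."""
    encoder = VGG19(include_top=False, weights="imagenet", input_shape=input_shape)
    skips = [encoder.get_layer(name).output for name in
             ("block1_conv2", "block2_conv2", "block3_conv4", "block4_conv4")]
    x = encoder.get_layer("block5_conv4").output  # bottleneck features

    for skip, filters in zip(reversed(skips), (512, 256, 128, 64)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # binary tooth mask
    return Model(encoder.input, outputs, name="vgg19_unet")

model = vgg19_unet()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", dice_coefficient, iou])
```

In practice the ImageNet-pretrained encoder is often frozen for a few warm-up epochs before end-to-end fine-tuning, and a combined cross-entropy plus Dice loss is a common alternative to plain binary cross-entropy.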
Related papers
- Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR).
arXiv Detail & Related papers (2024-08-19T15:04:42Z)
- Segmentation of Mental Foramen in Orthopantomographs: A Deep Learning Approach [1.9193578733126382]
This study aims to accelerate dental procedures, elevating patient care and healthcare efficiency in dentistry.
This research used Deep Learning methods to accurately detect and segment the Mental Foramen from panoramic radiograph images.
arXiv Detail & Related papers (2024-08-08T21:40:06Z)
- Instance Segmentation and Teeth Classification in Panoramic X-rays [35.8246552579468]
Teeth segmentation and recognition are critical in various dental applications and dental diagnosis.
This article offers a pipeline combining two deep learning models, U-Net and YOLOv8, resulting in BB-UNet.
We have improved the quality and reliability of teeth segmentation by utilising the YOLOv8 and U-Net capabilities.
arXiv Detail & Related papers (2024-06-06T04:57:29Z)
- TotalSegmentator MRI: Robust Sequence-independent Segmentation of Multiple Anatomic Structures in MRI [59.86827659781022]
An nnU-Net model (TotalSegmentator) was trained to segment 80 anatomic structures on MRI. Dice scores were calculated between the predicted segmentations and expert reference-standard segmentations to evaluate model performance. The open-source, easy-to-use model allows for automatic, robust segmentation of 80 structures.
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Multiclass Segmentation using Teeth Attention Modules for Dental X-ray Images [8.041659727964305]
We propose a novel teeth segmentation model incorporating an M-Net-like structure with Swin Transformers and a teeth attention block (TAB).
The proposed TAB utilizes a unique attention mechanism that focuses specifically on the complex structures of teeth.
The proposed architecture effectively captures local and global contextual information, accurately defining each tooth and its surrounding structures.
arXiv Detail & Related papers (2023-11-07T06:20:34Z)
- A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays [1.7366868394060984]
We present a comprehensive approach to teeth segmentation and orientation from panoramic X-ray images, leveraging deep learning techniques.
We build our model based on FUSegNet, a popular model originally developed for wound segmentation.
We introduce oriented bounding box (OBB) generation through principal component analysis (PCA) for precise tooth orientation estimation.
arXiv Detail & Related papers (2023-10-26T06:01:25Z)
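For the oriented-bounding-box step mentioned in the entry above, a PCA over the foreground pixel coordinates of a single tooth mask yields the box centre, extent, and rotation angle. The sketch below is purely illustrative; the function name, the synthetic mask, and all processing choices are assumptions rather than the paper's method.

```python
# Illustrative sketch: oriented bounding box (OBB) of one binary tooth mask
# via PCA over its foreground pixel coordinates. Not the paper's pipeline.
import numpy as np

def pca_oriented_bbox(mask: np.ndarray):
    """Return (center, (width, height), angle_deg) of the PCA-aligned box."""
    ys, xs = np.nonzero(mask)                     # foreground pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)                # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # columns are principal axes
    axes = eigvecs[:, np.argsort(eigvals)[::-1]]  # major axis first
    proj = (pts - center) @ axes                  # rotate points into PCA frame
    extent = proj.max(axis=0) - proj.min(axis=0)  # box size along each axis
    angle = np.degrees(np.arctan2(axes[1, 0], axes[0, 0]))  # major-axis angle
    return center, (extent[0], extent[1]), angle

# Toy usage with a synthetic tall, thin "tooth" mask.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:50, 20:30] = 1
print(pca_oriented_bbox(mask))
```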
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- Construction of unbiased dental template and parametric dental model for precision digital dentistry [46.459289444783956]
We develop an unbiased dental template by constructing an accurate dental atlas from CBCT images, guided by teeth segmentation.
A total of 159 CBCT images of real subjects are collected to perform the constructions.
arXiv Detail & Related papers (2023-04-07T09:39:03Z)
- CTooth+: A Large-scale Dental Cone Beam Computed Tomography Dataset and Benchmark for Tooth Volume Segmentation [21.474631912695315]
Deep learning-based tooth segmentation methods have achieved satisfactory performance but require a large quantity of tooth data with ground truth.
We establish a 3D dental CBCT dataset CTooth+, with 22 fully annotated volumes and 146 unlabeled volumes.
This work provides a new benchmark for the tooth volume segmentation task, and the experiment can serve as the baseline for future AI-based dental imaging research and clinical application development.
arXiv Detail & Related papers (2022-08-02T09:13:23Z)
- OdontoAI: A human-in-the-loop labeled data set and an online platform to boost research on dental panoramic radiographs [53.67409169790872]
This study addresses the construction of a public data set of dental panoramic radiographs.
We benefit from the human-in-the-loop (HITL) concept to expedite the labeling procedure.
Results demonstrate a 51% labeling time reduction using HITL, saving us more than 390 continuous working hours.
arXiv Detail & Related papers (2022-03-29T18:57:23Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet, used in the first stage of TS-MDL, reached an average Dice similarity coefficient (DSC) of 0.953 ± 0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623 ± 0.718 mm between predicted and ground-truth positions for 44 landmarks, outperforming other networks for landmark detection.
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
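The two figures quoted in the entry above, a Dice similarity coefficient for segmentation labels and a landmark MAE in millimetres, can be computed roughly as follows; the array shapes, names, and toy data are assumptions made only for illustration.

```python
# Rough illustration of the metrics reported in the TS-MDL entry above:
# Dice similarity coefficient (DSC) over per-cell labels and mean absolute
# error (MAE) in mm between predicted and ground-truth landmark positions.
import numpy as np

def dsc(pred_labels: np.ndarray, true_labels: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for one class over mesh cells (or voxels)."""
    p, t = pred_labels == label, true_labels == label
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

def landmark_mae(pred_pts: np.ndarray, true_pts: np.ndarray) -> float:
    """Mean Euclidean distance (mm) over corresponding landmarks, shape (K, 3)."""
    return float(np.linalg.norm(pred_pts - true_pts, axis=1).mean())

# Toy example with 44 landmarks, matching the count reported in the entry.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 50, size=(44, 3))          # ground-truth positions in mm
pred = gt + rng.normal(0, 0.5, size=(44, 3))   # perturbed predictions
print(landmark_mae(pred, gt))
```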
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.