Radious: Unveiling the Enigma of Dental Radiology with BEIT Adaptor and
Mask2Former in Semantic Segmentation
- URL: http://arxiv.org/abs/2305.06236v1
- Date: Wed, 10 May 2023 15:15:09 GMT
- Title: Radious: Unveiling the Enigma of Dental Radiology with BEIT Adaptor and
Mask2Former in Semantic Segmentation
- Authors: Mohammad Mashayekhi, Sara Ahmadi Majd, Arian Amiramjadi, Babak
Mashayekhi
- Abstract summary: We developed a semantic segmentation algorithm based on BEIT adaptor and Mask2Former to detect and identify teeth, roots, and multiple dental diseases.
We compared the results of our algorithm to two state-of-the-art image segmentation algorithms, DeepLabv3+ and SegFormer.
We found that Radious outperformed both, improving mIoU by 9% over DeepLabv3+ and by 33% over SegFormer.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: X-ray imaging is the first step in diagnosing and treating dental
problems, and early diagnosis prevents the development and progression of
oral and dental diseases. In this paper, we developed a semantic segmentation
algorithm based on the BEIT adaptor and Mask2Former to detect and identify
teeth, roots, and multiple dental diseases and abnormalities, such as pulp
chamber, restoration, endodontics, crown, decay, pin, composite, bridge,
pulpitis, orthodontics, radicular cyst, periapical cyst, cyst, implant, and
bone graft material, in panoramic, periapical, and bitewing X-ray images. We
compared the results of our algorithm to two state-of-the-art image
segmentation algorithms, DeepLabv3+ and SegFormer, on our own dataset. We
found that Radious outperformed both, improving mIoU by 9% over DeepLabv3+
and by 33% over SegFormer.
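
Since the comparison above is reported in terms of mIoU, the sketch below shows how mean intersection over union is typically computed for multi-class semantic segmentation. This is a generic NumPy illustration, not the authors' evaluation code; the class count, ignore-index convention, and toy label maps are assumptions made for the example.

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=255):
    """Mean IoU over classes for integer label maps of equal shape.

    Classes absent from both the prediction and the ground truth are
    skipped so they do not distort the average.
    """
    valid = target != ignore_index
    pred, target = pred[valid], target[valid]

    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class not present in this image pair
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else float("nan")

# Toy 4x4 label maps with three classes (e.g. background, tooth, decay)
gt = np.array([[0, 0, 1, 1],
               [0, 1, 1, 2],
               [0, 1, 2, 2],
               [0, 0, 0, 2]])
pr = np.array([[0, 0, 1, 1],
               [0, 1, 2, 2],
               [0, 1, 2, 2],
               [0, 0, 0, 0]])
print(f"mIoU: {mean_iou(pr, gt, num_classes=3):.3f}")  # mIoU: 0.758
```

In practice the per-class intersections and unions are usually accumulated over the whole validation set (for example via a running confusion matrix) before taking the mean, rather than averaging per-image values as in this toy call.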
Related papers
- PX2Tooth: Reconstructing the 3D Point Cloud Teeth from a Single Panoramic X-ray [20.913080797758816]
We propose PX2Tooth, a novel approach that reconstructs 3D teeth from a single panoramic X-ray (PX) image using a two-stage framework.
First, we design the PXSegNet to segment the permanent teeth from the PX images, providing clear positional, morphological, and categorical information for each tooth.
Subsequently, we design a novel tooth generation network (TGNet) that learns to transform random point clouds into 3D teeth.
arXiv Detail & Related papers (2024-11-06T07:44:04Z)
- Teeth-SEG: An Efficient Instance Segmentation Framework for Orthodontic Treatment based on Anthropic Prior Knowledge [8.87268139736394]
To address these problems, we propose a ViT-based framework named TeethSEG, which consists of stacked Multi-Scale Aggregation (MSA) blocks and an Anthropic Prior Knowledge (APK) layer.
Experiments on IO150K demonstrate that our TeethSEG outperforms the state-of-the-art segmentation models on dental image segmentation.
arXiv Detail & Related papers (2024-04-01T09:34:51Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Developing a Novel Approach for Periapical Dental Radiographs Segmentation [1.332560004325655]
The proposed algorithm consists of two stages. The first stage is pre-processing.
The second, main stage calculates the rotation degree and uses the integral projection method for tooth isolation.
Experimental results show that this algorithm is robust and achieves high accuracy.
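
The integral projection step mentioned in this entry can be illustrated with a short sketch: pixel intensities are summed along rows and columns, and low-valued runs of the smoothed vertical projection mark candidate separation lines between neighbouring teeth. This is a generic illustration of the technique under assumed parameters (synthetic image, smoothing window, relative threshold), not the authors' implementation.

```python
import numpy as np

def integral_projections(img):
    """Row-wise and column-wise integral projections of a grayscale image."""
    horizontal = img.sum(axis=1)  # one value per row
    vertical = img.sum(axis=0)    # one value per column
    return horizontal, vertical

def tooth_separation_columns(img, smooth=15, rel_thresh=0.3):
    """Estimate candidate vertical separation lines between adjacent teeth.

    Teeth appear bright in radiographs, so the gaps between them (and the
    image margins) show up as low-valued runs of the vertical projection.
    Each contiguous low run is collapsed to its centre column.
    """
    _, vertical = integral_projections(img.astype(np.float64))
    v = np.convolve(vertical, np.ones(smooth) / smooth, mode="same")
    low = v < rel_thresh * v.max()

    centres, start = [], None
    for col, is_low in enumerate(np.append(low, False)):
        if is_low and start is None:
            start = col
        elif not is_low and start is not None:
            centres.append((start + col - 1) // 2)
            start = None
    return centres

# Synthetic stand-in for a pre-processed periapical radiograph
img = np.zeros((256, 256))
img[:, 40:100] = 200   # bright "tooth" 1
img[:, 140:200] = 200  # bright "tooth" 2
print(tooth_separation_columns(img))  # margins plus the gap near column 120
```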
arXiv Detail & Related papers (2021-11-13T17:25:35Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.953 ± 0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623 ± 0.718 mm between the predicted and ground-truth positions of 44 landmarks, which is superior to other networks for landmark detection.
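
The two metrics quoted here can be written down compactly; the sketch below assumes binary masks for the Dice similarity coefficient and paired 3D landmark coordinates for the MAE (read as the mean Euclidean prediction-to-ground-truth distance), and is not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

def landmark_mae(pred_pts, gt_pts):
    """Mean Euclidean distance between predicted and ground-truth
    landmark coordinates, given as N x 3 arrays (in mm)."""
    return float(np.linalg.norm(pred_pts - gt_pts, axis=1).mean())

# Toy usage with made-up masks and two landmarks
mask_a = np.zeros((8, 8), dtype=bool); mask_a[2:6, 2:6] = True
mask_b = np.zeros((8, 8), dtype=bool); mask_b[3:7, 3:7] = True
print(f"DSC: {dice_coefficient(mask_a, mask_b):.3f}")  # DSC: 0.562

pred = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.5]])
gt   = np.array([[1.0, 2.0, 3.5], [4.0, 5.0, 6.0]])
print(f"MAE: {landmark_mae(pred, gt):.3f} mm")          # MAE: 0.500 mm
```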
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
- A fully automated method for 3D individual tooth identification and segmentation in dental CBCT [1.567576360103422]
This paper proposes a fully automated method of identifying and segmenting 3D individual teeth from dental CBCT images.
The proposed method addresses the aforementioned difficulty by developing a deep learning-based hierarchical multi-step model.
Experimental results showed that the proposed method achieved an F1-score of 93.35% for tooth identification and a Dice similarity coefficient of 94.79% for individual 3D tooth segmentation.
arXiv Detail & Related papers (2021-02-11T15:07:23Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without groundtruth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and annotation mask in 96.2%, compared to 27.0% and 34.9% respectively in control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
- Pose-Aware Instance Segmentation Framework from Cone Beam CT Images for Tooth Segmentation [9.880428545498662]
Individual tooth segmentation from cone beam computed tomography (CBCT) images is essential for an anatomical understanding of orthodontic structures.
The presence of severe metal artifacts in CBCT images hinders the accurate segmentation of each individual tooth.
We propose a neural network for pixel-wise labeling to exploit an instance segmentation framework that is robust to metal artifacts.
arXiv Detail & Related papers (2020-02-06T07:57:34Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.