An End-to-end Entangled Segmentation and Classification Convolutional
Neural Network for Periodontitis Stage Grading from Periapical Radiographic
Images
- URL: http://arxiv.org/abs/2109.13120v1
- Date: Mon, 27 Sep 2021 15:28:54 GMT
- Title: An End-to-end Entangled Segmentation and Classification Convolutional
Neural Network for Periodontitis Stage Grading from Periapical Radiographic
Images
- Authors: Tanjida Kabir, Chun-Teh Lee, Jiman Nelson, Sally Sheng, Hsiu-Wan Meng,
Luyao Chen, Muhammad F Walji, Xiaoqian Jiang, and Shayan Shams
- Abstract summary: We developed an end-to-end deep learning network HYNETS for grading periodontitis from periapical X-rays.
HYNETS combines a set of segmentation networks and a classification network to provide an end-to-end interpretable solution.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Periodontitis is a biofilm-related chronic inflammatory disease characterized
by gingivitis and bone loss around the teeth. In the United States, approximately
61 million adults over 30 (42.2%) suffer from periodontitis, with 7.8% having
severe periodontitis. The measurement of radiographic bone loss
(RBL) is necessary to make a correct periodontal diagnosis, especially when
comprehensive and longitudinal periodontal mapping is unavailable. However,
doctors can interpret X-rays differently depending on their experience and
knowledge. Computerized diagnostic support can help doctors make the diagnosis
with high accuracy and consistency and draw up an appropriate treatment plan
for preventing or controlling periodontitis. We developed an
end-to-end deep learning network HYNETS (Hybrid NETwork for pEriodoNTiTiS
STagES from radiograpH) by integrating segmentation and classification tasks
for grading periodontitis from periapical radiographic images. HYNETS leverages
a multi-task learning strategy by combining a set of segmentation networks and
a classification network to provide an end-to-end interpretable solution and
highly accurate and consistent results. HYNETS achieved average Dice
coefficients of 0.96 and 0.94 for bone area and tooth segmentation, respectively,
and an average AUC of 0.97 for periodontitis stage assignment. Additionally,
conventional image processing techniques provide RBL measurements and build
transparency and trust in the model's prediction. HYNETS will potentially
transform clinical diagnosis from a manual, time-consuming, and error-prone task
to an efficient and automated periodontitis stage assignment based on
periapical radiographic images.
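As reported above, segmentation quality is summarized by the Dice coefficient. A minimal NumPy sketch of that metric on toy binary masks (illustrative only; not the HYNETS implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 4x4 masks: predicted tooth mask vs. ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 2*3/(4+3) ≈ 0.857
```

A Dice score of 0.96, as reported for bone-area segmentation, means predicted and ground-truth masks overlap almost completely.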
Related papers
- PerioDet: Large-Scale Panoramic Radiograph Benchmark for Clinical-Oriented Apical Periodontitis Detection [7.791916637642707]
Apical periodontitis is a prevalent oral pathology that presents significant public health challenges. Despite advances in automated diagnostic systems, the development of CAD applications for apical periodontitis is still constrained by the lack of a large-scale, high-quality annotated dataset. We release a large-scale panoramic radiograph benchmark called "PerioXrays", comprising 3,673 images and 5,662 meticulously annotated instances of apical periodontitis.
arXiv Detail & Related papers (2025-07-25T04:53:09Z)
- AI-assisted radiographic analysis in detecting alveolar bone-loss severity and patterns [0.3767121007961969]
We propose a novel AI-based deep learning framework to automatically detect and quantify alveolar bone loss. Our method combines YOLOv8 for tooth detection with Keypoint R-CNN models to identify anatomical landmarks. YOLOv8x-seg models segment bone levels and tooth masks to determine bone loss patterns.
arXiv Detail & Related papers (2025-06-25T15:08:52Z) - Periodontal Bone Loss Analysis via Keypoint Detection With Heuristic Post-Processing [10.628754886688846]
This study evaluates the application of a deep learning keypoint and object detection model, YOLOv8-pose, for the automatic identification of localised bone loss landmarks.
YOLOv8-pose was fine-tuned on 193 annotated periapical radiographs.
We propose a keypoint detection metric, Percentage of Relative Correct Keypoints (PRCK), which normalises keypoint error to the average tooth size in the image.
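The abstract does not give the PRCK formula; a plausible NumPy sketch, assuming a keypoint counts as correct when its error is within a fraction `t` of the mean tooth size (the function name, `t`, and the toy values are all assumptions):

```python
import numpy as np

def prck(pred_kps, true_kps, tooth_sizes, t=0.25):
    """Fraction of keypoints whose localisation error is within t * mean tooth size.

    pred_kps, true_kps: (N, 2) arrays of (x, y) keypoint coordinates.
    tooth_sizes: per-tooth sizes in pixels (e.g. bounding-box diagonals).
    """
    threshold = t * np.mean(tooth_sizes)
    errors = np.linalg.norm(np.asarray(pred_kps, float) - np.asarray(true_kps, float), axis=1)
    return float(np.mean(errors <= threshold))

# Toy example: three bone-level landmarks, mean tooth size 40 px, threshold 10 px.
pred = [(10, 10), (50, 52), (90, 120)]
true = [(12, 11), (50, 50), (90, 90)]
print(round(prck(pred, true, tooth_sizes=[40, 44, 36], t=0.25), 3))  # 2 of 3 correct -> 0.667
```

Normalising to tooth size, rather than a fixed pixel threshold, makes the metric comparable across radiographs taken at different magnifications.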
arXiv Detail & Related papers (2025-03-05T00:34:29Z) - Semi-supervised classification of dental conditions in panoramic radiographs using large language model and instance segmentation: A real-world dataset evaluation [6.041146512190833]
A semi-supervised learning framework is proposed to classify thirteen dental conditions on panoramic radiographs.
The solution demonstrated an accuracy level comparable to that of a junior specialist.
arXiv Detail & Related papers (2024-06-25T19:56:12Z)
- Teeth Localization and Lesion Segmentation in CBCT Images using
SpatialConfiguration-Net and U-Net [0.4915744683251149]
The localization of teeth and segmentation of periapical lesions are crucial tasks for clinical diagnosis and treatment planning.
In this study, we propose a deep learning-based method utilizing two convolutional neural networks.
The method achieves a 97.3% accuracy for teeth localization, along with a promising sensitivity and specificity of 0.97 and 0.88, respectively, for subsequent lesion detection.
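Sensitivity and specificity as reported here can be illustrated with a short NumPy sketch on toy binary lesion labels (not the paper's code):

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) for binary labels."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)    # lesions correctly detected
    tn = np.sum(~pred & ~truth)  # healthy regions correctly passed over
    fn = np.sum(~pred & truth)   # lesions missed
    fp = np.sum(pred & ~truth)   # false alarms
    return float(tp / (tp + fn)), float(tn / (tn + fp))

# Toy detection labels: 1 = periapical lesion present.
truth = [1, 1, 1, 0, 0, 0, 0, 1]
pred  = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(pred, truth)
print(round(sens, 2), round(spec, 2))  # 0.75 0.75
```

The reported 0.97 / 0.88 pair means the model misses few lesions while producing a modest rate of false alarms on healthy regions.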
arXiv Detail & Related papers (2023-12-19T14:23:47Z)
- Learning to diagnose cirrhosis from radiological and histological labels
with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve the prediction of cirrhosis.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
arXiv Detail & Related papers (2023-02-16T17:06:23Z)
- Self-Supervised Learning with Masked Image Modeling for Teeth Numbering,
Detection of Dental Restorations, and Instance Segmentation in Dental
Panoramic Radiographs [8.397847537464534]
This study aims to utilize recent self-supervised learning methods like SimMIM and UM-MAE to increase the model efficiency and understanding of the limited number of dental radiographs.
To the best of our knowledge, this is the first study that applied self-supervised learning methods to Swin Transformer on dental panoramic radiographs.
arXiv Detail & Related papers (2022-10-20T16:50:07Z)
- Data-Efficient Vision Transformers for Multi-Label Disease
Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Calibrate the inter-observer segmentation uncertainty via
diagnosis-first principle [45.29954184893812]
We propose diagnosis-first principle, which is to take disease diagnosis as the criterion to calibrate the inter-observer segmentation uncertainty.
We dub the fused ground-truth Diagnosis First Ground-truth (DF-GT). Then, we further propose a Take and Give Model to segment DF-GT from the raw image.
Experimental results show that the proposed DiFF is able to significantly facilitate the corresponding disease diagnosis.
arXiv Detail & Related papers (2022-08-05T07:12:24Z)
- Automated SSIM Regression for Detection and Quantification of Motion
Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on the structural similarity index (SSIM) regression has been proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z)
- Use of the Deep Learning Approach to Measure Alveolar Bone Level [4.92694463351569]
The goal was to use a Deep Convolutional Neural Network to measure the radiographic alveolar bone level to aid periodontal diagnosis.
A Deep Learning (DL) model was developed by integrating three segmentation networks (bone area, tooth, cementoenamel junction) and image analysis.
arXiv Detail & Related papers (2021-09-24T17:48:27Z)
- Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Diagnosis of Coronavirus Disease 2019 (COVID-19) with Structured Latent
Multi-View Representation Learning [48.05232274463484]
Recently, the outbreak of Coronavirus Disease 2019 (COVID-19) has spread rapidly across the world.
Due to the large number of affected patients and heavy labor for doctors, computer-aided diagnosis with machine learning algorithm is urgently needed.
In this study, we propose to conduct the diagnosis of COVID-19 with a series of features extracted from CT images.
arXiv Detail & Related papers (2020-05-06T15:19:15Z)
- An Adaptive Enhancement Based Hybrid CNN Model for Digital Dental X-ray
Positions Classification [1.0672152844970149]
A novel solution based on adaptive histogram equalization and a convolutional neural network (CNN) is proposed.
The accuracy and specificity of the test set exceeded 90%, and the AUC reached 0.97.
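The paper pairs adaptive histogram equalization with a CNN; as a simplified stand-in, here is a global (non-adaptive) histogram equalization sketch in plain NumPy (an illustrative assumption, not the paper's preprocessing pipeline):

```python
import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    # Map each gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

# Low-contrast toy radiograph: intensities packed into [100, 120] spread to [0, 255].
img = np.repeat(np.arange(100, 121, dtype=np.uint8), 3).reshape(7, 9)
eq = histogram_equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())  # 100 120 -> 0 255
```

Adaptive variants (e.g. CLAHE) apply the same idea per local tile, which better handles the uneven exposure typical of dental X-rays.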
arXiv Detail & Related papers (2020-05-01T13:55:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.