CEPHA29: Automatic Cephalometric Landmark Detection Challenge 2023
- URL: http://arxiv.org/abs/2212.04808v2
- Date: Mon, 3 Apr 2023 10:27:21 GMT
- Title: CEPHA29: Automatic Cephalometric Landmark Detection Challenge 2023
- Authors: Muhammad Anwaar Khalid, Kanwal Zulfiqar, Ulfat Bashir, Areeba Shaheen,
Rida Iqbal, Zarnab Rizwan, Ghina Rizwan, Muhammad Moazam Fraz
- Abstract summary: We organise the CEPHA29 Automatic Cephalometric Landmark Detection Challenge.
We provide the largest known publicly available dataset, consisting of 1000 cephalometric X-ray images.
We hope that our challenge will signal the beginning of a new era in the discipline.
- Score: 0.402058998065435
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Quantitative cephalometric analysis is the most widely used clinical and
research tool in modern orthodontics. Accurate localization of cephalometric
landmarks enables the quantification and classification of anatomical
abnormalities; however, the traditional manual marking of these landmarks
is a very tedious job. Endeavours have constantly been made to develop
automated cephalometric landmark detection systems but they are inadequate for
orthodontic applications. The fundamental reason is that the number of
publicly available datasets, as well as the number of training images they
provide, is insufficient for an AI model to perform well. To facilitate
the development of robust AI solutions for morphometric analysis, we organise
the CEPHA29 Automatic Cephalometric Landmark Detection Challenge in conjunction
with the IEEE International Symposium on Biomedical Imaging (ISBI 2023). In this
context, we provide the largest known publicly available dataset, consisting of
1000 cephalometric X-ray images. We hope that our challenge will not only
drive forward research and innovation in automatic cephalometric landmark
identification but will also signal the beginning of a new era in the
discipline.
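As a point of reference, a common baseline for this task is heatmap regression: a network predicts one spatial heatmap per landmark, and coordinates are read off the heatmap maxima. The sketch below is only a minimal illustration under assumptions, not the challenge's reference method: the 29-landmark count follows the dataset description, while the network architecture, the 256x256 input size, and the 0.1 mm/px pixel spacing used in the error metric are placeholders.

```python
# Hypothetical sketch of a heatmap-regression baseline for cephalometric
# landmark detection. Architecture, image size, and pixel spacing are
# illustrative assumptions, not values taken from the challenge paper.
import torch
import torch.nn as nn

NUM_LANDMARKS = 29  # CEPHA29 annotates 29 landmarks per radiograph


class HeatmapRegressor(nn.Module):
    """Tiny encoder-decoder that predicts one heatmap per landmark."""

    def __init__(self, num_landmarks: int = NUM_LANDMARKS):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_landmarks, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def heatmaps_to_coords(heatmaps: torch.Tensor) -> torch.Tensor:
    """Convert (B, L, H, W) heatmaps to (B, L, 2) pixel coordinates via argmax."""
    b, l, h, w = heatmaps.shape
    flat = heatmaps.view(b, l, -1).argmax(dim=-1)
    return torch.stack((flat % w, flat // w), dim=-1).float()  # (x, y)


def mean_radial_error(pred: torch.Tensor, gt: torch.Tensor,
                      mm_per_px: float = 0.1) -> torch.Tensor:
    """Mean Euclidean distance between predicted and ground-truth landmarks, in mm."""
    return (pred - gt).norm(dim=-1).mean() * mm_per_px


if __name__ == "__main__":
    model = HeatmapRegressor()
    x = torch.randn(2, 1, 256, 256)        # stand-in for grayscale cephalograms
    coords = heatmaps_to_coords(model(x))  # shape (2, 29, 2)
    gt = torch.rand(2, NUM_LANDMARKS, 2) * 256
    print(coords.shape, mean_radial_error(coords, gt).item())
```

Mean radial error (the average Euclidean distance between predicted and ground-truth landmarks) is the customary headline metric for cephalometric landmark detection, typically reported alongside success detection rates at fixed precision thresholds.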
Related papers
- Benchmarking Pretrained Attention-based Models for Real-Time Recognition in Robot-Assisted Esophagectomy [2.847280871973632]
Esophageal cancer is among the most common types of cancer worldwide.
In recent years, robot-assisted minimally invasive esophagectomy has emerged as a promising alternative.
Computer-aided anatomy recognition holds promise for improving surgical navigation.
arXiv Detail & Related papers (2024-12-04T15:32:37Z) - Deep Learning Techniques for Automatic Lateral X-ray Cephalometric Landmark Detection: Is the Problem Solved? [12.422216286751073]
"Cephalometric Landmark Detection (CL-Detection)" dataset is the largest publicly available and comprehensive dataset for cephalometric landmark detection.
This paper measures how far state-of-the-art deep learning methods can go for cephalometric landmark detection.
arXiv Detail & Related papers (2024-09-24T08:03:13Z) - A Robust Ensemble Algorithm for Ischemic Stroke Lesion Segmentation: Generalizability and Clinical Utility Beyond the ISLES Challenge [30.611482996378683]
Image and disease variability hinder the development of generalizable AI algorithms with clinical value.
We present a novel ensemble algorithm derived from the 2022 Ischemic Stroke Lesion (ISLES) challenge.
We combined top-performing algorithms into an ensemble model that overcomes the limitations of individual solutions.
arXiv Detail & Related papers (2024-03-28T13:56:26Z) - Leveraging Foundation Models for Content-Based Medical Image Retrieval in Radiology [0.14631663747888957]
Content-based image retrieval has the potential to significantly improve diagnostic aid and medical research in radiology.
Current CBIR systems are limited by their specialization to certain pathologies, which restricts their utility.
We propose using vision foundation models as powerful and versatile off-the-shelf feature extractors for content-based medical image retrieval.
arXiv Detail & Related papers (2024-03-11T10:06:45Z) - Revisiting Computer-Aided Tuberculosis Diagnosis [56.80999479735375]
Tuberculosis (TB) is a major global health threat, causing millions of deaths annually.
Computer-aided tuberculosis diagnosis (CTD) using deep learning has shown promise, but progress is hindered by limited training data.
We establish a large-scale dataset, namely the Tuberculosis X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with corresponding bounding box annotations for TB areas.
This dataset enables the training of sophisticated detectors for high-quality CTD.
arXiv Detail & Related papers (2023-07-06T08:27:48Z) - 'Aariz: A Benchmark Dataset for Automatic Cephalometric Landmark
Detection and CVM Stage Classification [0.402058998065435]
This dataset includes 1000 lateral cephalometric radiographs (LCRs) obtained from 7 different radiographic imaging devices with varying resolutions.
The clinical experts of our team meticulously annotated each radiograph with 29 cephalometric landmarks, including the most significant soft tissue landmarks ever marked in any publicly available dataset.
We believe that this dataset will be instrumental in the development of reliable automated landmark detection frameworks for use in orthodontics and beyond.
arXiv Detail & Related papers (2023-02-15T17:31:56Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Assessing glaucoma in retinal fundus photographs using Deep Feature
Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until the disease has become severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z) - A parameter refinement method for Ptychography based on Deep Learning
concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens the viability of the experiment.
A modern Deep Learning framework is used to autonomously correct the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - VerSe: A Vertebrae Labelling and Segmentation Benchmark for
Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)