Dental CLAIRES: Contrastive LAnguage Image REtrieval Search for Dental
Research
- URL: http://arxiv.org/abs/2306.15651v1
- Date: Tue, 27 Jun 2023 17:47:12 GMT
- Title: Dental CLAIRES: Contrastive LAnguage Image REtrieval Search for Dental
Research
- Authors: Tanjida Kabir, Luyao Chen, Muhammad F Walji, Luca Giancardo, Xiaoqian
Jiang, Shayan Shams
- Abstract summary: The proposed framework, Contrastive LAnguage Image REtrieval Search for dental research, Dental CLAIRES, retrieves the best-matched images based on the text query.
Our model achieved a hit@3 ratio of 96% and a Mean Reciprocal Rank (MRR) of 0.82.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning about diagnostic features and related clinical information from
dental radiographs is important for dental research. However, the lack of
expert-annotated data and convenient search tools poses challenges. Our primary
objective is to design a search tool that retrieves dental images relevant to a
user's text query for oral health research. The proposed framework, Contrastive
LAnguage Image REtrieval Search for dental research (Dental CLAIRES), utilizes
periapical radiographs and associated clinical details, such as periodontal
diagnoses and demographic information, to retrieve the best-matched images for
a given text query. We
applied a contrastive representation learning method to find images described
by the user's text by maximizing the similarity score of positive pairs (true
pairs) and minimizing the score of negative pairs (random pairs). Our model
achieved a hit@3 ratio of 96% and a Mean Reciprocal Rank (MRR) of 0.82. We also
designed a graphical user interface that allows researchers to verify the
model's performance with interactions.
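The abstract describes a CLIP-style objective (maximize the similarity score of true image-text pairs, minimize it for random pairs) and reports hit@3 and MRR. Below is a minimal sketch of how such a loss and these retrieval metrics could be computed; the symmetric InfoNCE formulation, the temperature value, and all names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: CLIP-style contrastive loss plus the retrieval metrics
# named in the abstract (hit@k, MRR). The symmetric InfoNCE form and the
# temperature are assumptions; the paper does not specify them here.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Diagonal entries are positive (true) pairs; off-diagonal entries
    within the batch serve as negative (random) pairs."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature        # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)            # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)        # text -> image
    return (loss_i2t + loss_t2i) / 2

def hit_at_k(sim: torch.Tensor, k: int = 3) -> float:
    """Fraction of queries whose true item appears in the top-k results.
    `sim` is a (queries x items) score matrix, true pairs on the diagonal."""
    topk = sim.topk(k, dim=-1).indices
    targets = torch.arange(sim.size(0), device=sim.device).unsqueeze(-1)
    return (topk == targets).any(dim=-1).float().mean().item()

def mean_reciprocal_rank(sim: torch.Tensor) -> float:
    """Average of 1/rank of the true item per query (ranks are 1-indexed)."""
    order = sim.argsort(dim=-1, descending=True)
    targets = torch.arange(sim.size(0), device=sim.device).unsqueeze(-1)
    ranks = (order == targets).float().argmax(dim=-1) + 1
    return (1.0 / ranks.float()).mean().item()
```

Given a held-out similarity matrix `sim` whose diagonal holds the true text-image pairs, `hit_at_k(sim, 3)` and `mean_reciprocal_rank(sim)` would produce figures in the style of the reported hit@3 (96%) and MRR (0.82).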
Related papers
- Clinical Evaluation of Medical Image Synthesis: A Case Study in Wireless Capsule Endoscopy [63.39037092484374]
This study focuses on the clinical evaluation of medical synthetic data generation using Artificial Intelligence (AI) models.
The paper contributes by a) presenting a protocol for the systematic evaluation of synthetic images by medical experts and b) applying it to assess TIDE-II, a novel variational autoencoder-based model for high-resolution WCE image synthesis.
The results show that TIDE-II generates clinically relevant WCE images, helping to address data scarcity and enhance diagnostic tools.
arXiv Detail & Related papers (2024-10-31T19:48:50Z)
- Semi-supervised classification of dental conditions in panoramic radiographs using large language model and instance segmentation: A real-world dataset evaluation [6.041146512190833]
A semi-supervised learning framework is proposed to classify thirteen dental conditions on panoramic radiographs.
The solution demonstrated an accuracy level comparable to that of a junior specialist.
arXiv Detail & Related papers (2024-06-25T19:56:12Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- Multiclass Segmentation using Teeth Attention Modules for Dental X-ray Images [8.041659727964305]
We propose a novel teeth segmentation model incorporating an M-Net-like structure with Swin Transformers and a teeth attention block (TAB).
The proposed TAB utilizes a unique attention mechanism that focuses specifically on the complex structures of teeth.
The proposed architecture effectively captures local and global contextual information, accurately defining each tooth and its surrounding structures.
arXiv Detail & Related papers (2023-11-07T06:20:34Z)
- AI-Dentify: Deep learning for proximal caries detection on bitewing x-ray -- HUNT4 Oral Health Study [0.0]
Artificial intelligence has the potential to aid diagnosis by providing a quick and informative analysis of bitewing images.
A dataset of 13,887 bitewings from the HUNT4 Oral Health Study was annotated individually by six different experts.
A consensus dataset of 197 images, annotated jointly by the same six dentists, was used for evaluation.
arXiv Detail & Related papers (2023-09-30T12:17:36Z)
- Generative Adversarial Networks for Dental Patient Identity Protection in Orthodontic Educational Imaging [0.0]
This research introduces a novel area-preserving Generative Adversarial Network (GAN) inversion technique for effectively de-identifying dental patient images.
This innovative method addresses privacy concerns while preserving key dental features, thereby generating valuable resources for dental education and research.
arXiv Detail & Related papers (2023-07-05T04:14:57Z)
- Construction of unbiased dental template and parametric dental model for precision digital dentistry [46.459289444783956]
We develop an unbiased dental template by constructing an accurate dental atlas from CBCT images with the guidance of teeth segmentation.
A total of 159 CBCT images of real subjects are collected to perform the constructions.
arXiv Detail & Related papers (2023-04-07T09:39:03Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- OdontoAI: A human-in-the-loop labeled data set and an online platform to boost research on dental panoramic radiographs [53.67409169790872]
This study addresses the construction of a public data set of dental panoramic radiographs.
We benefit from the human-in-the-loop (HITL) concept to expedite the labeling procedure.
Results demonstrate a 51% labeling time reduction using HITL, saving us more than 390 continuous working hours.
arXiv Detail & Related papers (2022-03-29T18:57:23Z)
- PaXNet: Dental Caries Detection in Panoramic X-ray using Ensemble Transfer Learning and Capsule Classifier [8.164433158925593]
In many cases, dental caries is hard to identify in x-rays for reasons such as low image quality.
Here, we propose an automatic diagnosis system to detect dental caries in panoramic images for the first time.
The proposed model benefits from various pretrained deep learning models through transfer learning to extract relevant features from x-rays and uses a capsule network to draw prediction results.
arXiv Detail & Related papers (2020-12-26T03:00:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.