ExplAIn: Explanatory Artificial Intelligence for Diabetic Retinopathy
Diagnosis
- URL: http://arxiv.org/abs/2008.05731v3
- Date: Thu, 22 Jul 2021 12:16:08 GMT
- Title: ExplAIn: Explanatory Artificial Intelligence for Diabetic Retinopathy
Diagnosis
- Authors: Gwenolé Quellec, Hassan Al Hajj, Mathieu Lamard, Pierre-Henri Conze,
Pascale Massin, Béatrice Cochener
- Abstract summary: We describe an eXplanatory Artificial Intelligence (XAI) that reaches the same level of performance as black-box AI.
This algorithm, called ExplAIn, learns to segment and categorize lesions in images.
We expect this new framework, which jointly offers high classification performance and explainability, to facilitate AI deployment.
- Score: 0.46137254657294535
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, Artificial Intelligence (AI) has proven its relevance for
medical decision support. However, the "black-box" nature of successful AI
algorithms still holds back their widespread deployment. In this paper, we
describe an eXplanatory Artificial Intelligence (XAI) that reaches the same
level of performance as black-box AI, for the task of classifying Diabetic
Retinopathy (DR) severity using Color Fundus Photography (CFP). This algorithm,
called ExplAIn, learns to segment and categorize lesions in images; the final
image-level classification directly derives from these multivariate lesion
segmentations. The novelty of this explanatory framework is that it is trained
from end to end, with image supervision only, just like black-box AI
algorithms: the concepts of lesions and lesion categories emerge by themselves.
For improved lesion localization, foreground/background separation is trained
through self-supervision, in such a way that occluding foreground pixels
transforms the input image into a healthy-looking image. The advantage of such
an architecture is that automatic diagnoses can be explained simply by an image
and/or a few sentences. ExplAIn is evaluated at the image level and at the
pixel level on various CFP image datasets. We expect this new framework, which
jointly offers high classification performance and explainability, to
facilitate AI deployment.
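The self-supervised foreground/background separation described above can be sketched as follows. This is a minimal illustration of the occlusion idea only; the array shapes, the `occlude_foreground` helper, and the constant background fill are assumptions for the sketch, not the paper's implementation:

```python
import numpy as np

def occlude_foreground(image, fg_mask, background_fill):
    """Replace pixels flagged as foreground (lesions) with a background value.

    If the mask is correct, the occluded image should look healthy, which is
    the self-supervision signal described in the abstract.
    """
    fg_mask = fg_mask.astype(image.dtype)
    return image * (1.0 - fg_mask) + background_fill * fg_mask

# Toy 4x4 grayscale image with one bright "lesion" pixel.
image = np.full((4, 4), 0.2)
image[0, 0] = 0.9                 # lesion pixel
mask = np.zeros((4, 4))
mask[0, 0] = 1.0                  # predicted foreground (lesion) location

healthy_looking = occlude_foreground(image, mask, background_fill=0.2)
print(np.allclose(healthy_looking, 0.2))  # True: every pixel matches background
```

In the actual framework, a classifier applied to the occluded image would be trained to predict the healthy class, which pushes the mask to cover exactly the lesion pixels.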
Related papers
- Coupling AI and Citizen Science in Creation of Enhanced Training Dataset for Medical Image Segmentation [3.7274206780843477]
We introduce a robust and versatile framework that combines AI and crowdsourcing to improve the quality and quantity of medical image datasets.
Our approach uses a user-friendly online platform that enables a diverse group of crowd annotators to label medical images efficiently.
We employ pix2pixGAN, a generative AI model, to expand the training dataset with synthetic images that capture realistic morphological features.
arXiv Detail & Related papers (2024-09-04T21:22:54Z)
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We present a sanity check on whether the task of AI-generated image detection has been solved.
To quantify the generalization of existing methods, we evaluate 9 off-the-shelf AI-generated image detectors on Chameleon dataset.
We propose AIDE (AI-generated Image DEtector with Hybrid Features), which leverages multiple experts to simultaneously extract visual artifacts and noise patterns.
arXiv Detail & Related papers (2024-06-27T17:59:49Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
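A residual adapter of the kind this entry describes can be sketched in a few lines. The bottleneck dimension, weight names, and zero initialization here are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def residual_adapter(x, w_down, w_up):
    """Bottleneck residual adapter: project down, apply a nonlinearity,
    project up, then add back to the frozen pre-trained feature."""
    h = np.maximum(0.0, x @ w_down)   # down-projection + ReLU
    return x + h @ w_up               # residual connection preserves the original feature

d, r = 8, 2                           # feature dim and bottleneck dim (assumed)
x = np.ones(d)
w_down = np.zeros((d, r))             # zero-initialized adapter starts as identity
w_up = np.zeros((r, d))
print(np.allclose(residual_adapter(x, w_down, w_up), x))  # True
```

Zero-initializing one projection makes the adapter an identity map at the start of training, so the pre-trained encoder's behavior is preserved until the adapter learns something useful.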
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Latent Diffusion Models with Image-Derived Annotations for Enhanced AI-Assisted Cancer Diagnosis in Histopathology [0.0]
This work proposes a method that constructs structured textual prompts from automatically extracted image features.
We show that including image-derived features in the prompt, as opposed to only healthy and cancerous labels, improves the Fréchet Inception Distance (FID) from 178.8 to 90.2.
We also show that pathologists find it challenging to detect synthetic images, with a median sensitivity/specificity of 0.55/0.55.
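The FID metric cited above compares two Gaussians fitted to Inception features. A minimal sketch of the standard formula (using `scipy.linalg.sqrtm` for the matrix square root; the toy means and covariances are illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet Inception Distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 @ S2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2).real  # drop tiny imaginary numerical noise
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

mu = np.zeros(3)
sigma = np.eye(3)
print(round(float(fid(mu, sigma, mu, sigma)), 6))        # 0.0 for identical stats
print(round(float(fid(mu, sigma, mu + 2.0, sigma)), 6))  # 12.0 = 3 * 2^2
```

Lower is better: identical feature distributions give 0, and the mean-shift term alone explains the second value.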
arXiv Detail & Related papers (2023-12-15T13:48:55Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
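The local-masking disruption can be sketched as follows; this 2D toy (the paper targets 3D radiology volumes), the patch size, and the masking fraction are assumptions for illustration:

```python
import numpy as np

def local_mask(image, patch=4, frac=0.3, rng=None):
    """Zero out a random fraction of non-overlapping patches (local masking).
    A pre-training objective would then reconstruct the original image."""
    rng = rng or np.random.default_rng(0)
    out = image.copy()
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if rng.random() < frac:
                out[y:y + patch, x:x + patch] = 0.0
    return out

img = np.ones((8, 8))
masked = local_mask(img)
print(masked.shape)  # (8, 8), with some patches zeroed
```

Reconstructing the original from such disruptions forces the encoder to model low-level local structure rather than only global semantics.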
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images [7.868449549351487]
This article proposes to enhance our ability to recognise AI-generated images through computer vision.
The two sets of data form a binary classification problem: is a given photograph real or AI-generated?
This study proposes a Convolutional Neural Network (CNN) to classify the images into two categories: real or fake.
arXiv Detail & Related papers (2023-03-24T16:33:06Z)
- Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning [0.0]
This study examines a new and original dataset using a deep learning algorithm and visualizes the output with gradient-weighted class activation mapping (Grad-CAM).
Both the decision-making processes and the explanations were verified, and the accuracy of the output was tested.
The results can substantially assist pathologists in diagnosing paratuberculosis.
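Grad-CAM, the explanation technique this entry relies on, reduces to a weighted sum of convolutional feature maps. A minimal sketch with toy activations and gradients (the shapes are illustrative):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one conv layer.

    feature_maps: (K, H, W) activations A^k of the chosen layer.
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps.
    """
    weights = gradients.mean(axis=(1, 2))              # alpha_k: global-average-pooled grads
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A^k
    return np.maximum(cam, 0.0)                        # ReLU keeps positive evidence only

# Toy layer: 2 channels on a 2x2 grid.
A = np.array([[[1.0, 0.0], [0.0, 0.0]],
              [[0.0, 1.0], [0.0, 0.0]]])
dY = np.array([[[1.0, 1.0], [1.0, 1.0]],        # alpha_0 = +1
               [[-1.0, -1.0], [-1.0, -1.0]]])   # alpha_1 = -1
print(grad_cam(A, dY))  # [[1. 0.] [0. 0.]]: only positively weighted evidence survives
```

The resulting heatmap is upsampled to the input resolution and overlaid on the image to show which regions drove the prediction.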
arXiv Detail & Related papers (2022-08-02T18:05:26Z)
- FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis [82.2511780233828]
We propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various medical image analysis tasks.
Specifically, FIBA leverages a trigger function in the frequency domain that can inject the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitude of both images.
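The frequency-injection trigger described above can be sketched with a 2D FFT: blend the low-frequency amplitude of the trigger into the clean image while keeping the clean image's phase. The blending coefficient, band size, and single-channel setup are assumptions for the sketch, not the paper's exact parameters:

```python
import numpy as np

def fiba_poison(image, trigger, alpha=0.15, band=0.1):
    """Linearly combine the low-frequency spectral AMPLITUDE of a trigger
    image into a clean image, preserving the clean image's phase."""
    f_img = np.fft.fftshift(np.fft.fft2(image))
    f_trg = np.fft.fftshift(np.fft.fft2(trigger))
    amp, phase = np.abs(f_img), np.angle(f_img)
    amp_trg = np.abs(f_trg)

    h, w = image.shape
    cy, cx = h // 2, w // 2
    ry, rx = max(1, int(band * h)), max(1, int(band * w))
    # Blend amplitudes only inside the central (low-frequency) band.
    amp[cy - ry:cy + ry, cx - rx:cx + rx] = (
        (1 - alpha) * amp[cy - ry:cy + ry, cx - rx:cx + rx]
        + alpha * amp_trg[cy - ry:cy + ry, cx - rx:cx + rx]
    )
    poisoned = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase)))
    return poisoned.real

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
trigger = rng.random((32, 32))
poisoned = fiba_poison(clean, trigger)
print(poisoned.shape)  # (32, 32)
```

Because only low-frequency amplitude changes, the poisoned image keeps the clean image's spatial structure, which is what makes the trigger hard to spot visually.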
arXiv Detail & Related papers (2021-12-02T11:52:17Z)
- Deep AUC Maximization for Medical Image Classification: Challenges and Opportunities [60.079782224958414]
We will present and discuss opportunities and challenges brought by a new deep learning method that trains models by directly optimizing the Area Under the ROC Curve (aka Deep AUC Maximization).
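To give the flavor of AUC maximization, here is a generic pairwise surrogate loss: every positive/negative pair whose score gap falls short of a margin is penalized. This is a simple illustration of the general idea, not the specific min-max loss studied in the paper:

```python
import numpy as np

def pairwise_auc_surrogate(scores_pos, scores_neg, margin=1.0):
    """Squared-hinge pairwise surrogate for AUC: penalize pos/neg pairs
    whose score gap is below the margin."""
    gaps = scores_pos[:, None] - scores_neg[None, :]   # all pos/neg pairs
    return np.mean(np.maximum(0.0, margin - gaps) ** 2)

pos = np.array([2.0, 3.0])
neg = np.array([0.0])
print(pairwise_auc_surrogate(pos, neg))  # 0.0: every pair already separated by the margin
```

Minimizing such a pairwise loss pushes positive scores above negative ones, which directly targets the ranking quality that AUC measures, unlike cross-entropy.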
arXiv Detail & Related papers (2021-11-01T15:31:32Z)
- Robust Collaborative Learning of Patch-level and Image-level Annotations for Diabetic Retinopathy Grading from Fundus Image [33.904136933213735]
We present a robust framework, which collaboratively utilizes patch-level and image-level annotations, for DR severity grading.
By an end-to-end optimization, this framework can bi-directionally exchange the fine-grained lesion and image-level grade information.
The proposed framework shows better performance than the recent state-of-the-art algorithms and three clinical ophthalmologists with over nine years of experience.
arXiv Detail & Related papers (2020-08-03T02:17:42Z) - Multi-label Thoracic Disease Image Classification with Cross-Attention
Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to support the cross-attention process and to overcome the imbalance between classes and the easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.