Interpretation of Disease Evidence for Medical Images Using Adversarial
Deformation Fields
- URL: http://arxiv.org/abs/2007.01975v2
- Date: Thu, 20 Apr 2023 02:05:48 GMT
- Title: Interpretation of Disease Evidence for Medical Images Using Adversarial
Deformation Fields
- Authors: Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Clement Vachet, Tolga
Tasdizen
- Abstract summary: We propose a novel method for formulating and presenting spatial explanations of disease evidence.
An adversarially trained generator produces deformation fields that modify images of diseased patients to resemble images of healthy patients.
We validate the method by studying chronic obstructive pulmonary disease (COPD) evidence in chest x-rays (CXRs) and Alzheimer's disease (AD) evidence in brain MRIs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high complexity of deep learning models is associated with the difficulty
of explaining what evidence they recognize as correlating with specific disease
labels. This information is critical for building trust in models and finding
their biases. Until now, automated deep learning visualization solutions have
identified regions of images used by classifiers, but these solutions are too
coarse, too noisy, or have a limited representation of the way images can
change. We propose a novel method for formulating and presenting spatial
explanations of disease evidence, called deformation field interpretation with
generative adversarial networks (DeFI-GAN). An adversarially trained generator
produces deformation fields that modify images of diseased patients to resemble
images of healthy patients. We validate the method by studying chronic obstructive
pulmonary disease (COPD) evidence in chest x-rays (CXRs) and Alzheimer's
disease (AD) evidence in brain MRIs. When extracting disease evidence in
longitudinal data, we show compelling results against a baseline producing
difference maps. DeFI-GAN also highlights disease biomarkers not found by
previous methods and potential biases that may help in investigations of the
dataset and of the adopted learning methods.
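To make the deformation-field idea concrete, here is a minimal sketch of the core operation described in the abstract: a dense displacement field (produced by the adversarially trained generator in the actual method) warps a diseased image toward a pseudo-healthy appearance, and its per-pixel magnitude gives one way to read off a spatial evidence map alongside a difference-map-style baseline. This is not the authors' implementation; the helper names (identity_grid, apply_deformation) and the random stand-in field are assumptions for illustration, with PyTorch's grid_sample doing the bilinear warping.

```python
import torch
import torch.nn.functional as F

def identity_grid(h, w, device):
    # Sampling grid that leaves the image unchanged, in grid_sample's [-1, 1] convention.
    ys = torch.linspace(-1.0, 1.0, h, device=device)
    xs = torch.linspace(-1.0, 1.0, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack((gx, gy), dim=-1)            # (H, W, 2), (x, y) order

def apply_deformation(image, flow):
    # Warp image (N, 1, H, W) with a dense displacement field flow (N, H, W, 2).
    _, _, h, w = image.shape
    grid = identity_grid(h, w, image.device).unsqueeze(0) + flow
    return F.grid_sample(image, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Hypothetical usage: in the paper the displacement field comes from an
# adversarially trained generator; a small random field stands in here.
diseased = torch.rand(1, 1, 256, 256)               # e.g. a normalized chest x-ray
flow = 0.01 * torch.randn(1, 256, 256, 2)           # would be generator(diseased)
pseudo_healthy = apply_deformation(diseased, flow)
evidence_map = flow.norm(dim=-1)                    # displacement magnitude per pixel
difference_map = (pseudo_healthy - diseased).abs()  # difference-map-style baseline
```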
Related papers
- RadGazeGen: Radiomics and Gaze-guided Medical Image Generation using Diffusion Models [11.865553250973589]
RadGazeGen is a framework for integrating experts' eye gaze patterns and radiomic feature maps as controls to text-to-image diffusion models.
arXiv Detail & Related papers (2024-10-01T01:10:07Z) - Inpainting Pathology in Lumbar Spine MRI with Latent Diffusion [4.410798232767917]
We propose an efficient method for inpainting pathological features onto healthy anatomy in MRI.
We evaluate the method's ability to insert disc herniation and central canal stenosis in lumbar spine sagittal T2 MRI.
arXiv Detail & Related papers (2024-06-04T16:47:47Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - Feature Representation Learning for Robust Retinal Disease Detection
from Optical Coherence Tomography Images [0.0]
Ophthalmic images may contain identical-looking pathologies that can cause failure in automated techniques to distinguish different retinal degenerative diseases.
In this work, we propose a robust disease detection architecture with three learning heads.
Our experimental results on two publicly available OCT datasets illustrate that the proposed model outperforms existing state-of-the-art models in terms of accuracy, interpretability, and robustness for out-of-distribution retinal disease detection.
arXiv Detail & Related papers (2022-06-24T07:59:36Z) - SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns; at inference time, it can identify anomalies (unseen or modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training and improve detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Assessing glaucoma in retinal fundus photographs using Deep Feature
Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until the symptoms are severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - Cross Chest Graph for Disease Diagnosis with Structural Relational
Reasoning [2.7148274921314615]
Locating lesions is important in the computer-aided diagnosis of X-ray images.
General weakly-supervised methods have failed to consider the characteristics of X-ray images.
We propose the Cross-chest Graph (CCG), which improves the performance of automatic lesion detection.
arXiv Detail & Related papers (2021-01-22T08:24:04Z) - Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete
Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
arXiv Detail & Related papers (2020-02-26T17:10:48Z)