HealthyGAN: Learning from Unannotated Medical Images to Detect Anomalies Associated with Human Disease
- URL: http://arxiv.org/abs/2209.01822v1
- Date: Mon, 5 Sep 2022 08:10:52 GMT
- Title: HealthyGAN: Learning from Unannotated Medical Images to Detect Anomalies Associated with Human Disease
- Authors: Md Mahfuzur Rahman Siddiquee, Jay Shah, Teresa Wu, Catherine Chong, Todd Schwedt, and Baoxin Li
- Abstract summary: A typical technique in the current medical imaging literature has focused on deriving diagnostic models from healthy subjects only.
HealthyGAN learns to translate the images from the mixed dataset to only healthy images.
Being one-directional, HealthyGAN relaxes the requirement of cycle consistency of existing unpaired image-to-image translation methods.
- Score: 13.827062843105365
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated anomaly detection from medical images, such as MRIs and X-rays, can
significantly reduce human effort in disease diagnosis. Owing to the complexity
of modeling anomalies and the high cost of manual annotation by domain experts
(e.g., radiologists), a typical technique in the current medical imaging
literature has focused on deriving diagnostic models from healthy subjects
only, assuming the model will detect the images from patients as outliers.
However, in many real-world scenarios, unannotated datasets with a mix of both
healthy and diseased individuals are abundant. Therefore, this paper poses the
research question of how to improve unsupervised anomaly detection by utilizing
(1) an unannotated set of mixed images, in addition to (2) the set of healthy
images as being used in the literature. To answer the question, we propose
HealthyGAN, a novel one-directional image-to-image translation method, which
learns to translate the images from the mixed dataset to only healthy images.
Being one-directional, HealthyGAN relaxes the requirement of cycle consistency
of existing unpaired image-to-image translation methods, which is unattainable
with mixed unannotated data. Once the translation is learned, we generate a
difference map for any given image by subtracting its translated output from
the original input.
Regions of significant responses in the difference map correspond to potential
anomalies (if any). Our HealthyGAN outperforms the conventional
state-of-the-art methods by significant margins on two publicly available
datasets: COVID-19 and NIH ChestX-ray14, and one institutional dataset
collected from Mayo Clinic. The implementation is publicly available at
https://github.com/mahfuzmohammad/HealthyGAN.
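For illustration, here is a minimal inference-time sketch of the difference-map idea described in the abstract. It assumes a HealthyGAN-style generator is already trained and is available as a PyTorch module mapping an input image to its healthy translation; the module interface, tensor layout, and threshold are illustrative assumptions rather than the authors' released API (see the repository above for the reference implementation).

```python
# Minimal sketch, assuming a trained HealthyGAN-style generator `G` (a torch.nn.Module
# mapping an image tensor to its "healthy" translation). The threshold and tensor layout
# are illustrative assumptions, not details of the official implementation.
import torch

@torch.no_grad()
def detect_anomalies(G: torch.nn.Module, image: torch.Tensor, threshold: float = 0.1):
    """Compute a difference map and an image-level anomaly score for one image.

    image: tensor of shape (1, C, H, W), intensities scaled to [-1, 1].
    Returns (difference map, binary anomaly mask, scalar anomaly score).
    """
    G.eval()
    healthy = G(image)                 # one-directional translation to the healthy domain
    diff = (image - healthy).abs()     # difference map: large responses mark potential anomalies
    mask = (diff > threshold).float()  # localize regions of significant response
    score = diff.mean().item()         # simple image-level score (mean absolute difference)
    return diff, mask, score
```

Because the translation is one-directional, only the generator is needed at inference time: an image that is already healthy should translate to (approximately) itself and yield a near-zero difference map.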
Related papers
- Fair Text to Medical Image Diffusion Model with Subgroup Distribution Aligned Tuning [12.064840522920251]
The text-to-medical-image (T2MedI) model with latent diffusion has great potential to alleviate the scarcity of medical imaging data.
However, as with text-to-natural-image models, we show that the T2MedI model can also be biased toward some subgroups and overlook the minority ones in the training set.
In this work, we first build a T2MedI model based on the pre-trained Imagen model, which has a fixed contrastive language-image pre-training (CLIP) text encoder.
Its decoder has been fine-tuned on medical images from the Radiology Objects in COntext (ROCO) dataset.
arXiv Detail & Related papers (2024-06-21T03:23:37Z)
- Spatial-aware Attention Generative Adversarial Network for Semi-supervised Anomaly Detection in Medical Image [63.59114880750643]
We introduce a novel Spatial-aware Attention Generative Adversarial Network (SAGAN) for one-class semi-supervised generation of health images.
SAGAN generates high-quality health images corresponding to unlabeled data, guided by the reconstruction of normal images and restoration of pseudo-anomaly images.
Extensive experiments on three medical datasets demonstrate that the proposed SAGAN outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-05-21T15:41:34Z)
- MedIAnomaly: A comparative study of anomaly detection in medical images [26.319602363581442]
Anomaly detection (AD) aims at detecting abnormal samples that deviate from the expected normal patterns.
Despite numerous methods for medical AD, we observe a lack of a fair and comprehensive evaluation.
This paper builds a benchmark with unified comparison.
arXiv Detail & Related papers (2024-04-06T06:18:11Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- VALD-MD: Visual Attribution via Latent Diffusion for Medical Diagnostics [0.0]
Visual attribution in medical imaging seeks to make evident the diagnostically-relevant components of a medical image.
We here present a novel generative visual attribution technique, one that leverages latent diffusion models in combination with domain-specific large language models.
The resulting system also exhibits a range of latent capabilities including zero-shot localized disease induction.
arXiv Detail & Related papers (2024-01-02T19:51:49Z)
- Metadata-enhanced contrastive learning from retinal optical coherence tomography images [7.932410831191909]
We extend conventional contrastive frameworks with a novel metadata-enhanced strategy.
Our approach employs widely available patient metadata to approximate the true set of inter-image contrastive relationships.
Our approach outperforms both standard contrastive methods and a retinal image foundation model in five out of six image-level downstream tasks.
arXiv Detail & Related papers (2022-08-04T08:53:15Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and utilize the enhanced results to cope with the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Convolutional-LSTM for Multi-Image to Single Output Medical Prediction [55.41644538483948]
A common scenario in developing countries is that volume metadata are lost due to multiple reasons.
It is possible to build a multi-image-to-single-output diagnostic model that mimics the human doctor's diagnostic process.
arXiv Detail & Related papers (2020-10-20T04:30:09Z)
- Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
- A versatile anomaly detection method for medical images with a flow-based generative model in semi-supervision setting [0.0]
We present an anomaly detection method based on two trained flow-based generative models.
With this method, the posterior probability can be computed as a normality metric for any given image.
The method was validated with two types of medical images: chest X-ray radiographs (CXRs) and brain computed tomographies (BCTs). (A minimal sketch of this likelihood-based normality scoring appears after this list.)
arXiv Detail & Related papers (2020-01-22T02:01:57Z)
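As noted in the last entry above, the flow-based method scores an image with a posterior probability of normality computed from trained generative models. The sketch below shows one way such a score can be formed from two per-image log-likelihoods via Bayes' rule; the two-model pairing, function names, and class prior are illustrative assumptions, not details taken from that paper.

```python
# Illustrative sketch only: combine per-image log-likelihoods from two generative models
# (e.g., one fit to normal images, one to a broader reference set) into a posterior
# probability of "normal" via Bayes' rule. The prior value is an assumption.
import math

def normality_posterior(log_p_normal: float, log_p_other: float,
                        prior_normal: float = 0.5) -> float:
    """Posterior P(normal | image) from two log-likelihoods and a class prior."""
    log_prior_ratio = math.log(prior_normal) - math.log(1.0 - prior_normal)
    logit = (log_p_normal - log_p_other) + log_prior_ratio
    return 1.0 / (1.0 + math.exp(-logit))

# Example: the "normal" model assigns a much higher likelihood to this image.
print(normality_posterior(log_p_normal=-1050.0, log_p_other=-1080.0))  # ~1.0
```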
This list is automatically generated from the titles and abstracts of the papers on this site.