AMAE: Adaptation of Pre-Trained Masked Autoencoder for Dual-Distribution
Anomaly Detection in Chest X-Rays
- URL: http://arxiv.org/abs/2307.12721v3
- Date: Fri, 28 Jul 2023 09:31:31 GMT
- Authors: Behzad Bozorgtabar, Dwarikanath Mahapatra, Jean-Philippe Thiran
- Abstract summary: We propose AMAE, a two-stage algorithm for adaptation of the pre-trained masked autoencoder (MAE)
AMAE leads to consistent performance gains over competing self-supervised and dual distribution anomaly detection methods.
- Score: 17.91123470181453
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Unsupervised anomaly detection in medical images such as chest
radiographs is stepping into the spotlight, as it sidesteps the labor-intensive
and costly expert annotation of anomaly data. However, nearly all existing
methods are formulated as one-class classification, trained only on
representations from the normal class, and thus discard a potentially
significant portion of the unlabeled data. This paper focuses on a more
practical setting, dual-distribution anomaly detection for chest X-rays, using
the entire training data, including both normal and unlabeled images. Inspired
by modern self-supervised vision transformer models trained on partial image
inputs to reconstruct missing image regions, we propose AMAE, a two-stage
algorithm for adapting the pre-trained masked autoencoder (MAE). Starting from
MAE
initialization, AMAE first creates synthetic anomalies from only normal
training images and trains a lightweight classifier on frozen transformer
features. Subsequently, we propose an adaptation strategy to leverage unlabeled
images containing anomalies. The adaptation scheme is accomplished by assigning
pseudo-labels to unlabeled images and using two separate MAE-based modules to
model the normative and anomalous distributions of pseudo-labeled images. The
effectiveness of the proposed adaptation strategy is evaluated with different
anomaly ratios in an unlabeled training set. AMAE leads to consistent
performance gains over competing self-supervised and dual-distribution anomaly
detection methods, setting the new state-of-the-art on three public chest X-ray
benchmarks: RSNA, NIH-CXR, and VinDr-CXR.
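The two-stage recipe described in the abstract can be illustrated with a toy sketch. This is a minimal NumPy mock-up, not the authors' implementation: the real AMAE uses a ViT-based masked autoencoder, and the function names, the constant "reconstruction" stand-in, and the threshold below are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(image, patch=4, mask_ratio=0.75):
    """MAE-style input corruption: split the image into patches and zero
    out a random subset; an MAE would be trained to reconstruct them."""
    h, w = image.shape
    cols = w // patch
    n_patches = (h // patch) * cols
    masked = image.copy()
    hidden = rng.choice(n_patches, size=int(mask_ratio * n_patches), replace=False)
    for idx in hidden:
        r, c = divmod(idx, cols)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked

def anomaly_score(image, reconstruct):
    """Mean reconstruction error of the masked input: a model fitted to
    normal images reconstructs normal anatomy well and anomalies poorly."""
    return float(np.mean((reconstruct(mask_patches(image)) - image) ** 2))

def assign_pseudo_labels(unlabeled, reconstruct, threshold):
    """Stage-2 adaptation: split the unlabeled set into pseudo-normal and
    pseudo-anomalous pools; each pool would then train its own MAE-based
    module to model the normative and anomalous distributions."""
    normal_pool, anomalous_pool = [], []
    for img in unlabeled:
        pool = anomalous_pool if anomaly_score(img, reconstruct) > threshold else normal_pool
        pool.append(img)
    return normal_pool, anomalous_pool

# Hypothetical stand-in for an MAE trained on normal images only: it
# always predicts the "normal" appearance (a flat 0.5 intensity here).
normal_only_mae = lambda x: np.full_like(x, 0.5)

normal = [np.full((16, 16), 0.5) for _ in range(5)]   # clean, uniform images
abnormal = [rng.random((16, 16)) for _ in range(5)]   # noisy "anomalies"
norm_pool, anom_pool = assign_pseudo_labels(normal + abnormal, normal_only_mae,
                                            threshold=0.01)
print(len(norm_pool), len(anom_pool))  # -> 5 5
```

Under these toy assumptions the normal images reconstruct perfectly (score 0) while the noisy ones score near the variance of uniform noise, so the pools separate cleanly; in practice the scoring model, the masking ratio, and the pseudo-labeling threshold would all be learned or tuned on real chest X-ray data.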
Related papers
- Spatial-aware Attention Generative Adversarial Network for Semi-supervised Anomaly Detection in Medical Image [63.59114880750643]
We introduce a novel Spatial-aware Attention Generative Adversarial Network (SAGAN) for one-class semi-supervised generation of healthy images.
SAGAN generates high-quality healthy images corresponding to unlabeled data, guided by the reconstruction of normal images and the restoration of pseudo-anomaly images.
Extensive experiments on three medical datasets demonstrate that the proposed SAGAN outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-05-21T15:41:34Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection [106.39544368711427]
We study the problem of generalizable synthetic image detection, aiming to detect forgery images from diverse generative methods.
We present a novel forgery-aware adaptive transformer approach, namely FatFormer.
Tuned on 4-class ProGAN data, our approach attains an average accuracy of 98% on unseen GANs and, surprisingly, generalizes to unseen diffusion models with 95% accuracy.
arXiv Detail & Related papers (2023-12-27T17:36:32Z)
- Dual-distribution discrepancy with self-supervised refinement for anomaly detection in medical images [29.57501199670898]
We introduce one-class semi-supervised learning (OC-SSL) to utilize known normal and unlabeled images for training.
Ensembles of reconstruction networks are designed to model the distribution of normal images and the distribution of both normal and unlabeled images.
We propose a new perspective on self-supervised learning, which is designed to refine the anomaly scores rather than detect anomalies directly.
arXiv Detail & Related papers (2022-10-09T11:18:45Z)
- Seamless Iterative Semi-Supervised Correction of Imperfect Labels in Microscopy Images [57.42492501915773]
In-vitro tests are an alternative to animal testing for the toxicity of medical devices.
Human fatigue contributes to labeling errors, making the use of deep learning appealing.
We propose Seamless Iterative Semi-Supervised correction of Imperfect labels (SISSI)
Our method successfully provides an adaptive early learning correction technique for object detection.
arXiv Detail & Related papers (2022-08-05T18:52:20Z)
- Dual-Distribution Discrepancy for Anomaly Detection in Chest X-Rays [29.57501199670898]
We propose a novel strategy, Dual-distribution Discrepancy for Anomaly Detection (DDAD), utilizing both known normal images and unlabeled images.
Experiments on three CXR datasets demonstrate that the proposed DDAD achieves consistent, significant gains and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-06-08T14:52:27Z)
- Self-supervised Pseudo Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images [31.676609117780114]
Unsupervised anomaly detection (UAD) methods are trained with normal (or healthy) images only, but during testing, they are able to classify normal and abnormal images.
We propose a new self-supervised pre-training method for MIA UAD applications, named Pseudo Multi-class Strong Augmentation via Contrastive Learning (PMSACL)
arXiv Detail & Related papers (2021-09-03T04:25:57Z)
- Margin-Aware Intra-Class Novelty Identification for Medical Images [2.647674705784439]
We propose a hybrid model - Transformation-based Embedding learning for Novelty Detection (TEND)
With a pre-trained autoencoder as image feature extractor, TEND learns to discriminate the feature embeddings of in-distribution data from the transformed counterparts as fake out-of-distribution inputs.
arXiv Detail & Related papers (2021-07-31T00:10:26Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We propose a slice-wise semi-supervised method for tumour detection based on computing a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.