Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images
- URL: http://arxiv.org/abs/2103.03423v1
- Date: Fri, 5 Mar 2021 01:56:58 GMT
- Title: Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images
- Authors: Yu Tian and Guansong Pang and Fengbei Liu and Yuanhong Chen and Seon Ho Shin and Johan W. Verjans and Rajvinder Singh and Gustavo Carneiro
- Abstract summary: Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively with normal (i.e., healthy) images.
We propose a novel self-supervised representation learning method, called Constrained Contrastive Distribution learning for anomaly detection (CCD).
Our method outperforms current state-of-the-art UAD approaches on three different colonoscopy and fundus screening datasets.
- Score: 23.79184121052212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively
with normal (i.e., healthy) images to detect any abnormal (i.e., unhealthy)
samples that do not conform to the expected normal patterns. UAD has two main
advantages over its fully supervised counterpart. Firstly, it is able to
directly leverage large datasets available from health screening programs that
contain mostly normal image samples, avoiding the costly manual labelling of
abnormal samples and the subsequent issues involved in training with extremely
class-imbalanced data. Further, UAD approaches can potentially detect and
localise any type of lesions that deviate from the normal patterns. One
significant challenge faced by UAD methods is how to learn effective
low-dimensional image representations to detect and localise subtle
abnormalities, generally consisting of small lesions. To address this
challenge, we propose a novel self-supervised representation learning method,
called Constrained Contrastive Distribution learning for anomaly detection
(CCD), which learns fine-grained feature representations by simultaneously
predicting the distribution of augmented data and image contexts using
contrastive learning with pretext constraints. The learned representations can
be leveraged to train more anomaly-sensitive detection models. Extensive
experimental results show that our method outperforms current state-of-the-art
UAD approaches on three different colonoscopy and fundus screening datasets.
Our code is available at https://github.com/tianyu0207/CCD.
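To make the recipe above concrete, the sketch below illustrates the general pattern the abstract describes: contrastive instance discrimination combined with a pretext constraint that predicts which strong augmentation was applied to each view. It is a hypothetical PyTorch sketch, not the authors' implementation; the augmentation set, loss weight, and network sizes are assumptions, and the repository linked above contains the actual CCD code.

```python
# Minimal, hypothetical sketch of contrastive pre-training with a pretext
# constraint (predicting the applied augmentation). Not the authors' CCD code;
# see the repository linked above for the real implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_AUGS = 4       # assumed augmentation set, e.g. rotations / cut-out / blur
TEMPERATURE = 0.1  # assumed contrastive temperature


class Encoder(nn.Module):
    """Backbone + projection head + pretext head (augmentation prediction)."""

    def __init__(self, feat_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                      # keep 512-d features
        self.backbone = backbone
        self.proj = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                                  nn.Linear(512, feat_dim))
        self.aug_head = nn.Linear(512, NUM_AUGS)         # which augmentation was applied?

    def forward(self, x):
        h = self.backbone(x)
        return F.normalize(self.proj(h), dim=1), self.aug_head(h)


def nt_xent(z1, z2, tau=TEMPERATURE):
    """Standard NT-Xent contrastive loss over two augmented views."""
    z = torch.cat([z1, z2], dim=0)                       # (2N, D), already L2-normalised
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def training_step(model, view1, view2, aug1, aug2, optimizer, lam=1.0):
    """One step: contrastive loss plus the pretext (augmentation-prediction) constraint."""
    z1, logits1 = model(view1)
    z2, logits2 = model(view2)
    loss = nt_xent(z1, z2) + lam * (F.cross_entropy(logits1, aug1) +
                                    F.cross_entropy(logits2, aug2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The frozen encoder would then feed a downstream anomaly detector, as the abstract notes, e.g. a density model or one-class classifier fitted on the learned features.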
Related papers
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images after a certain amount of noise is added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
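The reconstruction-based scoring this line of work relies on can be illustrated with a generic diffusion sketch: noise the test image part-way, let a noise predictor trained on normal data estimate the clean image, and score pixels by the disagreement. The snippet below uses a one-step x0 estimate instead of the full reverse chain and assumes a pretrained `eps_model`; it is not GLAD's global/local adaptive scheme.

```python
# Generic, hypothetical diffusion-reconstruction anomaly scoring (not GLAD's
# adaptive scheme). Assumes `eps_model` is a noise predictor pretrained on
# normal images and `betas` is its 1-D noise schedule.
import torch

@torch.no_grad()
def anomaly_map(x, eps_model, betas, t):
    """x: (B, C, H, W) images in [-1, 1]; t: diffusion step to noise up to."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]                 # \bar{alpha}_t
    noise = torch.randn_like(x)
    x_t = alpha_bar.sqrt() * x + (1 - alpha_bar).sqrt() * noise      # forward diffusion
    eps_hat = eps_model(x_t, torch.full((x.size(0),), t, device=x.device))
    # One-step estimate of the clean image from the predicted noise.
    x0_hat = (x_t - (1 - alpha_bar).sqrt() * eps_hat) / alpha_bar.sqrt()
    return (x - x0_hat).abs().mean(dim=1, keepdim=True)              # per-pixel anomaly map
```

An image-level score can then be taken as, for example, the mean or maximum of the map.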
- Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts [25.629973843455495]
Generalist Anomaly Detection (GAD) aims to train a single detection model that can generalize to detect anomalies in diverse datasets from different application domains without further training on the target data.
We introduce a novel approach that learns an in-context residual learning model for GAD, termed InCTRL.
InCTRL is the best performer and significantly outperforms state-of-the-art competing methods.
arXiv Detail & Related papers (2024-03-11T08:07:46Z)
- CRADL: Contrastive Representations for Unsupervised Anomaly Detection and Localization [2.8659934481869715]
Unsupervised anomaly detection in medical imaging aims to detect and localize arbitrary anomalies without requiring anomalous data during training.
Most current state-of-the-art methods use latent variable generative models operating directly on the images.
We propose CRADL, whose core idea is to model the distribution of normal samples directly in the low-dimensional representation space of an encoder trained with a contrastive pretext task.
arXiv Detail & Related papers (2023-01-05T16:07:49Z)
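The core step described above, modelling the distribution of normal samples in the latent space of a contrastively pretrained encoder, can be illustrated with a simple Gaussian fit and a Mahalanobis-distance score. This is only a stand-in for whatever density model the paper actually uses; the encoder is assumed to be frozen.

```python
# Hypothetical sketch: fit a Gaussian to latent codes of normal images from a
# frozen, contrastively pretrained encoder, then score test images by squared
# Mahalanobis distance. A stand-in illustration, not CRADL's exact model.
import torch

@torch.no_grad()
def fit_gaussian(encoder, normal_loader, device="cpu"):
    """normal_loader is assumed to yield (image, ...) batches of normal data."""
    z = torch.cat([encoder(x.to(device)) for x, *_ in normal_loader])    # (N, D)
    mu = z.mean(dim=0)
    cov = torch.cov(z.t()) + 1e-4 * torch.eye(z.size(1), device=device)  # regularised covariance
    return mu, torch.linalg.inv(cov)

@torch.no_grad()
def anomaly_score(encoder, x, mu, cov_inv):
    d = encoder(x) - mu                                  # (B, D) deviation from the normal mean
    return torch.einsum("bi,ij,bj->b", d, cov_inv, d)    # squared Mahalanobis distance
```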
- Dual-distribution discrepancy with self-supervised refinement for anomaly detection in medical images [29.57501199670898]
We introduce one-class semi-supervised learning (OC-SSL) to utilize known normal and unlabeled images for training.
Ensembles of reconstruction networks are designed to model the distribution of normal images and the distribution of both normal and unlabeled images.
We propose a new perspective on self-supervised learning, which is designed to refine the anomaly scores rather than detect anomalies directly.
arXiv Detail & Related papers (2022-10-09T11:18:45Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- Self-supervised Pseudo Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images [31.676609117780114]
Unsupervised anomaly detection (UAD) methods are trained with normal (or healthy) images only, but during testing, they are able to classify normal and abnormal images.
We propose a new self-supervised pre-training method for medical image analysis (MIA) UAD applications, named Pseudo Multi-class Strong Augmentation via Contrastive Learning (PMSACL).
arXiv Detail & Related papers (2021-09-03T04:25:57Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
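The "prior probability" in this summary refers to scoring samples against reference scores drawn from a prior distribution; a deviation-style loss then pulls normal samples towards the prior mean and pushes labelled anomalies a margin above it. The sketch below is an illustrative version of such a loss; the margin and reference-sample count are arbitrary choices, not the paper's settings.

```python
# Illustrative deviation-style loss: normal samples should score near the mean
# of a N(0, 1) prior, labelled anomalies at least `margin` standard deviations
# above it. Margin and reference count are assumed values for illustration.
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """scores: (B,) predicted anomaly scores; labels: (B,) float, 0 = normal, 1 = anomaly."""
    ref = torch.randn(n_ref, device=scores.device)        # reference scores from the prior
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)       # z-score w.r.t. the prior
    normal_term = (1.0 - labels) * dev.abs()
    anomaly_term = labels * torch.clamp(margin - dev, min=0.0)
    return (normal_term + anomaly_term).mean()
```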
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
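The two-stage recipe above hinges on the CutPaste augmentation itself: a patch is cut from the image and pasted back at a different location to create a synthetic irregularity for the self-supervised task. A rough sketch of that augmentation follows (the patch-size range is an illustrative assumption, not the paper's configuration); the learned representations can then be scored with a generative one-class classifier, e.g. a Gaussian as in the latent-space sketch further up.

```python
# Rough sketch of a CutPaste-style augmentation: copy a random patch and paste
# it at another random location, creating a synthetic "defect". The patch-size
# range is an assumed, illustrative choice.
import random
import torch

def cutpaste(img, area_frac=(0.02, 0.15)):
    """img: (C, H, W) tensor; returns a copy with one patch cut and pasted elsewhere."""
    _, h, w = img.shape
    frac = random.uniform(*area_frac)
    ph, pw = max(1, int(h * frac ** 0.5)), max(1, int(w * frac ** 0.5))
    sy, sx = random.randint(0, h - ph), random.randint(0, w - pw)   # source patch
    dy, dx = random.randint(0, h - ph), random.randint(0, w - pw)   # destination (kept simple)
    out = img.clone()
    out[:, dy:dy + ph, dx:dx + pw] = img[:, sy:sy + ph, sx:sx + pw]
    return out
```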
- Manifolds for Unsupervised Visual Anomaly Detection [79.22051549519989]
Unsupervised learning methods that don't necessarily encounter anomalies in training would be immensely useful.
We develop a novel hyperspherical Variational Auto-Encoder (VAE) via stereographic projections with a gyroplane layer.
We present state-of-the-art results on visual anomaly benchmarks in precision manufacturing and inspection, demonstrating real-world utility in industrial AI scenarios.
arXiv Detail & Related papers (2020-06-19T20:41:58Z)
- OIAD: One-for-all Image Anomaly Detection with Disentanglement Learning [23.48763375455514]
We propose a One-for-all Image Anomaly Detection system based on disentangled learning using only clean samples.
Our experiments with three datasets show that OIAD can detect over 90% of anomalies while maintaining a low false alarm rate.
arXiv Detail & Related papers (2020-01-18T09:57:37Z)