Self-Taught Semi-Supervised Anomaly Detection on Upper Limb X-rays
- URL: http://arxiv.org/abs/2102.09895v2
- Date: Mon, 22 Feb 2021 09:21:09 GMT
- Title: Self-Taught Semi-Supervised Anomaly Detection on Upper Limb X-rays
- Authors: Antoine Spahr, Behzad Bozorgtabar, Jean-Philippe Thiran
- Abstract summary: Supervised deep networks take for granted a large number of annotations by radiologists.
Our approach's rationale is to use task-agnostic pretext tasks to leverage unlabeled data.
We show that our method outperforms baselines across unsupervised and self-supervised anomaly detection settings.
- Score: 11.859913430860335
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Detecting anomalies in musculoskeletal radiographs is of paramount importance
for large-scale screening in the radiology workflow. Supervised deep networks
take for granted a large number of annotations by radiologists, which are often
prohibitively time-consuming to acquire. Moreover, supervised systems are
tailored to closed-set scenarios; e.g., trained models overfit to the rare
anomalies seen during training. Instead, our approach's
rationale is to use task-agnostic pretext tasks to leverage unlabeled data
based on a cross-sample similarity measure. In addition, we formulate a complex
distribution of the normal-class data within our framework to avoid a
potential bias on the side of anomalies. Through extensive experiments, we show
that our method outperforms baselines across unsupervised and self-supervised
anomaly detection settings on a real-world medical dataset, the MURA dataset.
We also provide rich ablation studies to analyze the effect of each training
stage and loss term on the final performance.
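The abstract does not spell out its cross-sample similarity measure. As a hedged illustration only (not the paper's method), one common way to turn cross-sample similarity into an anomaly score is to embed samples and score a query by its mean distance to its k nearest normal-sample embeddings; a minimal numpy sketch, with toy data and the function name `knn_anomaly_score` chosen here for illustration:

```python
import numpy as np

def knn_anomaly_score(embeddings, query, k=3):
    """Score a query by its mean distance to the k nearest
    normal-sample embeddings: far from all normals => anomalous."""
    dists = np.linalg.norm(embeddings - query, axis=1)
    return float(np.sort(dists)[:k].mean())

# Toy data: normal samples cluster near the origin; the outlier sits far away.
rng = np.random.default_rng(0)
normals = rng.normal(0.0, 0.1, size=(50, 8))
inlier_score = knn_anomaly_score(normals, np.zeros(8))
outlier_score = knn_anomaly_score(normals, np.full(8, 5.0))
```

Under this scoring, the outlier receives a much larger score than the inlier, which is the behavior any cross-sample similarity measure of this kind relies on.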
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con2, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z)
- Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts [25.629973843455495]
Generalist Anomaly Detection (GAD) aims to train one single detection model that can generalize to detect anomalies in diverse datasets from different application domains without further training on the target data.
We introduce a novel approach that learns an in-context residual learning model for GAD, termed InCTRL.
InCTRL is the best performer and significantly outperforms state-of-the-art competing methods.
arXiv Detail & Related papers (2024-03-11T08:07:46Z)
- An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination [24.74938110451834]
Most deep anomaly detection models are based on learning normality from datasets.
In practice, the normality assumption is often violated due to the nature of real data distributions.
We propose a learning framework to reduce this gap and achieve better normality representation.
arXiv Detail & Related papers (2023-09-18T02:36:19Z)
- AGAD: Adversarial Generative Anomaly Detection [12.68966318231776]
Anomaly detection suffers from a lack of anomaly examples, owing to the diversity of abnormalities and the difficulty of obtaining large-scale anomaly data.
We propose Adversarial Generative Anomaly Detection (AGAD), a self-contrast-based anomaly detection paradigm.
Our method generates pseudo-anomaly data for both supervised and semi-supervised anomaly detection scenarios.
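AGAD's pseudo-anomalies are produced adversarially; the summary gives no details. Purely as an illustration of the pseudo-anomaly idea (and explicitly not AGAD's generator), a crude way to manufacture anomalies from normal images is to corrupt their local structure, e.g., by swapping two patches; a minimal numpy sketch with an assumed helper name `make_pseudo_anomaly`:

```python
import numpy as np

def make_pseudo_anomaly(image, patch=4):
    """Crude pseudo-anomaly: swap the top-left and bottom-right patches
    of a normal image (illustration only, not AGAD's adversarial generator)."""
    out = image.copy()
    top_left = out[:patch, :patch].copy()
    bottom_right = out[-patch:, -patch:].copy()
    out[:patch, :patch] = bottom_right
    out[-patch:, -patch:] = top_left
    return out

# Toy 8x8 "image": pixel values are rearranged, none are created or lost.
pseudo = make_pseudo_anomaly(np.arange(64.0).reshape(8, 8))
```

Such synthetic corruptions are locally plausible but globally inconsistent, which is what makes them usable as negative examples in both supervised and semi-supervised training.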
arXiv Detail & Related papers (2023-04-09T10:40:02Z) - Prototypical Residual Networks for Anomaly Detection and Localization [80.5730594002466]
We propose a framework called Prototypical Residual Network (PRN)
PRN learns feature residuals of varying scales and sizes between anomalous and normal patterns to accurately reconstruct the segmentation maps of anomalous regions.
We present a variety of anomaly generation strategies that consider both seen and unseen appearance variance to enlarge and diversify anomalies.
arXiv Detail & Related papers (2022-12-05T05:03:46Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Understanding the Effect of Bias in Deep Anomaly Detection [15.83398707988473]
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data.
Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples.
In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection.
arXiv Detail & Related papers (2021-05-16T03:55:02Z) - Anomaly Detection on X-Rays Using Self-Supervised Aggregation Learning [16.854288765350283]
SALAD is an end-to-end deep self-supervised methodology for anomaly detection on X-Ray images.
The proposed method is based on an optimization strategy in which a deep neural network is encouraged to represent prototypical local patterns.
Our anomaly score is then derived by measuring similarity to a weighted combination of normal prototypical patterns within a memory bank.
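The summary only states that the score measures similarity to a weighted combination of memory-bank prototypes. A minimal numpy sketch of that general idea, with every detail here (softmax weighting, the temperature value, the toy prototypes, the name `memory_bank_score`) an assumption rather than SALAD's actual formulation:

```python
import numpy as np

def memory_bank_score(bank, feat, temperature=0.1):
    """Anomaly score = 1 - cosine similarity between a feature and a
    softmax-weighted combination of normal prototypes in a memory bank."""
    bank = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    feat = feat / np.linalg.norm(feat)
    sims = bank @ feat                 # cosine similarity to each prototype
    weights = np.exp(sims / temperature)
    weights /= weights.sum()           # attention-style weights over the bank
    proto = weights @ bank             # weighted combination of prototypes
    proto /= np.linalg.norm(proto)
    return float(1.0 - proto @ feat)

# Toy bank: normal prototypes cluster along one direction in feature space.
rng = np.random.default_rng(0)
bank = rng.normal(size=(16, 8)) + np.array([3.0] + [0.0] * 7)
normal_score = memory_bank_score(bank, bank[0] + 0.01)
anomaly_score = memory_bank_score(bank, -bank[0])
```

A feature aligned with the bank scores near zero, while one pointing away from all prototypes scores high, matching the intended use of such a memory bank.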
arXiv Detail & Related papers (2020-10-19T20:49:34Z) - TadGAN: Time Series Anomaly Detection Using Generative Adversarial
Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs)
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one.
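TadGAN reports several anomaly scoring variants; one family it explores combines reconstruction error with the critic output. As a hedged sketch of that combination only (the convex weighting, z-scoring, and the name `tadgan_style_score` are assumptions, and the GAN itself is omitted):

```python
import numpy as np

def tadgan_style_score(x, x_hat, critic, alpha=0.5):
    """Combine pointwise reconstruction error and critic output into one
    anomaly score per time step via a convex combination (illustrative;
    TadGAN itself evaluates several scoring variants)."""
    rec = np.abs(x - x_hat)                          # reconstruction error
    rec = (rec - rec.mean()) / (rec.std() + 1e-8)    # z-score both signals
    cr = (critic - critic.mean()) / (critic.std() + 1e-8)
    return alpha * rec + (1 - alpha) * cr

# Toy series: a flat signal with one spike the "generator" fails to reconstruct.
x = np.zeros(100)
x[50] = 5.0
score = tadgan_style_score(x, np.zeros(100), np.zeros(100))
```

With a neutral critic, the score peaks exactly where the reconstruction fails, i.e., at the injected spike.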
arXiv Detail & Related papers (2020-09-16T15:52:04Z) - Toward Deep Supervised Anomaly Detection: Reinforcement Learning from
Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z) - Manifolds for Unsupervised Visual Anomaly Detection [79.22051549519989]
Unsupervised learning methods that don't necessarily encounter anomalies in training would be immensely useful.
We develop a novel hyperspherical Variational Auto-Encoder (VAE) via stereographic projections with a gyroplane layer.
We present state-of-the-art results on visual anomaly benchmarks in precision manufacturing and inspection, demonstrating real-world utility in industrial AI scenarios.
arXiv Detail & Related papers (2020-06-19T20:41:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.