nnOOD: A Framework for Benchmarking Self-supervised Anomaly Localisation Methods
- URL: http://arxiv.org/abs/2209.01124v1
- Date: Fri, 2 Sep 2022 15:34:02 GMT
- Title: nnOOD: A Framework for Benchmarking Self-supervised Anomaly Localisation Methods
- Authors: Matthew Baugh, Jeremy Tan, Athanasios Vlontzos, Johanna P. Müller, Bernhard Kainz
- Abstract summary: nnOOD adapts nnU-Net to allow for comparison of self-supervised anomaly localisation methods.
We implement the current state-of-the-art tasks and evaluate them on a challenging X-ray dataset.
- Score: 4.31513157813239
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The wide variety of in-distribution and out-of-distribution data in medical
imaging makes universal anomaly detection a challenging task. Recently a number
of self-supervised methods have been developed that train end-to-end models on
healthy data augmented with synthetic anomalies. However, it is difficult to
compare these methods as it is not clear whether gains in performance are from
the task itself or the training pipeline around it. It is also difficult to
assess whether a task generalises well for universal anomaly detection, as they
are often only tested on a limited range of anomalies. To assist with this we
have developed nnOOD, a framework that adapts nnU-Net to allow for comparison
of self-supervised anomaly localisation methods. By isolating the synthetic,
self-supervised task from the rest of the training process we perform a more
faithful comparison of the tasks, whilst also making the workflow for
evaluating over a given dataset quick and easy. Using this we have implemented
the current state-of-the-art tasks and evaluated them on a challenging X-ray
dataset.
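The core design choice the abstract describes is isolating the synthetic, self-supervised task from the surrounding nnU-Net-derived training pipeline, so that tasks can be swapped and compared on equal footing. The sketch below is not the actual nnOOD API; it is a minimal, hypothetical illustration (names such as SyntheticTask, PatchShuffleTask and make_training_batch are invented for this example) of how such an interface separates "what corruption to synthesise" from "how the model is trained":

```python
# Hypothetical sketch, NOT the actual nnOOD API: a minimal interface that keeps
# the synthetic, self-supervised task separate from the training pipeline, so
# that tasks can be swapped and compared without touching anything else.
from __future__ import annotations

from abc import ABC, abstractmethod

import numpy as np


class SyntheticTask(ABC):
    """A self-supervised task turns a healthy image into a corrupted image
    plus a pixel-wise anomaly target, independently of the trainer."""

    @abstractmethod
    def apply(self, image: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        ...


class PatchShuffleTask(SyntheticTask):
    """Toy example task: copy a random patch to another location and mark
    the pasted region as anomalous."""

    def __init__(self, patch_size: int = 32, rng: np.random.Generator | None = None):
        self.patch_size = patch_size
        self.rng = rng if rng is not None else np.random.default_rng()

    def apply(self, image: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        h, w = image.shape[-2:]
        p = min(self.patch_size, h, w)
        ys, xs = self.rng.integers(0, h - p + 1), self.rng.integers(0, w - p + 1)  # source
        yd, xd = self.rng.integers(0, h - p + 1), self.rng.integers(0, w - p + 1)  # destination
        corrupted = image.copy()
        corrupted[..., yd:yd + p, xd:xd + p] = image[..., ys:ys + p, xs:xs + p]
        target = np.zeros((h, w), dtype=np.float32)
        target[yd:yd + p, xd:xd + p] = 1.0  # pixel-wise anomaly label
        return corrupted, target


def make_training_batch(images: list[np.ndarray], task: SyntheticTask):
    """The trainer only sees the task through SyntheticTask.apply, so comparing
    tasks amounts to passing a different task object into the same pipeline."""
    pairs = [task.apply(img) for img in images]
    inputs = np.stack([corrupted for corrupted, _ in pairs])
    targets = np.stack([target for _, target in pairs])
    return inputs, targets  # fed to an nnU-Net-style segmentation model and loss
```

Because the trainer interacts with the task only through this narrow interface, gains measured when swapping tasks can be attributed to the task itself rather than to differences in the surrounding training pipeline, which is the comparison the abstract argues for.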
Related papers
- GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features [68.14842693208465]
GeneralAD is an anomaly detection framework designed to operate in semantic, near-distribution, and industrial settings.
We propose a novel self-supervised anomaly generation module that employs straightforward operations, such as noise addition and shuffling, applied to patch features (see the sketch after this list).
We extensively evaluated our approach on ten datasets, achieving state-of-the-art results on six and on-par performance on the remaining four.
arXiv Detail & Related papers (2024-07-17T09:27:41Z)
- A Comprehensive Library for Benchmarking Multi-class Visual Anomaly Detection [52.228708947607636]
This paper introduces a comprehensive visual anomaly detection benchmark, ADer, which is a modular framework for new methods.
The benchmark includes multiple datasets from industrial and medical domains, implementing fifteen state-of-the-art methods and nine comprehensive metrics.
We objectively reveal the strengths and weaknesses of different methods and provide insights into the challenges and future directions of multi-class visual anomaly detection.
arXiv Detail & Related papers (2024-06-05T13:40:07Z)
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- A Generic Machine Learning Framework for Fully-Unsupervised Anomaly Detection with Contaminated Data [0.0]
We introduce a framework for a fully unsupervised refinement of contaminated training data for AD tasks.
The framework is generic and can be applied to any residual-based machine learning model.
We show its clear superiority over the naive approach of training with contaminated data without refinement.
arXiv Detail & Related papers (2023-08-25T12:47:59Z)
- Many tasks make light work: Learning to localise medical anomalies from multiple synthetic tasks [2.912977051718473]
There is growing interest in single-class modelling and out-of-distribution detection.
Fully supervised machine learning models cannot reliably identify classes not included in their training.
We make use of multiple visually-distinct synthetic anomaly learning tasks for both training and validation.
arXiv Detail & Related papers (2023-07-03T09:52:54Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Meta-learning One-class Classifiers with Eigenvalue Solvers for Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z)
- Self-Taught Semi-Supervised Anomaly Detection on Upper Limb X-rays [11.859913430860335]
Supervised deep networks take for granted the availability of a large number of annotations by radiologists.
Our approach's rationale is to use pretext tasks to leverage unlabeled data.
We show that our method outperforms baselines across unsupervised and self-supervised anomaly detection settings.
arXiv Detail & Related papers (2021-02-19T12:32:58Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific un-labeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- Interpretable Anomaly Detection with Mondrian Pólya Forests on Data Streams [6.177270420667713]
Anomaly detection at scale is an extremely challenging problem of great practical importance.
Recent work has coalesced on variations of (random) $k$d-trees to summarise data for anomaly detection.
These methods rely on ad-hoc score functions that are not easy to interpret.
We contextualise these methods in a probabilistic framework which we call the Mondrian Pólya Forest.
arXiv Detail & Related papers (2020-08-04T13:19:07Z)
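As a purely illustrative aside for the GeneralAD entry above (noise addition and shuffling applied to patch features), the following hedged sketch shows what such a feature-level corruption could look like. It is not that paper's implementation, and the function name and parameters (corrupt_patch_features, noise_std, shuffle_frac) are invented for this example.

```python
# Hypothetical sketch of feature-level anomaly generation in the spirit of the
# GeneralAD summary above (noise addition and shuffling of patch features);
# this is not the paper's actual implementation.
from __future__ import annotations

import torch


def corrupt_patch_features(feats: torch.Tensor,
                           noise_std: float = 0.1,
                           shuffle_frac: float = 0.25) -> tuple[torch.Tensor, torch.Tensor]:
    """feats: (B, N, D) patch features, e.g. from a ViT backbone.
    Returns corrupted features and a per-patch pseudo-anomaly mask."""
    b, n, d = feats.shape
    corrupted = feats.clone()
    mask = torch.zeros(b, n, dtype=torch.bool, device=feats.device)
    num_sel = max(1, int(shuffle_frac * n))
    for i in range(b):
        idx = torch.randperm(n, device=feats.device)[:num_sel]
        # Add Gaussian noise to the selected patch features.
        corrupted[i, idx] += noise_std * torch.randn(num_sel, d, device=feats.device)
        # Shuffle the selected patch features among themselves.
        corrupted[i, idx] = corrupted[i, idx][torch.randperm(num_sel, device=feats.device)]
        mask[i, idx] = True  # per-patch pseudo-anomaly target
    return corrupted, mask
```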