Many tasks make light work: Learning to localise medical anomalies from
multiple synthetic tasks
- URL: http://arxiv.org/abs/2307.00899v1
- Date: Mon, 3 Jul 2023 09:52:54 GMT
- Title: Many tasks make light work: Learning to localise medical anomalies from
multiple synthetic tasks
- Authors: Matthew Baugh, Jeremy Tan, Johanna P. Müller, Mischa Dombrowski,
James Batten and Bernhard Kainz
- Abstract summary: There is growing interest in single-class modelling and out-of-distribution detection.
Fully supervised machine learning models cannot reliably identify classes not included in their training.
We make use of multiple visually-distinct synthetic anomaly learning tasks for both training and validation.
- Score: 2.912977051718473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a growing interest in single-class modelling and out-of-distribution
detection as fully supervised machine learning models cannot reliably identify
classes not included in their training. The long tail of infinitely many
out-of-distribution classes in real-world scenarios, e.g., for screening,
triage, and quality control, means that it is often necessary to train
single-class models that represent an expected feature distribution, e.g., from
only strictly healthy volunteer data. Conventional supervised machine learning
would require the collection of datasets that contain enough samples of all
possible diseases in every imaging modality, which is not realistic.
Self-supervised learning methods with synthetic anomalies are currently amongst
the most promising approaches, alongside generative auto-encoders that analyse
the residual reconstruction error. However, all methods suffer from a lack of
structured validation, which makes calibration for deployment difficult and
dataset-dependent. Our method alleviates this by making use of multiple
visually-distinct synthetic anomaly learning tasks for both training and
validation. This enables more robust training and generalisation. With our
approach we can readily outperform state-of-the-art methods, which we
demonstrate on exemplars in brain MRI and chest X-rays. Code is available at
https://github.com/matt-baugh/many-tasks-make-light-work .
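Since the abstract only sketches the approach, the following minimal Python sketch illustrates the core idea: define several visually-distinct synthetic anomaly generators, train on all but one, and use the held-out task as a proxy for unseen real anomalies during validation. The particular generators, image sizes, and blending parameters here are illustrative assumptions, not the authors' implementation (see the repository above for that).

    import numpy as np

    def noise_patch(img, rng):
        # Task 1: paste a rectangle of uniform noise into the image.
        out, mask = img.copy(), np.zeros_like(img)
        h, w = img.shape
        ph, pw = rng.integers(h // 8, h // 2), rng.integers(w // 8, w // 2)
        y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
        out[y:y + ph, x:x + pw] = rng.uniform(img.min(), img.max(), (ph, pw))
        mask[y:y + ph, x:x + pw] = 1.0
        return out, mask

    def foreign_patch(img, other, rng):
        # Task 2: blend in a patch taken from a different healthy image.
        out, mask = img.copy(), np.zeros_like(img)
        h, w = img.shape
        ph, pw = rng.integers(h // 8, h // 2), rng.integers(w // 8, w // 2)
        y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
        alpha = rng.uniform(0.3, 1.0)
        out[y:y + ph, x:x + pw] = ((1 - alpha) * out[y:y + ph, x:x + pw]
                                   + alpha * other[y:y + ph, x:x + pw])
        mask[y:y + ph, x:x + pw] = alpha
        return out, mask

    def intensity_shift(img, rng):
        # Task 3: locally brighten or darken a region.
        out, mask = img.copy(), np.zeros_like(img)
        h, w = img.shape
        ph, pw = rng.integers(h // 8, h // 2), rng.integers(w // 8, w // 2)
        y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
        out[y:y + ph, x:x + pw] += rng.choice([-0.5, 0.5])
        mask[y:y + ph, x:x + pw] = 1.0
        return out, mask

    rng = np.random.default_rng(0)
    healthy = [rng.normal(0.5, 0.1, (64, 64)) for _ in range(8)]  # stand-in scans

    tasks = {
        "noise_patch": lambda im: noise_patch(im, rng),
        "foreign_patch": lambda im: foreign_patch(
            im, healthy[rng.integers(len(healthy))], rng),
        "intensity_shift": lambda im: intensity_shift(im, rng),
    }

    val_task = "intensity_shift"  # held out as a proxy for unseen real anomalies
    for name in (t for t in tasks if t != val_task):
        x, y = tasks[name](healthy[0])
        # ... train a pixel-wise anomaly-localisation model on (x, y) here ...
        print("train task:", name, x.shape, float(y.mean()))

    x_val, y_val = tasks[val_task](healthy[1])  # validate / calibrate here

Holding out a visually-distinct task is what provides the structured validation signal that the abstract argues is missing from prior self-supervised methods.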
Related papers
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to perform well only on similar data, while underperforming on real-world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z)
- A Generic Machine Learning Framework for Fully-Unsupervised Anomaly Detection with Contaminated Data [0.0]
We introduce a framework for a fully unsupervised refinement of contaminated training data for AD tasks.
The framework is generic and can be applied to any residual-based machine learning model.
We show its clear superiority over the naive approach of training with contaminated data without refinement; a generic sketch of this refinement idea appears after this list.
arXiv Detail & Related papers (2023-08-25T12:47:59Z)
- nnOOD: A Framework for Benchmarking Self-supervised Anomaly Localisation Methods [4.31513157813239]
nnOOD adapts nnU-Net to allow for comparison of self-supervised anomaly localisation methods.
We implement the current state-of-the-art tasks and evaluate them on a challenging X-ray dataset.
arXiv Detail & Related papers (2022-09-02T15:34:02Z)
- Generalized Multi-Task Learning from Substantially Unlabeled Multi-Source Medical Image Data [11.061381376559053]
MultiMix is a new multi-task learning model that jointly learns disease classification and anatomical segmentation in a semi-supervised manner.
Our experiments with varying quantities of multi-source labeled data in the training sets confirm the effectiveness of MultiMix.
arXiv Detail & Related papers (2021-10-25T18:09:19Z)
- Task-agnostic Continual Learning with Hybrid Probabilistic Models [75.01205414507243]
We propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification.
A normalising flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting.
We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST.
arXiv Detail & Related papers (2021-06-24T05:19:26Z)
- Meta-learning One-class Classifiers with Eigenvalue Solvers for Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z)
- MultiMix: Sparingly Supervised, Extreme Multitask Learning From Medical Images [13.690075845927606]
We propose a novel multitask learning model, namely MultiMix, which jointly learns disease classification and anatomical segmentation in a sparingly supervised manner.
Our experiments justify the effectiveness of our multitasking model for the classification of pneumonia and segmentation of lungs from chest X-ray images.
arXiv Detail & Related papers (2020-10-28T03:47:29Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Manifolds for Unsupervised Visual Anomaly Detection [79.22051549519989]
Unsupervised learning methods that need not encounter anomalies during training would be immensely useful.
We develop a novel hyperspherical Variational Auto-Encoder (VAE) via stereographic projections with a gyroplane layer.
We present state-of-the-art results on visual anomaly benchmarks in precision manufacturing and inspection, demonstrating real-world utility in industrial AI scenarios.
arXiv Detail & Related papers (2020-06-19T20:41:58Z)
- Partly Supervised Multitask Learning [19.64371980996412]
Experimental results on chest and spine X-ray datasets suggest that our S4MTL model significantly outperforms semi-supervised single-task, semi/fully-supervised multitask, and fully-supervised single-task models.
We hypothesize that our proposed model can be effective in tackling limited annotation problems for joint training, not only in medical imaging domains, but also for general-purpose vision tasks.
arXiv Detail & Related papers (2020-05-05T22:42:12Z)
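As a worked illustration of the residual-based refinement idea summarised in the "Generic Machine Learning Framework" entry above: fit any model that yields per-sample residuals, drop the highest-residual samples as likely contamination, and repeat. The rank-k PCA scorer, the drop fraction, and the toy data below are assumptions made for exposition, not the cited paper's actual framework.

    import numpy as np

    def pca_residuals(X, k=5):
        # Per-sample reconstruction error under a rank-k PCA model.
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        recon = Xc @ Vt[:k].T @ Vt[:k]
        return np.linalg.norm(Xc - recon, axis=1)

    def refine(X, rounds=3, drop_frac=0.05):
        # Iteratively drop the highest-residual (likely contaminated) samples.
        for _ in range(rounds):
            r = pca_residuals(X)
            X = X[r <= np.quantile(r, 1.0 - drop_frac)]
        return X

    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 32))                  # normal data lives near a
    normal = (rng.normal(size=(500, 5)) @ W
              + 0.1 * rng.normal(size=(500, 32)))  # 5-dim subspace plus noise
    outliers = 3.0 * rng.normal(size=(25, 32))     # off-manifold contamination
    X = np.vstack([normal, outliers])
    print(len(X), "->", len(refine(X)))            # most contamination removed

The same loop works with any scorer that yields per-sample residuals, e.g. an auto-encoder's reconstruction error, which is the sense in which such a framework is generic.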
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.