About Explicit Variance Minimization: Training Neural Networks for
Medical Imaging With Limited Data Annotations
- URL: http://arxiv.org/abs/2105.14117v1
- Date: Fri, 28 May 2021 21:34:04 GMT
- Title: About Explicit Variance Minimization: Training Neural Networks for
Medical Imaging With Limited Data Annotations
- Authors: Dmitrii Shubin, Danny Eytan, Sebastian D. Goodfellow
- Abstract summary: The Variance Aware Training (VAT) method introduces the variance error into the model loss function, enabling the variance to be minimized explicitly.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
- Score: 2.3204178451683264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning methods for computer vision have demonstrated the
effectiveness of pre-training feature representations, resulting in
well-generalizing Deep Neural Networks, even if the annotated data are limited.
However, representation learning techniques require a significant amount of
time for model training, with most of that time spent on precise
hyper-parameter optimization and selection of augmentation techniques. We
hypothesized that if the annotated dataset has enough morphological diversity
to capture that of the general population, as is common in medical imaging due
to conserved similarities of tissue morphologies, the variance error of the
trained model is
the prevalent component of the Bias-Variance Trade-off. We propose the Variance
Aware Training (VAT) method that exploits this property by introducing the
variance error into the model loss function, i.e., enabling minimizing the
variance explicitly. Additionally, we provide the theoretical formulation and
proof of the proposed method to aid in interpreting the approach. Our method
requires selecting only one hyper-parameter and was able to match or improve
the state-of-the-art performance of self-supervised methods while achieving an
order of magnitude reduction in the GPU training time. We validated VAT on
three medical imaging datasets from diverse domains and various learning
objectives. These included a Magnetic Resonance Imaging (MRI) dataset for
semantic segmentation of the heart (MICCAI 2017 ACDC challenge), a fundus
photography dataset for ordinal regression of diabetic retinopathy progression
(Kaggle 2019 APTOS Blindness Detection challenge), and classification of
histopathologic scans of lymph node sections (PatchCamelyon dataset).
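The central idea, making the variance error an explicit term of the training loss, can be illustrated with a minimal sketch. This is not the paper's exact VAT objective: the view-based variance estimate, the `predict`/`base_loss` interfaces, and the hyper-parameter name `lam` are all assumptions for illustration.

```python
import numpy as np

def variance_aware_loss(predict, x_views, y, base_loss, lam=0.1):
    """Hypothetical variance-penalized loss (not the paper's exact
    formulation). `predict` is the model forward function, `x_views` is
    a list of V augmented versions of the same batch, `y` holds the
    targets, and `lam` weighs the variance penalty (playing the role of
    the single hyper-parameter mentioned in the abstract)."""
    preds = np.stack([predict(x) for x in x_views])         # (V, B, C)
    supervised = np.mean([base_loss(p, y) for p in preds])  # avg task loss
    variance = preds.var(axis=0).mean()                     # spread across views
    return supervised + lam * variance
```

Because the penalty grows with the disagreement between predictions on augmented views of the same input, gradient descent on this loss drives the variance term down alongside the task loss.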
Related papers
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Guided Reconstruction with Conditioned Diffusion Models for Unsupervised Anomaly Detection in Brain MRIs [35.46541584018842]
Unsupervised Anomaly Detection (UAD) aims to identify any anomaly as an outlier from a healthy training distribution.
Generative models are used to learn the reconstruction of healthy brain anatomy for a given input image.
We propose conditioning the denoising process of diffusion models with additional information derived from a latent representation of the input image.
arXiv Detail & Related papers (2023-12-07T11:03:42Z)
- Connecting the Dots: Graph Neural Network Powered Ensemble and Classification of Medical Images [0.0]
Deep learning for medical imaging is limited due to the requirement for large amounts of training data.
We employ the Image Foresting Transform to optimally segment images into superpixels.
These superpixels are subsequently transformed into graph-structured data, enabling the proficient extraction of features and modeling of relationships.
arXiv Detail & Related papers (2023-11-13T13:20:54Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation [0.16490701092527607]
We propose an alternative deep learning training strategy based on self-supervised pretraining on unlabeled MRI scans.
Our pretraining approach first, randomly applies different distortions to random areas of unlabeled images and then predicts the type of distortions and loss of information.
The effectiveness of the proposed method for segmentation tasks in different pre-training and fine-tuning scenarios is evaluated.
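The pretext task described above, distort a region and then predict what was done to it, can be sketched as follows. This is a hypothetical illustration: the set of distortion types, the region sampling, and all parameters are assumptions, not the paper's implementation.

```python
import numpy as np

# Assumed distortion vocabulary, for illustration only.
DISTORTIONS = ["blur", "noise", "zero"]

def make_pretext_sample(img, rng):
    """Distort a random square region of a 2-D `img` and return the
    distorted image together with the distortion-type label, which
    serves as the self-supervised prediction target."""
    img = img.astype(float).copy()
    h, w = img.shape
    size = int(rng.integers(4, max(5, h // 2)))
    y0 = int(rng.integers(0, h - size + 1))
    x0 = int(rng.integers(0, w - size + 1))
    label = int(rng.integers(len(DISTORTIONS)))
    patch = img[y0:y0 + size, x0:x0 + size]
    if DISTORTIONS[label] == "blur":
        patch[:] = patch.mean()           # crude blur: flatten the region
    elif DISTORTIONS[label] == "noise":
        patch += rng.normal(0.0, 0.5, patch.shape)
    else:
        patch[:] = 0.0                    # simulate loss of information
    return img, label
```

A classifier pretrained to recover `label` from the distorted image learns features that transfer to the downstream segmentation task.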
arXiv Detail & Related papers (2022-07-17T13:28:52Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Medical Image Harmonization Using Deep Learning Based Canonical Mapping: Toward Robust and Generalizable Learning in Imaging [4.396671464565882]
We propose a new paradigm in which data from a diverse range of acquisition conditions are "harmonized" to a common reference domain.
We test this approach on two example problems, namely MRI-based brain age prediction and classification of schizophrenia.
arXiv Detail & Related papers (2020-10-11T22:01:37Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.