End-to-end Multiple Instance Learning with Gradient Accumulation
- URL: http://arxiv.org/abs/2203.03981v1
- Date: Tue, 8 Mar 2022 10:14:51 GMT
- Title: End-to-end Multiple Instance Learning with Gradient Accumulation
- Authors: Axel Andersson, Nadezhda Koriakina, Nataša Sladoje and Joakim Lindblad
- Abstract summary: We propose a training strategy that enables end-to-end training of ABMIL models without being limited by GPU memory.
We conduct experiments on both QMNIST and Imagenette to investigate the performance and training time.
This memory-efficient approach, although slower, reaches performance indistinguishable from the memory-expensive baseline.
- Score: 2.2612425542955292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Being able to learn on weakly labeled data, and provide interpretability, are
two of the main reasons why attention-based deep multiple instance learning
(ABMIL) methods have become particularly popular for classification of
histopathological images. Such image data usually come in the form of
gigapixel-sized whole-slide-images (WSI) that are cropped into smaller patches
(instances). However, the sheer size of the data makes training of ABMIL models
challenging. All the instances from one WSI cannot be processed at once by
conventional GPUs. Existing solutions compromise training by relying on
pre-trained models, strategic sampling or selection of instances, or
self-supervised learning. We propose a training strategy based on gradient
accumulation that enables direct end-to-end training of ABMIL models without
being limited by GPU memory. We conduct experiments on both QMNIST and
Imagenette to investigate the performance and training time, and compare with
the conventional memory-expensive baseline and a recent sampling-based approach.
This memory-efficient approach, although slower, reaches performance
indistinguishable from the memory-expensive baseline.
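The paper's code is not reproduced here, but the abstract's idea can be illustrated with a minimal PyTorch sketch of per-instance gradient accumulation for an ABMIL model. The module layout, chunk size, and function names below are illustrative assumptions, not the authors' implementation: instances are forwarded once without a graph to obtain the bag-level loss, and a second graph-building pass injects the stored gradients chunk by chunk.

```python
import torch
import torch.nn as nn

class ABMIL(nn.Module):
    """Illustrative attention-based MIL model: instance encoder,
    attention scorer, and bag-level classifier (assumed architecture)."""
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.attention = nn.Linear(feat_dim, 1)   # attention logit per instance
        self.classifier = nn.Linear(feat_dim, 1)  # bag-level logit

def train_bag(model, bag, label, optimizer, loss_fn, chunk=64):
    """One optimizer step on a single bag of shape (n_instances, in_dim),
    never holding the autograd graph for more than `chunk` instances."""
    optimizer.zero_grad()

    # Pass 1: graph-free forward over all instances; keep only the small
    # per-instance features and attention logits.
    feats, scores = [], []
    with torch.no_grad():
        for i in range(0, bag.shape[0], chunk):
            h = model.encoder(bag[i:i + chunk])
            feats.append(h)
            scores.append(model.attention(h))
    h_all = torch.cat(feats).requires_grad_(True)   # leaf: receives dL/dh
    a_all = torch.cat(scores).requires_grad_(True)  # leaf: receives dL/da

    # Cheap bag-level forward with a graph: attention pooling + classifier.
    w = torch.softmax(a_all, dim=0)
    z = (w * h_all).sum(dim=0)                      # bag representation
    loss = loss_fn(model.classifier(z), label)
    loss.backward()                                 # fills h_all.grad, a_all.grad

    # Pass 2: recompute each chunk WITH a graph and inject the stored
    # gradients, accumulating encoder/attention gradients chunk by chunk.
    for i in range(0, bag.shape[0], chunk):
        h = model.encoder(bag[i:i + chunk])
        a = model.attention(h)
        torch.autograd.backward(
            [h, a], [h_all.grad[i:i + chunk], a_all.grad[i:i + chunk]])

    optimizer.step()
    return loss.item()
```

The second pass roughly adds the cost of one extra forward over the bag while keeping graph memory constant in the number of instances, consistent with the abstract's observation that the approach is slower but matches the baseline's accuracy.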
Related papers
- Time-, Memory- and Parameter-Efficient Visual Adaptation [75.28557015773217]
We propose an adaptation method which does not backpropagate gradients through the backbone.
We achieve this by designing a lightweight network in parallel that operates on features from the frozen, pretrained backbone.
arXiv Detail & Related papers (2024-02-05T10:55:47Z)
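As a concrete illustration of the recipe in the summary above (the class, dimensions, and head design are assumptions, and this simplified variant puts a head on final features rather than the paper's parallel network over intermediate features), the key property is that no backbone graph is ever built:

```python
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Small trainable head on features from a frozen backbone.
    No gradients flow through, and no memory is spent on, the backbone graph."""
    def __init__(self, backbone, feat_dim, num_classes, hidden=256):
        super().__init__()
        self.backbone = backbone.eval()          # frozen, inference-only
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.adapter = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.GELU(), nn.Linear(hidden, num_classes))

    def forward(self, x):
        with torch.no_grad():                    # backbone graph is never built
            f = self.backbone(x)
        return self.adapter(f)                   # only this part is trained
```

An optimizer over `model.adapter.parameters()` then trains only the lightweight network, which is what makes the method time-, memory-, and parameter-efficient.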
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence to demonstrate that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
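The summary only names the projected-gradient idea; as a generic gradient-projection sketch (not the paper's exact PGU algorithm), one can remove from the unlearning update the component that would interfere with gradients on the retained data:

```python
import torch

def projected_unlearning_step(params, forget_loss, retain_loss, lr=1e-3):
    """Generic sketch of projected-gradient unlearning (illustrative only):
    ascend the forget loss, but first project out the component of the update
    that lies along the retained data's gradient direction."""
    g_f = torch.autograd.grad(forget_loss, params)   # forgetting direction
    g_r = torch.autograd.grad(retain_loss, params)   # direction to protect
    flat_f = torch.cat([g.flatten() for g in g_f])
    flat_r = torch.cat([g.flatten() for g in g_r])
    # g_f <- g_f - proj_{g_r}(g_f), so the step is orthogonal to retain gradients.
    coef = (flat_f @ flat_r) / (flat_r @ flat_r).clamp_min(1e-12)
    flat = flat_f - coef * flat_r
    with torch.no_grad():
        i = 0
        for p in params:
            n = p.numel()
            p.add_(flat[i:i + n].view_as(p), alpha=lr)  # ascent on forget loss
            i += n
```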
- A Simple and Efficient Baseline for Data Attribution on Images [107.12337511216228]
Current state-of-the-art approaches require a large ensemble of as many as 300,000 models to accurately attribute model predictions.
In this work, we focus on a minimalist baseline, utilizing the feature space of a backbone pretrained via self-supervised learning to perform data attribution.
Our method is model-agnostic and scales easily to large datasets.
arXiv Detail & Related papers (2023-11-03T17:29:46Z)
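A minimal sketch of such a feature-space baseline, assuming features have already been extracted with a self-supervised backbone (function and argument names are illustrative, not the paper's API):

```python
import torch
import torch.nn.functional as F

def attribute(query_feats, train_feats, k=5):
    """Score training examples by cosine similarity to each query in the
    embedding space of a self-supervised backbone; the top-k most similar
    training points serve as the attribution for that prediction."""
    q = F.normalize(query_feats, dim=-1)   # (B, D)
    t = F.normalize(train_feats, dim=-1)   # (N, D)
    sims = q @ t.T                         # (B, N) attribution scores
    return sims.topk(k, dim=-1)            # values and indices per query
```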
- Multiple Instance Learning Framework with Masked Hard Instance Mining for Whole Slide Image Classification [11.996318969699296]
A masked hard instance mining MIL framework (MHIM-MIL) is presented.
MHIM-MIL uses a Siamese structure (Teacher-Student) with a consistency constraint to explore potential hard instances.
Experimental results on the CAMELYON-16 and TCGA Lung Cancer datasets demonstrate that MHIM-MIL outperforms other latest methods in terms of performance and training cost.
arXiv Detail & Related papers (2023-07-28T01:40:04Z)
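A rough sketch of the teacher-student mining loop described above. The model interface (returning a bag logit and per-instance attention), the mask ratio, the MSE consistency term, and the EMA teacher update are assumptions, not the paper's exact design:

```python
import torch
import torch.nn.functional as F

def mhim_step(student, teacher, bag, label, mask_ratio=0.1):
    """The teacher's attention scores hide the most salient instances so the
    student must learn from harder ones; a consistency term ties the two
    bag-level predictions together."""
    with torch.no_grad():
        t_logit, t_attn = teacher(bag)             # bag logit, per-instance attention
        n_mask = max(1, int(mask_ratio * bag.shape[0]))
        drop = t_attn.topk(n_mask).indices         # most salient = easiest instances
    keep = torch.ones(bag.shape[0], dtype=torch.bool)
    keep[drop] = False
    s_logit, _ = student(bag[keep])                # student sees the hard remainder
    loss = F.binary_cross_entropy_with_logits(s_logit, label) \
         + F.mse_loss(s_logit, t_logit)            # consistency constraint
    return loss

# The teacher is typically an EMA copy of the student, e.g. per parameter pair:
#   t_p.data.mul_(m).add_(s_p.data, alpha=1 - m)
```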
- Exploring Visual Prompts for Whole Slide Image Classification with Multiple Instance Learning [25.124855361054763]
We present a novel, simple yet effective method for learning domain-specific knowledge transformation from pre-trained models to histopathology images.
Our approach entails using a prompt component to assist the pre-trained model in discerning differences between the pre-trained dataset and the target histopathology dataset.
arXiv Detail & Related papers (2023-03-23T09:23:52Z)
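One simple way to realize a "prompt component" on top of frozen patch features (illustrative dimensions and design, not the paper's method): a few learnable vectors are prepended to the pre-trained model's patch embeddings before the MIL aggregator consumes them.

```python
import torch
import torch.nn as nn

class PromptedFeatures(nn.Module):
    """Learnable prompt vectors prepended to frozen pre-trained patch
    embeddings, helping bridge the domain gap to histopathology data."""
    def __init__(self, feat_dim=768, n_prompts=8):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, feat_dim) * 0.02)

    def forward(self, patch_feats):                    # (num_patches, feat_dim)
        return torch.cat([self.prompts, patch_feats])  # fed to the MIL aggregator
```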
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- ReMix: A General and Efficient Framework for Multiple Instance Learning based Whole Slide Image Classification [14.78430890440035]
Whole slide image (WSI) classification often relies on weakly supervised multiple instance learning (MIL) methods to handle gigapixel resolution images and slide-level labels.
We propose ReMix, a general and efficient framework for MIL based WSI classification.
arXiv Detail & Related papers (2022-07-05T04:21:35Z)
- Memory Efficient Meta-Learning with Large Images [62.70515410249566]
Meta-learning approaches to few-shot classification are computationally efficient at test time, requiring just a few optimization steps or a single forward pass to learn a new task, but they remain highly memory-intensive to train.
This limitation arises because a task's entire support set, which can contain up to 1000 images, must be processed before an optimization step can be taken.
We propose LITE, a general and memory efficient episodic training scheme that enables meta-training on large tasks composed of large images on a single GPU.
arXiv Detail & Related papers (2021-07-02T14:37:13Z)
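The summary above suggests the following kind of scheme; in this sketch (with assumed names and subset size), gradients are back-propagated through only a small random subset of support images while the rest still contribute to the task representation:

```python
import torch

def lite_embed_support(encoder, support, n_grad=32):
    """Embed a full support set while keeping the autograd graph for only
    n_grad randomly chosen images, bounding memory on a single GPU."""
    idx = torch.randperm(support.shape[0])
    grad_idx, nograd_idx = idx[:n_grad], idx[n_grad:]

    with torch.no_grad():
        f_nograd = encoder(support[nograd_idx])      # majority: no graph kept
    f_grad = encoder(support[grad_idx])              # small subset: graph kept

    feats = torch.empty(support.shape[0], f_grad.shape[1], device=f_grad.device)
    feats[nograd_idx] = f_nograd
    feats[grad_idx] = f_grad                         # autograd tracks this write
    return feats                                     # full task representation
```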
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model from manual annotations on only a few training images.
We name this task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- Remote Sensing Image Scene Classification with Self-Supervised Paradigm under Limited Labeled Samples [11.025191332244919]
We introduce a new self-supervised learning (SSL) mechanism to obtain a high-performance pre-trained model for remote sensing image (RSI) scene classification from large unlabeled data.
Experiments on three commonly used RSI scene classification datasets demonstrate that this new learning paradigm outperforms the traditional dominant ImageNet pre-trained model.
The insights distilled from our studies can help to foster the development of SSL in the remote sensing community.
arXiv Detail & Related papers (2020-10-02T09:27:19Z)
- Dynamic Sampling for Deep Metric Learning [7.010669841466896]
Deep metric learning maps visually similar images to nearby locations and pushes visually dissimilar images apart from each other in an embedding manifold.
A dynamic sampling strategy is proposed to organize the training pairs in an easy-to-hard order before feeding them into the network.
It allows the network to learn general boundaries between categories from the easy training pairs at its early stages, and to refine the details of the model using mainly the hard training samples at later stages.
arXiv Detail & Related papers (2020-04-24T09:47:23Z)
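As an illustration of such an easy-to-hard schedule (a generic sketch of the stated strategy; the growth rule and starting fraction are assumptions, not the paper's exact policy):

```python
import torch

def dynamic_pair_schedule(pair_losses, epoch, total_epochs):
    """Return indices of training pairs to use this epoch: early epochs admit
    only the lowest-loss (easy) pairs, and the admitted fraction grows
    linearly until all pairs, including the hard ones, are used."""
    frac = min(1.0, 0.2 + 0.8 * epoch / max(1, total_epochs - 1))
    k = max(1, int(frac * pair_losses.numel()))
    easy_first = torch.argsort(pair_losses)   # ascending loss = easiest first
    return easy_first[:k]
```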
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.