Automatic Data Augmentation Learning using Bilevel Optimization for
Histopathological Images
- URL: http://arxiv.org/abs/2307.11808v1
- Date: Fri, 21 Jul 2023 17:22:22 GMT
- Title: Automatic Data Augmentation Learning using Bilevel Optimization for
Histopathological Images
- Authors: Saypraseuth Mounsaveng, Issam Laradji, David Vázquez, Marco Pedersoli, and Ismail Ben Ayed
- Abstract summary: Data Augmentation (DA) can be used during training to generate additional samples by applying transformations to existing ones.
DA is not only dataset-specific but also requires domain knowledge.
We propose an automatic DA learning method to improve the model training.
- Score: 12.166446006133228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training a deep learning model to classify histopathological images is
challenging, because of the color and shape variability of the cells and
tissues, and the reduced amount of available data, which does not allow proper
learning of those variations. Variations can come from the image acquisition
process, for example, due to different cell staining protocols or tissue
deformation. To tackle this challenge, Data Augmentation (DA) can be used
during training to generate additional samples by applying transformations to
existing ones, to help the model become invariant to those color and shape
transformations. The problem with DA is that it is not only dataset-specific
but also requires domain knowledge, which is not always available. Without
this knowledge, selecting the right transformations can only be done using
heuristics or through a computationally demanding search. To address this, we
propose an automatic DA learning method. In this method, the DA parameters,
i.e., the transformation parameters needed to improve the model training, are
considered learnable and are learned automatically through a bilevel optimization
approach, made quick and efficient by truncated backpropagation. We
validated the method on six different datasets. Experimental results show that
our model can learn color and affine transformations that are more helpful to
train an image classifier than predefined DA transformations, which are also
more expensive, as they need to be selected before training by grid search on a
validation set. We also show that, similarly to a model trained with RandAugment,
our model has only a few method-specific hyperparameters to tune, but performs
better. This makes our model a good solution for learning the best DA parameters,
especially in the context of histopathological images, where heuristically
defining potentially useful transformations is not trivial.
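To make the bilevel scheme concrete, below is a minimal PyTorch-style sketch of the general idea rather than the authors' released implementation: the augmentation parameters (here assumed to be brightness and contrast magnitudes) are updated on the validation loss by backpropagating through a single unrolled SGD step on the training loss, i.e. 1-step truncated backpropagation. The tiny classifier, the inner learning rate, and the synthetic batches are placeholders.
```python
# Minimal sketch of bilevel data augmentation learning with 1-step truncated
# backpropagation (assumed details, not the paper's code).
import torch
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)

# Stand-in classifier for the histopathology model.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)

# Learnable augmentation parameters: magnitudes of a random color jitter.
log_brightness = torch.zeros(1, requires_grad=True)
log_contrast = torch.zeros(1, requires_grad=True)

def augment(x):
    """Differentiable brightness/contrast jitter with learnable magnitudes."""
    b = torch.tanh(log_brightness) * torch.randn(x.size(0), 1, 1, 1)
    c = 1.0 + torch.tanh(log_contrast) * torch.randn(x.size(0), 1, 1, 1)
    return (x * c + b).clamp(0.0, 1.0)

inner_lr = 0.1                                   # inner (model) SGD step size
aug_opt = torch.optim.Adam([log_brightness, log_contrast], lr=1e-2)

# Synthetic train/validation batches as placeholders for real data loaders.
x_tr, y_tr = torch.rand(16, 3, 32, 32), torch.randint(0, 2, (16,))
x_va, y_va = torch.rand(16, 3, 32, 32), torch.randint(0, 2, (16,))

for step in range(100):
    # Detach current weights: this is the truncation point of backpropagation.
    params = {k: v.detach().clone().requires_grad_(True)
              for k, v in model.named_parameters()}

    # Inner problem: one SGD step on the augmented training batch, kept
    # differentiable so gradients can reach the augmentation parameters.
    logits = functional_call(model, params, (augment(x_tr),))
    train_loss = F.cross_entropy(logits, y_tr)
    grads = torch.autograd.grad(train_loss, list(params.values()),
                                create_graph=True)
    new_params = {k: p - inner_lr * g
                  for (k, p), g in zip(params.items(), grads)}

    # Outer problem: validation loss of the updated model, differentiated
    # with respect to the augmentation parameters only.
    val_loss = F.cross_entropy(functional_call(model, new_params, (x_va,)), y_va)
    aug_opt.zero_grad()
    val_loss.backward()
    aug_opt.step()

    # Commit the inner update to the model weights before the next iteration.
    with torch.no_grad():
        for k, p in model.named_parameters():
            p.copy_(new_params[k])
```
In the paper, the inner problem is the classifier training and the outer problem selects color and affine transformation parameters on a validation set; truncating the unrolled inner steps keeps the gradient computation through the bilevel problem cheap.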
Related papers
- Your Image is My Video: Reshaping the Receptive Field via Image-To-Video Differentiable AutoAugmentation and Fusion [35.88039888482076]
We introduce the first Differentiable Augmentation Search method (DAS) to generate variations of images that can be processed as videos.
DAS is extremely fast and flexible, allowing the search on very large search spaces in less than a GPU day.
We leverage DAS to guide the reshaping of the spatial receptive field by selecting task-dependent transformations.
arXiv Detail & Related papers (2024-03-22T13:27:57Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- DVPT: Dynamic Visual Prompt Tuning of Large Pre-trained Models for Medical Image Analysis [30.608225734194416]
We propose a dynamic visual prompt tuning method, named DVPT, for medical image analysis.
It can extract knowledge beneficial to downstream tasks from large models with only a few trainable parameters.
It can save up to 60% of the labeled data and 99% of the storage cost of ViT-B/16.
arXiv Detail & Related papers (2023-07-19T07:11:11Z)
- Exploring Visual Prompts for Whole Slide Image Classification with Multiple Instance Learning [25.124855361054763]
We present a novel, simple yet effective method for learning domain-specific knowledge transformation from pre-trained models to histopathology images.
Our approach entails using a prompt component to assist the pre-trained model in discerning differences between the pre-trained dataset and the target histopathology dataset.
arXiv Detail & Related papers (2023-03-23T09:23:52Z)
- Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z)
- Optimizing transformations for contrastive learning in a differentiable framework [4.828899860513713]
We propose a framework to find optimal transformations for contrastive learning using a differentiable transformation network.
Our method improves performance in the low annotated data regime, both in supervised accuracy and in convergence speed.
Experiments were performed on 34000 2D slices of brain Magnetic Resonance Images and 11200 chest X-ray images.
arXiv Detail & Related papers (2022-07-27T08:47:57Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% and even matches the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Existing methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals [92.60744099084157]
We propose differentiable data augmentation amenable to gradient-based learning.
We demonstrate the relevance of our approach on the clinically relevant sleep staging classification task.
arXiv Detail & Related papers (2021-06-25T15:28:48Z)
- Radon cumulative distribution transform subspace modeling for image classification [18.709734704950804]
We present a new supervised image classification method applicable to a broad class of image deformation models.
The method makes use of the previously described Radon Cumulative Distribution Transform (R-CDT) for image data.
In addition to test accuracy, we show improvements in terms of computational efficiency.
arXiv Detail & Related papers (2020-04-07T19:47:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.