Learning Data Augmentation with Online Bilevel Optimization for Image
Classification
- URL: http://arxiv.org/abs/2006.14699v2
- Date: Tue, 10 Nov 2020 16:11:57 GMT
- Title: Learning Data Augmentation with Online Bilevel Optimization for Image
Classification
- Authors: Saypraseuth Mounsaveng, Issam Laradji, Ismail Ben Ayed, David Vazquez,
Marco Pedersoli
- Abstract summary: We propose an efficient approach to automatically train a network that learns an effective distribution of transformations to improve its generalization.
We show that our joint training method produces an image classification accuracy comparable to or better than carefully hand-crafted data augmentation.
- Score: 14.488360021440448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is a key practice in machine learning for improving
generalization performance. However, finding the best data augmentation
hyperparameters requires domain knowledge or a computationally demanding
search. We address this issue by proposing an efficient approach to
automatically train a network that learns an effective distribution of
transformations to improve its generalization. Using bilevel optimization, we
directly optimize the data augmentation parameters using a validation set. This
framework can be used as a general solution to learn the optimal data
augmentation jointly with an end task model like a classifier. Results show
that our joint training method produces an image classification accuracy that
is comparable to or better than carefully hand-crafted data augmentation. Yet,
it does not need an expensive external validation loop on the data augmentation
hyperparameters.
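
In standard form, the problem described in the abstract is the bilevel program min_{\phi} L_{val}(\theta^*(\phi)) subject to \theta^*(\phi) = argmin_{\theta} L_{train}(\theta, \phi), where \phi are the data augmentation parameters and \theta the classifier weights. Solving the inner problem to convergence at every outer step is intractable, so online methods approximate \theta^*(\phi) with a single unrolled training step and alternate the two updates. Below is a minimal sketch of that idea, assuming PyTorch (>= 2.0, for torch.func.functional_call); the toy translation augmenter, the linear classifier, and the one-step unrolling are illustrative assumptions, not the paper's exact architecture or update rule.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class Augmenter(nn.Module):
    # Toy transformation family: random image translations whose spread
    # (a single learnable scale) plays the role of the augmentation policy.
    def __init__(self):
        super().__init__()
        self.log_std = nn.Parameter(torch.zeros(2))  # learnable noise scale

    def forward(self, x):  # x: (N, C, H, W)
        n = x.size(0)
        # Reparameterized sampling keeps the transformation differentiable
        # with respect to the augmentation parameters.
        shift = torch.randn(n, 2, device=x.device) * self.log_std.exp()
        theta = torch.zeros(n, 2, 3, device=x.device)
        theta[:, 0, 0] = 1.0
        theta[:, 1, 1] = 1.0
        theta[:, :, 2] = 0.1 * shift  # small offsets in normalized coordinates
        grid = F.affine_grid(theta, x.shape, align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

def one_step_lookahead(model, loss, lr):
    # Simulate one SGD step of the classifier while keeping the graph,
    # so gradients can flow back into the augmenter (online approximation
    # of the inner problem's solution).
    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return {n: p - lr * g for n, p, g in zip(names, params, grads)}

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # MNIST-sized toy
augmenter = Augmenter()
opt_model = torch.optim.SGD(model.parameters(), lr=0.1)
opt_aug = torch.optim.Adam(augmenter.parameters(), lr=1e-3)

def train_step(x_tr, y_tr, x_va, y_va):
    # Outer update: the validation loss after a simulated classifier step
    # is differentiated with respect to the augmentation parameters.
    opt_aug.zero_grad()
    tr_loss = F.cross_entropy(model(augmenter(x_tr)), y_tr)
    lookahead = one_step_lookahead(model, tr_loss, lr=0.1)
    va_loss = F.cross_entropy(functional_call(model, lookahead, (x_va,)), y_va)
    va_loss.backward()
    opt_aug.step()
    # Inner update: ordinary training of the classifier on augmented data;
    # detach so this step does not touch the augmenter's gradients.
    opt_model.zero_grad()
    F.cross_entropy(model(augmenter(x_tr).detach()), y_tr).backward()
    opt_model.step()

Each call to train_step consumes one training batch and one validation batch, so the augmentation parameters are tuned online during ordinary training rather than in an external validation loop over hyperparameters; the paper's method learns a richer distribution of transformations, but the gradient flow through the unrolled inner step is the same in spirit.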
Related papers
- Domain Generalization by Rejecting Extreme Augmentations [13.114457707388283]
We show that for out-of-domain and domain generalization settings, data augmentation can provide a conspicuous and robust improvement in performance.
We propose a simple training procedure: (i) use uniform sampling over standard data augmentation transformations; (ii) increase the strength of the transformations to account for the higher data variance expected when working out-of-domain; and (iii) devise a new reward function to reject extreme transformations that can harm the training.
arXiv Detail & Related papers (2023-10-10T14:46:22Z)
- Incorporating Supervised Domain Generalization into Data Augmentation [4.14360329494344]
We propose a method, contrastive semantic alignment (CSA) loss, to improve the robustness and training efficiency of data augmentation.
Experiments on the CIFAR-100 and CUB datasets show that the proposed method improves the robustness and training efficiency of typical data augmentations.
arXiv Detail & Related papers (2023-10-02T09:20:12Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods while using far fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures, such as symmetries in the data, are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- TeachAugment: Data Augmentation Optimization Using Teacher Knowledge [11.696069523681178]
We propose TeachAugment, a data augmentation optimization method based on an adversarial strategy.
We show that TeachAugment outperforms existing methods in experiments of image classification, semantic segmentation, and unsupervised representation learning tasks.
arXiv Detail & Related papers (2022-02-25T06:22:51Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Automatic tuning of hyper-parameters of reinforcement learning algorithms using Bayesian optimization with behavioral cloning [0.0]
In reinforcement learning (RL), the information content of data gathered by the learning agent depends on the setting of many hyper-parameters.
In this work, a novel approach to autonomous hyper-parameter setting using Bayesian optimization is proposed.
Experiments reveal promising results compared to other manual tweaking and optimization-based approaches.
arXiv Detail & Related papers (2021-12-15T13:10:44Z)
- Dynamic Data Augmentation with Gating Networks [5.251019642214251]
We propose a neural network that dynamically selects the best combination using a mutually beneficial gating network and a feature consistency loss.
In experiments, we demonstrate the effectiveness of the proposed method on the 12 largest time-series datasets from the 2018 UCR Time Series Archive.
arXiv Detail & Related papers (2021-11-05T04:24:51Z)
- CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals [92.60744099084157]
We propose differentiable data augmentation amenable to gradient-based learning.
We demonstrate the relevance of our approach on the clinically relevant sleep staging classification task.
arXiv Detail & Related papers (2021-06-25T15:28:48Z)