Dynamic Data Augmentation with Gating Networks
- URL: http://arxiv.org/abs/2111.03253v1
- Date: Fri, 5 Nov 2021 04:24:51 GMT
- Title: Dynamic Data Augmentation with Gating Networks
- Authors: Daisuke Oba, Shinnosuke Matsuo and Brian Kenji Iwana
- Abstract summary: We propose a neural network that dynamically selects the best combination of data augmentations using a mutually beneficial gating network and a feature consistency loss.
In experiments, we demonstrate the effectiveness of the proposed method on the 12 largest time-series datasets from the 2018 UCR Time Series Archive.
- Score: 5.251019642214251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is a technique to improve the generalization ability of
machine learning methods by increasing the size of the dataset. However, since
not every augmentation method is equally effective for every dataset, the best
method needs to be selected carefully. We propose a neural network that
dynamically selects the best combination of data augmentations using a mutually beneficial gating
network and a feature consistency loss. The gating network is able to control
how much of each data augmentation is used for the representation within the
network. The feature consistency loss, on the other hand, imposes the constraint
that augmented features from the same input should be similar. In
experiments, we demonstrate the effectiveness of the proposed method on the 12
largest time-series datasets from the 2018 UCR Time Series Archive and reveal the
relationships between the data augmentation methods through analysis of the
proposed method.
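As a minimal sketch of the idea in the abstract (not the authors' code): a gating network looks at the input and weights the features produced from several augmented views, while a consistency term keeps those per-view features close. The backbone, gating architecture, augmentation set, dimensions, and loss weighting below are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAugmentationNet(nn.Module):
    """Hypothetical stand-in for the paper's model: K augmentation branches,
    a gating network that weights them, and a shared encoder."""

    def __init__(self, augmentations, feat_dim=64, n_classes=10):
        super().__init__()
        self.augmentations = augmentations  # list of callables, series -> series
        k = len(augmentations)
        self.encoder = nn.Sequential(       # shared 1-D conv encoder (assumed)
            nn.Conv1d(1, 32, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.gate = nn.Sequential(          # gating net sees the raw series
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, k),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                   # x: (batch, 1, length)
        feats = torch.stack(
            [self.encoder(aug(x)) for aug in self.augmentations], dim=1
        )                                   # (batch, K, feat_dim)
        weights = F.softmax(self.gate(x), dim=-1)            # (batch, K)
        mixed = (weights.unsqueeze(-1) * feats).sum(dim=1)   # weighted mix
        # Feature consistency: views of the same input should stay close.
        consistency = ((feats - feats.mean(dim=1, keepdim=True)) ** 2).mean()
        return self.classifier(mixed), consistency

# Assumed training objective: cross_entropy(logits, y) + lam * consistency,
# where lam is an assumed trade-off weight not stated in the abstract.
```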
Related papers
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources.
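A hedged sketch of the distribution-matching idea: synthetic examples are optimized so that their mean embedding matches that of real batches. The embedding network, shapes, and learning rate below are placeholders, not the paper's configuration.

```python
import torch

def distribution_matching_step(real_batch, synthetic, optimizer, embed):
    """One condensation step: pull the synthetic set's mean embedding
    toward the real batch's mean embedding (embed is typically a
    randomly re-initialized network each step)."""
    with torch.no_grad():
        target = embed(real_batch).mean(dim=0)
    loss = ((embed(synthetic).mean(dim=0) - target) ** 2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# The condensed images themselves are the free parameters, e.g.:
# synthetic = torch.randn(50, 1, 28, 28, requires_grad=True)
# optimizer = torch.optim.SGD([synthetic], lr=0.1)
```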
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Dynamic Data Augmentation via MCTS for Prostate MRI Segmentation [19.780410411548935]
We present Dynamic Data Augmentation (DDAug), which is efficient and adds negligible computational cost.
DDAug builds a hierarchical tree structure to represent various augmentations.
Our method outperforms the current state-of-the-art data augmentation strategies.
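DDAug's full Monte-Carlo tree search over a hierarchy of augmentations is more involved than fits here; the toy sketch below uses a one-level UCB1 bandit over augmentation choices to convey the explore/exploit selection step. The node names and the reward signal are stand-ins.

```python
import math
import random

class AugNode:
    def __init__(self, name):
        self.name, self.visits, self.value = name, 0, 0.0

def select(nodes, total_visits, c=1.4):
    """UCB1: favor augmentations that scored well or were rarely tried."""
    def ucb(n):
        if n.visits == 0:
            return float("inf")
        return n.value / n.visits + c * math.sqrt(math.log(total_visits) / n.visits)
    return max(nodes, key=ucb)

nodes = [AugNode(n) for n in ["flip", "rotate", "elastic", "gamma"]]
for t in range(1, 101):
    node = select(nodes, t)
    reward = random.random()   # stand-in for a validation Dice score
    node.visits += 1
    node.value += reward
print(max(nodes, key=lambda n: n.visits).name)   # most-trusted augmentation
```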
arXiv Detail & Related papers (2023-05-25T06:44:43Z)
- Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs [59.96266198512243]
We propose a decentralized and federated learning algorithm for tasks that are positively and negatively correlated.
Our algorithm uses gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other.
We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset.
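A rough sketch of the gradient-correlation step described above: tasks whose flattened gradients have positive cosine similarity get connected in the communication graph, and the rest are isolated. The threshold and shapes are assumptions, not the paper's values.

```python
import numpy as np

def build_communication_graph(task_grads, threshold=0.0):
    """task_grads: (n_tasks, n_params) flattened per-task gradients.
    Connect tasks whose gradients are positively correlated; isolate the rest."""
    g = task_grads / (np.linalg.norm(task_grads, axis=1, keepdims=True) + 1e-12)
    corr = g @ g.T                       # pairwise cosine similarity
    adj = (corr > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)           # no self-edges
    return adj

print(build_communication_graph(np.random.randn(4, 1000)))
```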
arXiv Detail & Related papers (2022-12-21T18:58:24Z)
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
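One common way to phrase "adapting the augmentation while solving the learning task" is a constrained objective with a dual variable; the primal-dual sketch below illustrates that pattern under assumed step sizes and constraint level, without claiming to match the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def primal_dual_step(model, opt, x, y, transform, lam, eps=0.05, dual_lr=0.01):
    """Minimize task loss subject to an invariance constraint, via a
    Lagrangian with multiplier lam (updated by dual ascent)."""
    logits, logits_aug = model(x), model(transform(x))
    task_loss = F.cross_entropy(logits, y)
    invariance = ((logits - logits_aug) ** 2).mean()   # constraint slack
    loss = task_loss + lam * (invariance - eps)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual ascent: tighten the multiplier while the constraint is violated.
    return max(0.0, lam + dual_lr * (invariance.item() - eps))
```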
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- Infinite Recommendation Networks: A Data-Centric Approach [8.044430277912936]
We leverage the Neural Tangent Kernel to train infinitely-wide neural networks to devise $\infty$-AE: an autoencoder with infinitely-wide bottleneck layers.
We also develop Distill-CF for synthesizing tiny, high-fidelity data summaries.
We observe 96-105% of $\infty$-AE's performance on the full dataset with as little as 0.1% of the original dataset size.
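Computing the exact Neural Tangent Kernel of an infinitely-wide autoencoder is beyond a short snippet; the toy below uses plain RBF kernel ridge regression to convey the same closed-form, no-SGD flavor of fitting an "autoencoding" map. Everything here (kernel, data, regularizer) is an illustrative stand-in.

```python
import numpy as np

def rbf(a, b, gamma=0.1):
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

X = np.random.randn(100, 20)                 # toy "interaction" vectors
K = rbf(X, X)
# Closed-form fit of an X -> X (autoencoding) map, no gradient descent.
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), X)
recon = K @ alpha
print(np.abs(recon - X).mean())              # reconstruction error
```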
arXiv Detail & Related papers (2022-06-03T00:34:13Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
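A toy, one-parameter illustration of using a Laplace-approximated log marginal likelihood as a differentiable model-selection objective; the paper's Kronecker-factored version scales this idea to real networks. The model, noise level, and prior below are assumptions.

```python
import math
import torch
from torch.autograd.functional import hessian

x = torch.linspace(-1, 1, 50)
y = 2.0 * x + 0.1 * torch.randn(50)

def neg_log_posterior(w):
    # Gaussian likelihood (sigma = 0.1) plus a standard-normal prior on w.
    return 0.5 * (((x * w - y) / 0.1) ** 2).sum() + 0.5 * w ** 2

w = torch.zeros((), requires_grad=True)      # single slope parameter
opt = torch.optim.Adam([w], lr=0.1)
for _ in range(200):                         # crude MAP fit
    opt.zero_grad()
    loss = neg_log_posterior(w)
    loss.backward()
    opt.step()

h = hessian(neg_log_posterior, w.detach())   # curvature at the MAP
log_marginal = (-neg_log_posterior(w.detach())
                + 0.5 * math.log(2 * math.pi) - 0.5 * torch.log(h))
print(float(log_marginal))   # higher = model (e.g. an augmentation) fits better
```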
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Fusion of CNNs and statistical indicators to improve image classification [65.51757376525798]
Convolutional Networks have dominated the field of computer vision for the last ten years.
The main strategy to prolong this trend relies on further scaling up networks in size.
We hypothesise that adding heterogeneous sources of information may be more cost-effective for a CNN than building a bigger network.
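A minimal sketch of that fusion hypothesis: concatenate a few cheap global statistics with the CNN's features before the classifier head. Layer sizes and the choice of statistics here are arbitrary, not the paper's.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Concatenate global image statistics with CNN features (sizes assumed)."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16 + 4, n_classes)   # 16 CNN dims + 4 statistics

    def forward(self, x):                          # x: (batch, 1, H, W)
        flat = x.flatten(1)
        stats = torch.stack([flat.mean(1), flat.std(1),
                             flat.min(1).values, flat.max(1).values], dim=1)
        return self.head(torch.cat([self.cnn(x), stats], dim=1))
```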
arXiv Detail & Related papers (2020-12-20T23:24:31Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
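A condensed sketch of FLAG-style augmentation: ascend on a node-feature perturbation with signed gradients while model gradients accumulate "for free" across the inner steps. For a real GNN the model would also take the graph structure; the step size and count are illustrative.

```python
import torch

def flag_step(model, feats, labels, loss_fn, opt, steps=3, alpha=0.01):
    """Gradient-ascend on a node-feature perturbation while model gradients
    accumulate across the inner steps ('free' adversarial training)."""
    delta = torch.zeros_like(feats, requires_grad=True)
    opt.zero_grad()
    for _ in range(steps):
        loss = loss_fn(model(feats + delta), labels) / steps
        loss.backward()                           # accumulates model grads too
        delta.data += alpha * delta.grad.sign()   # worst-case feature shift
        delta.grad.zero_()
    opt.step()                                    # one model update
    return loss.item()
```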
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Complex Wavelet SSIM based Image Data Augmentation [0.0]
We look at the MNIST handwritten digit dataset, an image dataset used for digit recognition.
We take a detailed look at one of the most popular augmentation techniques used for this dataset: elastic deformation.
We propose to use a similarity measure called Complex Wavelet Structural Similarity Index Measure (CWSSIM) to selectively filter out the irrelevant data.
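A sketch of the filtering idea; scikit-image's plain SSIM stands in for CW-SSIM, which would require a steerable-pyramid implementation not assumed here. The thresholds are invented for illustration.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def keep_plausible(original, augmented_set, lo=0.2, hi=0.9):
    """Drop augmented digits that are nearly identical to the original
    (no added diversity) or unrecognizably far from it (likely label noise)."""
    return [aug for aug in augmented_set
            if lo < ssim(original, aug, data_range=1.0) < hi]

img = np.random.rand(28, 28)
augs = [np.clip(img + 0.1 * np.random.randn(28, 28), 0, 1) for _ in range(5)]
print(len(keep_plausible(img, augs)))
```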
arXiv Detail & Related papers (2020-07-11T21:11:46Z)
- Learning Data Augmentation with Online Bilevel Optimization for Image Classification [14.488360021440448]
We propose an efficient approach to automatically train a network that learns an effective distribution of transformations to improve its generalization.
We show that our joint training method produces an image classification accuracy comparable to or better than carefully hand-crafted data augmentation.
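A compact sketch of one online bilevel step: take a differentiable inner SGD step on the augmented training loss, then update the augmentation parameters on the resulting validation loss through that step. It assumes PyTorch 2.x (`torch.func.functional_call`); the learnable noise augmentation and learning rates are placeholders, not the paper's transformation model.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def bilevel_step(model, aug_params, x_tr, y_tr, x_val, y_val,
                 inner_lr=0.1, outer_lr=0.01):
    """Inner: one differentiable SGD step on the augmented training loss.
    Outer: update augmentation parameters on the resulting validation loss."""
    scale = F.softplus(aug_params)                 # learnable noise magnitude
    x_aug = x_tr + scale * torch.randn_like(x_tr)
    params = dict(model.named_parameters())
    inner = F.cross_entropy(functional_call(model, params, (x_aug,)), y_tr)
    grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
    stepped = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}
    outer = F.cross_entropy(functional_call(model, stepped, (x_val,)), y_val)
    (g_aug,) = torch.autograd.grad(outer, aug_params)
    with torch.no_grad():
        aug_params -= outer_lr * g_aug             # aug_params requires grad
    return outer.item()
```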
arXiv Detail & Related papers (2020-06-25T21:01:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.