LA3: Efficient Label-Aware AutoAugment
- URL: http://arxiv.org/abs/2304.10310v1
- Date: Thu, 20 Apr 2023 13:42:18 GMT
- Title: LA3: Efficient Label-Aware AutoAugment
- Authors: Mingjun Zhao, Shan Lu, Zixuan Wang, Xiaoli Wang and Di Niu
- Abstract summary: We propose a novel two-stage data augmentation algorithm, named Label-Aware AutoAugment (LA3), which takes advantage of the label information.
LA3 consists of two learning stages, where in the first stage, individual augmentation methods are evaluated and ranked for each label.
In the second stage, a composite augmentation policy is constructed from a selection of effective and complementary augmentations, which produces a significant performance boost.
- Score: 23.705059658590436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated augmentation is an emerging and effective technique to search for
data augmentation policies to improve generalizability of deep neural network
training. Most existing work focuses on constructing a unified policy
applicable to all data samples in a given dataset, without considering sample
or class variations. In this paper, we propose a novel two-stage data
augmentation algorithm, named Label-Aware AutoAugment (LA3), which takes
advantage of the label information, and learns augmentation policies separately
for samples of different labels. LA3 consists of two learning stages, where in
the first stage, individual augmentation methods are evaluated and ranked for
each label via Bayesian Optimization aided by a neural predictor, which allows
us to identify effective augmentation techniques for each label under a low
search cost. In the second stage, a composite augmentation policy is
constructed from a selection of effective and complementary augmentations,
which produces a significant performance boost and can be easily deployed in
typical model training. Extensive experiments demonstrate that LA3
achieves excellent performance matching or surpassing existing methods on
CIFAR-10 and CIFAR-100, and achieves a new state-of-the-art ImageNet accuracy
of 79.97% on ResNet-50 among auto-augmentation methods, while maintaining a low
computational cost.
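The core idea of label-aware augmentation can be sketched in a few lines: instead of one global policy, each class label maps to its own ranked set of augmentations, and a transform is drawn from that per-label set at training time. The policy contents below are purely illustrative placeholders, not the rankings learned by LA3, and `augment` stubs the transforms as name tags to show only the label-aware lookup:

```python
import random

# Hypothetical per-label policies: each label maps to a list of
# (augmentation_name, magnitude) pairs, as stage 1 of a label-aware
# search would produce. Names and magnitudes are illustrative only.
LABEL_POLICIES = {
    "cat": [("rotate", 15), ("color_jitter", 0.4)],
    "dog": [("horizontal_flip", 1.0), ("cutout", 8)],
}

def augment(sample, label, policies, rng=random):
    """Apply one augmentation drawn from the label's own policy.

    The augmentation itself is stubbed: we return which transform was
    chosen, since the point is the per-label lookup, not image ops.
    """
    name, magnitude = rng.choice(policies[label])
    return {"sample": sample, "applied": name, "magnitude": magnitude}

# Samples with different labels draw from different policies.
out = augment("img_001", "cat", LABEL_POLICIES, rng=random.Random(0))
```

In a real training loop the chosen `(name, magnitude)` pair would be dispatched to an image transform library; the per-label dictionary is the only structural difference from a unified AutoAugment-style policy.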
Related papers
- Ali-AUG: Innovative Approaches to Labeled Data Augmentation using One-Step Diffusion Model [0.14999444543328289]
Ali-AUG is a novel single-step diffusion model for efficient labeled data augmentation in industrial applications.
Our method addresses the challenge of limited labeled data by generating synthetic, labeled images with precise feature insertion.
arXiv Detail & Related papers (2024-10-24T12:12:46Z)
- KECOR: Kernel Coding Rate Maximization for Active 3D Object Detection [48.66703222700795]
We resort to a novel kernel strategy to identify the most informative point clouds to acquire labels.
To accommodate both one-stage (i.e., SECOND) and two-stage detectors, we incorporate the classification entropy tangent to trade off between detection performance and the total number of bounding boxes selected for annotation.
Our results show that approximately 44% box-level annotation costs and 26% computational time are reduced compared to the state-of-the-art method.
arXiv Detail & Related papers (2023-07-16T04:27:03Z)
- Enhancing Label Sharing Efficiency in Complementary-Label Learning with Label Augmentation [92.4959898591397]
We analyze the implicit sharing of complementary labels on nearby instances during training.
We propose a novel technique that enhances the sharing efficiency via complementary-label augmentation.
Our results confirm that complementary-label augmentation can systematically improve empirical performance over state-of-the-art CLL models.
arXiv Detail & Related papers (2023-05-15T04:43:14Z)
- Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach [95.74102207187545]
We show that a prior-free autonomous data augmentation's objective can be derived from a representation learning principle.
We then propose a practical surrogate to the objective that can be efficiently optimized and integrated seamlessly into existing methods.
arXiv Detail & Related papers (2022-11-02T02:02:51Z)
- Boosting the Efficiency of Parametric Detection with Hierarchical Neural Networks [4.1410005218338695]
We propose Hierarchical Detection Network (HDN), a novel approach to efficient detection.
The network is trained using a novel loss function, which encodes simultaneously the goals of statistical accuracy and efficiency.
We show how training a three-layer HDN using a two-layer model can further boost both accuracy and efficiency.
arXiv Detail & Related papers (2022-07-23T19:23:00Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
- Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning [43.51182049644767]
Semi-supervised learning (SSL) has long been proved to be an effective technique to construct powerful models with limited labels.
Regularization-based methods which force the perturbed samples to have similar predictions with the original ones have attracted much attention.
We propose a novel contrastive loss to guide the embedding of the learned network to change linearly between samples.
arXiv Detail & Related papers (2022-02-24T06:00:05Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Improving Auto-Augment via Augmentation-Wise Weight Sharing [123.71986174280741]
A key component of automatic augmentation search is the evaluation process for a particular augmentation policy.
In this paper, we dive into the dynamics of augmented training of the model.
We design a powerful and efficient proxy task based on the Augmentation-Wise Weight Sharing (AWS) to form a fast yet accurate evaluation process.
arXiv Detail & Related papers (2020-09-30T15:23:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.