Enhancing Sample Utilization through Sample Adaptive Augmentation in
Semi-Supervised Learning
- URL: http://arxiv.org/abs/2309.03598v1
- Date: Thu, 7 Sep 2023 09:50:45 GMT
- Title: Enhancing Sample Utilization through Sample Adaptive Augmentation in
Semi-Supervised Learning
- Authors: Guan Gui, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, Yinghuan Shi
- Abstract summary: In semi-supervised learning, unlabeled samples can be utilized through augmentation and consistency regularization.
Existing SSL models overlook the characteristics of naive samples and simply apply the same learning strategy to all samples.
We propose Sample adaptive augmentation (SAA) to give attention to naive samples and augment them in a more diverse manner.
- Score: 47.677929366323596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In semi-supervised learning, unlabeled samples can be utilized through
augmentation and consistency regularization. However, we observed that certain
samples, even after undergoing strong augmentation, are still correctly
classified with high confidence, resulting in a loss close to zero. This
indicates that these samples have already been learned well and provide no
additional optimization benefit to the model. We refer to these samples as
"naive samples". Unfortunately, existing SSL models overlook the
characteristics of naive samples and simply apply the same learning strategy to
all samples.
To further optimize the SSL model, we emphasize the importance of giving
attention to naive samples and augmenting them in a more diverse manner. Sample
adaptive augmentation (SAA) is proposed for this purpose and consists of
two modules: 1) sample selection module; 2) sample augmentation module.
Specifically, the sample selection module picks out naive samples based on
historical training information at each epoch, then the naive samples will be
augmented in a more diverse manner in the sample augmentation module. Because
both modules are extremely easy to implement, SAA is simple and lightweight. We
add SAA on top of FixMatch and FlexMatch, and experiments demonstrate that SAA
can significantly improve the
models. For example, SAA helped improve the accuracy of FixMatch from 92.50% to
94.76% and that of FlexMatch from 95.01% to 95.31% on CIFAR-10 with 40 labels.
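The two modules above lend themselves to a compact implementation. Below is a minimal sketch, assuming the "historical training information" is an exponential moving average (EMA) of each unlabeled sample's loss and that a near-zero EMA flags a sample as naive; the class name, momentum, and threshold are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

class SampleAdaptiveAugmentation:
    """Minimal sketch of SAA's two modules (illustrative, not the authors' code)."""

    def __init__(self, num_unlabeled, momentum=0.9, naive_threshold=0.05):
        self.ema_loss = np.full(num_unlabeled, np.inf)  # inf = no history yet
        self.momentum = momentum
        self.naive_threshold = naive_threshold  # assumed cutoff for "loss close to zero"

    def update(self, indices, losses):
        # Sample selection module: refresh per-sample loss history each epoch.
        for i, loss in zip(indices, losses):
            if np.isinf(self.ema_loss[i]):
                self.ema_loss[i] = loss
            else:
                self.ema_loss[i] = (self.momentum * self.ema_loss[i]
                                    + (1.0 - self.momentum) * loss)

    def choose_augmentation(self, index, strong_aug, diverse_aug):
        # Sample augmentation module: naive samples get a more diverse policy.
        naive = self.ema_loss[index] < self.naive_threshold
        return diverse_aug if naive else strong_aug
```

In a FixMatch-style loop, `strong_aug` would be the usual strong augmentation and `diverse_aug` a wider policy, e.g., more RandAugment operations or larger magnitudes.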
Related papers
- Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation (arXiv, 2024-10-03) [51.127054971591924]
We introduce a new generative self-evaluation scheme designed to adaptively reduce the number of generated samples.
We demonstrate that 74% of the improvement from using 16 samples can be achieved with only 1.2 samples on average.
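As a rough illustration of the idea (not the paper's exact scheme), adaptive inference-time compute can be sketched as a loop that stops sampling once a self-evaluation signal predicts no further gain; `generate` and `predicted_gain` are hypothetical callables.

```python
def adaptive_sampling(generate, predicted_gain, max_samples=16, min_gain=0.05):
    """Draw samples until the model's self-evaluation predicts that one
    more sample is unlikely to improve on what we already have."""
    samples = [generate()]
    while len(samples) < max_samples and predicted_gain(samples) >= min_gain:
        samples.append(generate())
    return samples
```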
- Uncertainty Aware Learning for Language Model Alignment (arXiv, 2024-06-07) [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
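The summary above is concrete enough for a small sketch: cross-entropy with a per-sample label-smoothing value driven by uncertainty. The linear mapping from uncertainty to smoothing is an assumed, illustrative rule, not necessarily UAL's.

```python
import torch
import torch.nn.functional as F

def uncertainty_aware_ce(logits, targets, uncertainty, max_smooth=0.2):
    """Cross-entropy whose label-smoothing value grows with each sample's
    estimated uncertainty (assumed to lie in [0, 1])."""
    num_classes = logits.size(-1)
    smooth = (uncertainty.clamp(0.0, 1.0) * max_smooth).unsqueeze(-1)  # (B, 1)
    one_hot = F.one_hot(targets, num_classes).float()                  # (B, C)
    soft = one_hot * (1.0 - smooth) + smooth / num_classes
    return -(soft * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```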
- FSL-Rectifier: Rectify Outliers in Few-Shot Learning via Test-Time Augmentation (arXiv, 2024-02-28) [7.477118370563593]
Few-shot learning (FSL) commonly requires a model to identify images (queries) that belong to classes unseen during training.
We generate additional test-class samples by combining original samples with suitable train-class samples via a generative image combiner.
Averaging the features produced by an augmentor yields more typical representations.
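A hedged sketch of the test-time feature-averaging step, with `make_variant` standing in for the paper's generative image combiner (not reproduced here):

```python
import torch

def averaged_query_feature(encoder, query, make_variant, n_variants=4):
    """Embed the query plus a few generated variants and average the
    features to obtain a more typical representation."""
    with torch.no_grad():
        feats = [encoder(query)]
        feats += [encoder(make_variant(query)) for _ in range(n_variants)]
    return torch.stack(feats).mean(dim=0)
```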
- Rethinking Samples Selection for Contrastive Learning: Mining of Potential Samples (arXiv, 2023-11-01) [5.586563813796839]
Contrastive learning predicts whether two images belong to the same category by training a model to pull their feature representations close together or push them far apart.
We take both positive and negative samples into account and mine potential samples from two aspects.
Our method achieves 88.57%, 61.10%, and 36.69% top-1 accuracy on CIFAR10, CIFAR100, and TinyImagenet, respectively.
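For reference, the standard InfoNCE-style contrastive loss such methods build on is sketched below; the paper's contribution, mining potential positives and negatives, happens upstream of a loss like this.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.5):
    """anchor/positive: (B, D); negatives: (B, K, D). The positive sits at
    index 0 of the logits, so the target class is 0 for every row."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True)      # (B, 1)
    neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives)  # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```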
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning (arXiv, 2022-11-29) [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method outperforms state-of-the-art methods on retrieval performance by 3%-6%.
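A minimal sketch of the intra-class idea, assuming per-dimension feature standard deviation as the estimate of intra-class variation; IAA's actual estimator and neighbor correction are more involved.

```python
import torch

def synthesize_intra_class(features, labels, cls, scale=1.0):
    """Jitter a class's features within its estimated intra-class spread
    to produce synthetic samples for hard sample mining."""
    cls_feats = features[labels == cls]          # (N_c, D), needs N_c >= 2
    std = cls_feats.std(dim=0, keepdim=True)     # per-dimension variation
    noise = torch.randn_like(cls_feats) * std * scale
    return cls_feats + noise                     # synthetic feature samples
```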
- ReSmooth: Detecting and Utilizing OOD Samples when Training with Data Augmentation (arXiv, 2022-05-25) [57.38418881020046]
Recent data augmentation (DA) techniques aim for high diversity in augmented training samples.
A highly diverse augmentation strategy, however, usually introduces out-of-distribution (OOD) augmented samples.
We propose ReSmooth, a framework that firstly detects OOD samples in augmented samples and then leverages them.
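A hedged sketch of the detect-then-leverage step, assuming a simple loss threshold as the OOD detector and label smoothing as the way OOD samples are leveraged; ReSmooth's actual detector and treatment may differ.

```python
import torch
import torch.nn.functional as F

def resmooth_style_loss(logits, targets, sample_losses, ood_threshold, smooth=0.1):
    """Train in-distribution augmented samples normally and OOD ones with
    smoothed labels, using per-sample loss as a crude OOD signal."""
    ood = sample_losses > ood_threshold
    loss = 0.0
    if (~ood).any():
        loss = loss + F.cross_entropy(logits[~ood], targets[~ood])
    if ood.any():
        loss = loss + F.cross_entropy(logits[ood], targets[ood],
                                      label_smoothing=smooth)
    return loss
```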
- Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality (arXiv, 2022-02-11) [44.37533757879762]
We introduce Differentiable Diffusion Sampler Search (DDSS), a method that optimizes fast samplers for any pre-trained diffusion model.
We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models.
Our method is compatible with any pre-trained diffusion model without fine-tuning or re-training required.
- Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing (arXiv, 2021-12-16) [104.630875328668]
The Mixup scheme creates an augmented training sample by mixing a pair of samples.
We present a novel yet simple Mixup variant that captures the best of both worlds.
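For context, vanilla Mixup is sketched below; Saliency Grafting replaces the random blend with attribution-guided mixing and calibrated label mixing, which is not reproduced here.

```python
import torch

def mixup(x, y_onehot, alpha=1.0):
    """Blend a batch with a shuffled copy of itself; labels (one-hot,
    float) are mixed with the same coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    mixed_x = lam * x + (1.0 - lam) * x[perm]
    mixed_y = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return mixed_x, mixed_y
```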
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.