IPF-RDA: An Information-Preserving Framework for Robust Data Augmentation
- URL: http://arxiv.org/abs/2509.16678v1
- Date: Sat, 20 Sep 2025 13:06:45 GMT
- Title: IPF-RDA: An Information-Preserving Framework for Robust Data Augmentation
- Authors: Suorong Yang, Hongchao Yang, Suhan Guo, Furao Shen, Jian Zhao
- Abstract summary: We propose IPF-RDA, a novel information-preserving framework that enhances the robustness of data augmentation. IPF-RDA combines (i) a new class-discriminative information estimation algorithm that identifies the points most vulnerable to augmentation operations, together with their importance scores, and (ii) a new scheme that adaptively preserves critical information in augmented samples. We show that IPF-RDA consistently improves the performance of numerous commonly used state-of-the-art data augmentation methods.
- Score: 14.441315866302382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is widely used as an effective technique to enhance the generalization performance of deep models. However, data augmentation inevitably introduces distribution shifts and noise, which constrain its potential and degrade the performance of deep networks. To address this, we propose a novel information-preserving framework, IPF-RDA, that enhances the robustness of data augmentation. IPF-RDA combines (i) a new class-discriminative information estimation algorithm that identifies the points most vulnerable to data augmentation operations and their corresponding importance scores, and (ii) a new information-preserving scheme that preserves the critical information in augmented samples while adaptively maintaining the diversity of the augmented data. We divide data augmentation methods into three categories according to their operation types and integrate each category into the framework accordingly. Once integrated, data augmentation methods become more robust and their full potential is unleashed. Extensive experiments demonstrate that, despite its simplicity, IPF-RDA consistently improves the performance of numerous commonly used state-of-the-art data augmentation methods with popular deep models on a variety of datasets, including CIFAR-10, CIFAR-100, Tiny-ImageNet, CUHK03, Market1501, Oxford Flower, and MNIST, underscoring both its effectiveness and its scalability. The implementation is available at https://github.com/Jackbrocp/IPF-RDA.
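The abstract does not specify how the importance scores steer the augmentation. As a minimal sketch of the general idea, assume a gradient-based saliency map serves as the class-discriminative importance estimate and the augmentation is an erasing-style operation; the operation can then be biased away from the most discriminative pixels. Both function names and `model_grad_fn` below are hypothetical, not the paper's actual API:

```python
import numpy as np

def saliency_importance(image, model_grad_fn):
    """Per-pixel importance as gradient magnitude of the true-class score
    w.r.t. the input. `model_grad_fn` is a hypothetical callable returning
    an array with the same shape as `image` (H, W, C)."""
    grad = model_grad_fn(image)
    return np.abs(grad).sum(axis=-1)  # collapse channels -> (H, W)

def info_preserving_cutout(image, importance, mask=16, tries=10, rng=None):
    """Erasing-style augmentation steered away from discriminative content:
    sample several candidate windows and erase the one whose total
    importance is lowest, so class-critical pixels survive augmentation."""
    rng = rng or np.random.default_rng()
    h, w = importance.shape
    best_xy, best_score = None, np.inf
    for _ in range(tries):
        y = int(rng.integers(0, h - mask))
        x = int(rng.integers(0, w - mask))
        score = importance[y:y + mask, x:x + mask].sum()
        if score < best_score:
            best_xy, best_score = (y, x), score
    y, x = best_xy
    out = image.copy()
    out[y:y + mask, x:x + mask] = 0  # erase the least informative window
    return out
```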
Related papers
- AugmentGest: Can Random Data Cropping Augmentation Boost Gesture Recognition Performance? [49.64902130083662]
This paper proposes a comprehensive data augmentation framework that integrates geometric transformations, random variations, rotation, zooming, and intensity-based transformations.
The proposed augmentation strategy is evaluated on three models: multi-stream e2eET, FPPR point cloud-based hand gesture recognition (HGR), and DD-Network.
arXiv Detail & Related papers (2025-06-08T16:43:05Z)
- Effective Dual-Region Augmentation for Reduced Reliance on Large Amounts of Labeled Data [1.0901840476380924]
This paper introduces a novel dual-region augmentation approach designed to reduce reliance on large-scale labeled datasets.
Our method performs targeted data transformations by applying random noise perturbations to foreground objects.
By augmenting training data through structured transformations, our method enables model generalization across domains.
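A minimal sketch of the foreground perturbation step, assuming a binary foreground mask is already available (the abstract does not say how the mask is obtained):

```python
import numpy as np

def foreground_noise(image, fg_mask, sigma=0.05, rng=None):
    """Add Gaussian noise only where `fg_mask` is True, leaving the
    background untouched. `image` is assumed float in [0, 1] with shape
    (H, W, C); `fg_mask` is a (H, W) boolean array assumed to be given."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=image.shape)
    out = image + noise * fg_mask[..., None]  # broadcast mask over channels
    return np.clip(out, 0.0, 1.0)
```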
arXiv Detail & Related papers (2025-04-17T16:42:33Z)
- AdaAugment: A Tuning-Free and Adaptive Approach to Enhance Data Augmentation [12.697608744311122]
AdaAugment is a tuning-free, adaptive augmentation method for deep models.
It adapts augmentation magnitudes based on real-time feedback from the target network.
Experiments show it consistently outperforms other state-of-the-art DA methods.
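The abstract does not state the feedback rule. One deliberately simple scheme, sketched below as an assumption rather than AdaAugment's actual mechanism, is a controller that raises the augmentation magnitude while training loss falls and lowers it otherwise:

```python
class MagnitudeController:
    """Toy feedback controller: strengthen augmentation when the model is
    coping (loss falling), back off when it is struggling (loss rising)."""
    def __init__(self, magnitude=0.3, step=0.02, lo=0.0, hi=1.0):
        self.magnitude, self.step = magnitude, step
        self.lo, self.hi, self.prev_loss = lo, hi, None

    def update(self, loss):
        if self.prev_loss is not None:
            if loss < self.prev_loss:      # improving -> augment harder
                self.magnitude = min(self.hi, self.magnitude + self.step)
            else:                          # struggling -> ease off
                self.magnitude = max(self.lo, self.magnitude - self.step)
        self.prev_loss = loss
        return self.magnitude
```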
arXiv Detail & Related papers (2024-05-19T06:54:03Z)
- ADLDA: A Method to Reduce the Harm of Data Distribution Shift in Data Augmentation [11.887799310374174]
This study introduces a novel data augmentation technique, ADLDA, aimed at mitigating the negative impact of data distribution shifts.
Experimental results demonstrate that ADLDA significantly enhances model performance across multiple datasets.
arXiv Detail & Related papers (2024-05-11T03:20:35Z)
- Distribution-Aware Data Expansion with Diffusion Models [55.979857976023695]
We propose DistDiff, a training-free data expansion framework built on a distribution-aware diffusion model.
DistDiff consistently enhances accuracy across a diverse range of datasets compared to models trained solely on original data.
arXiv Detail & Related papers (2024-03-11T14:07:53Z)
- Local Magnification for Data and Feature Augmentation [53.04028225837681]
We propose an easy-to-implement and model-free data augmentation method called Local Magnification (LOMA).
LOMA generates additional training data by randomly magnifying a local area of the image.
Experiments show that our proposed LOMA, though straightforward, can be combined with standard data augmentation to significantly improve the performance on image classification and object detection.
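A minimal sketch of the idea, with the exact sampling scheme assumed rather than taken from the paper: pick a random window and zoom into its centre in place using nearest-neighbour resampling.

```python
import numpy as np

def local_magnification(image, zoom=1.5, patch=32, rng=None):
    """Sketch in the spirit of LOMA: replace a random window with a
    nearest-neighbour magnification of its own centre."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    y = int(rng.integers(0, h - patch))
    x = int(rng.integers(0, w - patch))
    # Source coordinates: a (patch / zoom)-sized window around the patch
    # centre, resampled onto the full patch grid (nearest neighbour).
    idx = ((np.arange(patch) - patch / 2) / zoom + patch / 2).astype(int)
    region = image[y:y + patch, x:x + patch]
    out = image.copy()
    out[y:y + patch, x:x + patch] = region[idx][:, idx]
    return out
```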
arXiv Detail & Related papers (2022-11-15T02:51:59Z)
- Mitigating Data Heterogeneity in Federated Learning with Data Augmentation [26.226057709504733]
Federated Learning (FL) is a framework that enables training a centralized model while securing user privacy by fusing local, decentralized models.
One major obstacle is data heterogeneity, i.e., each client having non-identically and independently distributed (non-IID) data.
Recent evidence suggests that data augmentation can yield equal or greater performance.
arXiv Detail & Related papers (2022-06-20T19:47:43Z)
- EPiDA: An Easy Plug-in Data Augmentation Framework for High Performance Text Classification [34.15923302216751]
We present EPiDA, an easy plug-in data augmentation framework that supports effective text classification.
EPiDA employs two mechanisms, relative entropy maximization (REM) and conditional entropy minimization (CEM), to control data generation.
EPiDA can support efficient and continuous data generation for effective classification training.
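A hedged sketch of how REM- and CEM-style criteria might jointly score candidate augmentations using a classifier's predictive distributions; this scoring rule is an illustrative guess, not EPiDA's exact formulation. Diversity is measured as the KL divergence from the prediction on the original sample, quality as low predictive entropy on the candidate.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete distribution."""
    return float(-np.sum(p * np.log(p + eps)))

def select_augmentations(p_orig, p_cands, k=2, alpha=1.0):
    """Rank candidates by REM-style diversity (KL from the original
    prediction, maximized) traded off against CEM-style quality
    (predictive entropy, minimized); keep the top-k indices."""
    scores = [kl(p, p_orig) - alpha * entropy(p) for p in p_cands]
    return list(np.argsort(scores)[::-1][:k])
```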
arXiv Detail & Related papers (2022-04-24T06:53:48Z) - Virtual Data Augmentation: A Robust and General Framework for
Fine-tuning Pre-trained Models [51.46732511844122]
Powerful pre-trained language models (PLMs) can be fooled by small perturbations or intentional attacks.
We present Virtual Data Augmentation (VDA), a general framework for robustly fine-tuning PLMs.
Our approach is able to improve the robustness of PLMs and alleviate the performance degradation under adversarial attacks.
arXiv Detail & Related papers (2021-09-13T09:15:28Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to improve the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
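A minimal sketch of adversarial augmentation in feature space, assuming an FGSM-style step on intermediate embeddings (the paper's exact procedure may differ):

```python
import torch
import torch.nn.functional as F

def adversarial_feature_augment(features, head, labels, eps=0.1):
    """FGSM-style perturbation of intermediate embeddings: one gradient
    ascent step on the classification loss, returned as an extra
    training view of the same batch. `head` is any classifier module
    mapping features to logits."""
    feats = features.detach().requires_grad_(True)
    loss = F.cross_entropy(head(feats), labels)
    grad, = torch.autograd.grad(loss, feats)
    return (feats + eps * grad.sign()).detach()
```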
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
- CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding [67.61357003974153]
We propose a novel data augmentation framework dubbed CoDA.
CoDA synthesizes diverse and informative augmented examples by integrating multiple transformations organically.
A contrastive regularization objective is introduced to capture the global relationship among all the data samples.
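A standard InfoNCE-style contrastive regularizer, shown here as an illustrative stand-in for CoDA's objective (the assumed form, not the paper's exact loss): each sample's embedding should be closest to its own augmented view within the batch.

```python
import torch
import torch.nn.functional as F

def contrastive_regularizer(z_orig, z_aug, temperature=0.1):
    """InfoNCE-style objective: match each original embedding to its own
    augmented view against all other views in the batch."""
    z1 = F.normalize(z_orig, dim=1)
    z2 = F.normalize(z_aug, dim=1)
    logits = z1 @ z2.t() / temperature              # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```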
arXiv Detail & Related papers (2020-10-16T23:57:03Z)