RealPatch: A Statistical Matching Framework for Model Patching with Real Samples
- URL: http://arxiv.org/abs/2208.02192v1
- Date: Wed, 3 Aug 2022 16:22:30 GMT
- Title: RealPatch: A Statistical Matching Framework for Model Patching with Real Samples
- Authors: Sara Romiti, Christopher Inskip, Viktoriia Sharmanska, Novi Quadrianto
- Abstract summary: RealPatch is a framework for simpler, faster, and more data-efficient data augmentation based on statistical matching.
We show that RealPatch can successfully eliminate dataset leakage while reducing model leakage and maintaining high utility.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning classifiers are typically trained to minimise the average
error across a dataset. Unfortunately, in practice, this process often exploits
spurious correlations caused by subgroup imbalance within the training data,
resulting in high average performance but highly variable performance across
subgroups. Recent work to address this problem proposes model patching with
CAMEL. This previous approach uses generative adversarial networks to perform
intra-class inter-subgroup data augmentations, requiring (a) the training of a
number of computationally expensive models and (b) sufficient quality of the
model's synthetic outputs for the given domain. In this work, we propose
RealPatch, a framework for simpler, faster, and more data-efficient data
augmentation based on statistical matching. Our framework performs model
patching by augmenting a dataset with real samples, mitigating the need to
train generative models for the target task. We demonstrate the effectiveness
of RealPatch on three benchmark datasets, CelebA, Waterbirds and a subset of
iWildCam, showing improvements in worst-case subgroup performance and in
subgroup performance gap in binary classification. Furthermore, we conduct
experiments with the imSitu dataset with 211 classes, a setting where
generative model-based patching such as CAMEL is impractical. We show that
RealPatch can successfully eliminate dataset leakage while reducing model
leakage and maintaining high utility. The code for RealPatch can be found at
https://github.com/wearepal/RealPatch.
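The statistical matching at the heart of RealPatch can be illustrated with propensity-score nearest-neighbour matching: estimate each sample's probability of belonging to a subgroup, then pair it with the closest-scoring real sample from the other subgroup. The sketch below is a simplified illustration of that idea; the function name and the 1-NN-with-replacement strategy are assumptions for exposition, not RealPatch's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_across_subgroups(features, group):
    """Pair each sample with its nearest propensity-score neighbour
    from the *other* subgroup (1-NN matching with replacement)."""
    # Propensity score: estimated P(group = 1 | features).
    model = LogisticRegression(max_iter=1000).fit(features, group)
    scores = model.predict_proba(features)[:, 1]

    matches = np.empty(len(group), dtype=int)
    for g in (0, 1):
        src = np.flatnonzero(group == g)   # samples in subgroup g
        tgt = np.flatnonzero(group != g)   # candidates in the other subgroup
        # Absolute propensity-score distance between every source/candidate pair.
        diff = np.abs(scores[src][:, None] - scores[tgt][None, :])
        matches[src] = tgt[diff.argmin(axis=1)]
    return matches
```

The matched pairs can then be used to rebalance the training set with real samples, avoiding any generative model; the real pipeline additionally involves calibration and reweighting steps not shown here.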
Related papers
- Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z)
- Feedback-guided Data Synthesis for Imbalanced Classification [10.836265321046561]
We introduce a framework for augmenting static datasets with useful synthetic samples.
We find that the samples must be close to the support of the real data of the task at hand, and be sufficiently diverse.
On ImageNet-LT, we achieve state-of-the-art results, with over 4 percent improvement on underrepresented classes.
arXiv Detail & Related papers (2023-09-29T21:47:57Z)
- SPRINT: A Unified Toolkit for Evaluating and Demystifying Zero-shot Neural Sparse Retrieval [92.27387459751309]
We provide SPRINT, a unified Python toolkit for evaluating neural sparse retrieval.
We establish strong and reproducible zero-shot sparse retrieval baselines on the widely used BEIR benchmark.
We show that SPLADEv2 produces sparse representations with a majority of tokens outside of the original query and document.
arXiv Detail & Related papers (2023-07-19T22:48:02Z)
- Single-Stage Visual Relationship Learning using Conditional Queries [60.90880759475021]
TraCQ is a new formulation for scene graph generation that avoids the multi-task learning problem and the entity pair distribution.
We employ a DETR-based encoder-decoder with conditional queries to significantly reduce the entity label space as well.
Experimental results show that TraCQ not only outperforms existing single-stage scene graph generation methods but also beats many state-of-the-art two-stage methods on the Visual Genome dataset.
arXiv Detail & Related papers (2023-06-09T06:02:01Z)
- Impact of PolSAR pre-processing and balancing methods on complex-valued neural networks segmentation tasks [9.6556424340252]
We investigate the semantic segmentation of Polarimetric Synthetic Aperture Radar (PolSAR) data using Complex-Valued Neural Networks (CVNNs).
We exhaustively compare both methods across six model architectures: three complex-valued and their respective real-equivalent models.
We propose two methods for reducing this gap and report results for all input representations, models, and dataset pre-processing methods.
arXiv Detail & Related papers (2022-10-28T12:49:43Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled examples.
We show that NPC-LV outperforms supervised methods on all three datasets for image classification in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when the bias lies only in the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
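The group DRO objective referenced here minimises the worst-group loss rather than the average loss, typically via exponentiated-gradient reweighting of the per-group losses. A minimal sketch of that update (illustrative only, not this paper's code):

```python
import numpy as np

def group_dro_step(group_losses, q, eta=0.1):
    """One exponentiated-gradient update of the group weights q,
    followed by the reweighted (robust) loss.

    group_losses : per-group average losses for the current batch
    q            : current probability weights over groups
    eta          : step size for the group-weight update
    """
    q = q * np.exp(eta * group_losses)   # up-weight worse-off groups
    q = q / q.sum()                      # renormalise to a distribution
    robust_loss = float(q @ group_losses)
    return robust_loss, q
```

Because badly-performing groups receive exponentially growing weight, the model is pushed toward equalising subgroup performance; the failure mode noted above arises when the annotated groups do not line up with the actual spurious correlations.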
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
- Machine Learning Techniques to Construct Patched Analog Ensembles for Data Assimilation [0.0]
We study general and variational autoencoders for the machine learning component of cAnEnOI.
We propose using patching schemes to divide the global spatial domain into digestible chunks.
Testing this new algorithm on a 1D toy model, we find that larger patch sizes make it harder to train an accurate generative model.
arXiv Detail & Related papers (2021-02-27T20:47:27Z)
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation [50.35010342284508]
We introduce model patching, a framework for improving robustness of machine learning models.
Model patching encourages the model to be invariant to subgroup differences, and focus on class information shared by subgroups.
We instantiate model patching with CAMEL, which (1) uses a CycleGAN to learn the intra-class, inter-subgroup augmentations, and (2) balances subgroup performance using a theoretically-motivated consistency regularizer.
We demonstrate CAMEL's effectiveness on 3 benchmark datasets, with reductions in robust error up to 33% relative to the best baseline.
arXiv Detail & Related papers (2020-08-15T20:01:23Z)
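A consistency regulariser of the kind CAMEL uses can be sketched as a divergence penalty between the model's predictions on a sample and on its cross-subgroup augmentation. The helper below uses a symmetric KL as a generic stand-in; CAMEL's exact regulariser differs in its details.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_penalty(logits_orig, logits_aug):
    """Symmetric KL divergence between predictions on a sample and on
    its cross-subgroup augmentation, averaged over the batch.
    Small values mean the model is invariant to the subgroup change."""
    p, q = softmax(logits_orig), softmax(logits_aug)
    kl_pq = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    kl_qp = np.sum(q * (np.log(q) - np.log(p)), axis=-1)
    return float(np.mean(0.5 * (kl_pq + kl_qp)))
```

Adding such a penalty to the classification loss encourages the invariance to subgroup differences described above, whether the augmented counterpart is synthetic (CAMEL) or a matched real sample (RealPatch).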
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.