Anchor Data Augmentation
- URL: http://arxiv.org/abs/2311.06965v2
- Date: Mon, 27 Nov 2023 19:22:27 GMT
- Title: Anchor Data Augmentation
- Authors: Nora Schneider, Shirin Goshtasbpour, Fernando Perez-Cruz
- Abstract summary: We propose a novel algorithm for data augmentation in nonlinear over-parametrized regression.
Our algorithm borrows from the causality literature and extends the recently proposed Anchor Regression (AR) method to data augmentation.
- Score: 53.39044919864444
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose a novel algorithm for data augmentation in nonlinear
over-parametrized regression. Our data augmentation algorithm borrows from the
literature on causality and extends the recently proposed Anchor Regression
(AR) method to data augmentation, in contrast to the current
state-of-the-art domain-agnostic solutions, which build on the Mixup literature.
Our Anchor Data Augmentation (ADA) uses several replicas of the modified
samples in AR to provide more training examples, leading to more robust
regression predictions. We apply ADA to linear and nonlinear regression
problems using neural networks. ADA is competitive with state-of-the-art
C-Mixup solutions.
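For intuition, the core of ADA is the Anchor Regression perturbation, which shifts each sample along its projection onto the anchor variables and replicates the data for several anchor strengths gamma. The sketch below is a minimal NumPy illustration under that reading; the helper `anchor_augment`, the random anchors, and the gamma values are assumptions for illustration, not the paper's exact procedure (which, for instance, can derive anchors from cluster assignments).

```python
import numpy as np

def anchor_augment(X, y, A, gammas):
    # Anchor-regression-style perturbation: x_tilde = x + (sqrt(g) - 1) * P_A x,
    # applied for several gamma values to create augmented replicas.
    P = A @ np.linalg.pinv(A)  # orthogonal projector onto the span of the anchors
    X_aug, y_aug = [], []
    for g in gammas:
        scale = np.sqrt(g) - 1.0
        X_aug.append(X + scale * (P @ X))
        y_aug.append(y + scale * (P @ y))
    return np.vstack(X_aug), np.concatenate(y_aug)

# Toy usage: three replicas of a small regression dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
A = rng.normal(size=(100, 2))  # anchor variables (illustrative)
X_new, y_new = anchor_augment(X, y, A, gammas=[0.5, 1.0, 2.0])
```

Note that gamma = 1 reproduces the original data, while gamma < 1 dampens and gamma > 1 amplifies the anchor component of each sample.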
Related papers
- Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z) - Out of the Ordinary: Spectrally Adapting Regression for Covariate Shift [12.770658031721435]
We propose a method for adapting the weights of the last layer of a pre-trained neural regression model to perform better on input data originating from a different distribution.
We demonstrate how this lightweight spectral adaptation procedure can improve out-of-distribution performance for synthetic and real-world datasets.
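The abstract does not spell out the adaptation rule, so the following is only a generic sketch of one spectral approach: re-solving the last linear layer by truncated-SVD least squares on features computed for the new distribution. The function name `refit_last_layer` and the hard truncation at k directions are illustrative assumptions, not the paper's method.

```python
import numpy as np

def refit_last_layer(features, targets, k):
    # Solve min_w ||features @ w - targets||^2, but invert only the top-k
    # singular directions of the feature matrix (spectral truncation).
    U, s, Vt = np.linalg.svd(features, full_matrices=False)
    s_inv = np.zeros_like(s)
    k = min(k, len(s))
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ targets))  # new last-layer weight vector
```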
arXiv Detail & Related papers (2023-12-29T04:15:58Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
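As a generic reminder of the Laplace idea underlying VLAEs (this is not the authors' model), one fits a Gaussian to a posterior by locating its mode and using an inverse-Hessian estimate of the negative log density as the covariance. The toy `neg_log_post` below is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_post(z):
    # Unnormalized negative log posterior of a toy non-Gaussian density.
    return 0.5 * z[0] ** 2 + 0.1 * z[0] ** 4

res = minimize(neg_log_post, x0=np.array([1.0]), method="BFGS")
mode = res.x        # MAP estimate (mode of the posterior)
cov = res.hess_inv  # BFGS inverse-Hessian estimate ~ Gaussian covariance
# Laplace approximation: posterior ~ N(mode, cov)
```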
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Angular upsampling in diffusion MRI using contextual HemiHex sub-sampling in q-space [0.0]
Incorporating relevant context into the data is important to give the AI model maximal prior information for inferring the posterior.
In this paper, we introduce HemiHex subsampling to address training-data sampling on the q-space geometry.
Our proposed approach is a geometrically optimized regression technique that infers the unknown q-space samples, addressing the limitations of earlier studies.
arXiv Detail & Related papers (2022-11-01T03:13:07Z) - Shuffled linear regression through graduated convex relaxation [12.614901374282868]
The shuffled linear regression problem aims to recover linear relationships in datasets where the correspondence between input and output is unknown.
This problem arises in a wide range of applications including survey data.
We propose a novel optimization algorithm for shuffled linear regression based on a posterior-maximizing objective function.
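The graduated convex relaxation itself is not reproduced here; as a baseline illustration of the problem setup, the sketch below alternates between a least-squares fit and re-matching responses to predictions by rank (the optimal one-dimensional assignment). `shuffled_ls` and its random initialization are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def shuffled_ls(X, y, iters=50, seed=0):
    # Alternating heuristic for shuffled linear regression:
    # (1) fit weights by least squares under the current correspondence,
    # (2) re-match responses to predictions by sorting (rank alignment).
    perm = np.random.default_rng(seed).permutation(len(y))
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X, y[perm], rcond=None)
        pred = X @ w
        perm = np.empty_like(perm)
        perm[np.argsort(pred)] = np.argsort(y)
    return w, perm
```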
arXiv Detail & Related papers (2022-09-30T17:33:48Z) - A flexible empirical Bayes approach to multiple linear regression and connections with penalized regression [8.663322701649454]
We introduce a new empirical Bayes approach for large-scale multiple linear regression.
Our approach combines two key ideas: the use of flexible "adaptive shrinkage" priors and variational approximations.
We show that the posterior mean from our method solves a penalized regression problem.
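The simplest instance of that correspondence is ridge regression: under an isotropic Gaussian prior on the coefficients, the posterior mean coincides with the solution of an L2-penalized least-squares problem. The paper's adaptive shrinkage priors are far more flexible, so the snippet below is only the textbook special case.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=50)
lam = 2.0

# Posterior mean under beta ~ N(0, (sigma^2 / lam) I) and Gaussian noise:
# (X^T X + lam I)^{-1} X^T y -- identical to the ridge-regression solution.
post_mean = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
```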
arXiv Detail & Related papers (2022-08-23T12:42:57Z) - Sample Efficiency of Data Augmentation Consistency Regularization [44.19833682906076]
We first present a simple and novel analysis for linear regression, demonstrating that data augmentation consistency (DAC) is intrinsically more efficient than empirical risk minimization on augmented data (DA-ERM).
We then propose a new theoretical framework for analyzing DAC, which reframes DAC as a way to reduce function class complexity.
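As a sketch of what a DAC objective looks like in general (the paper's analysis is for linear regression; `model` and `augment` are assumed callables here), the loss fits the clean data and separately penalizes disagreement between predictions on original and augmented inputs:

```python
import numpy as np

def dac_loss(model, X, y, augment, lam=1.0):
    pred = model(X)
    pred_aug = model(augment(X))
    erm = np.mean((pred - y) ** 2)                 # empirical risk on clean data
    consistency = np.mean((pred - pred_aug) ** 2)  # DAC regularizer
    # DA-ERM would instead treat augmented points as extra labeled samples,
    # i.e. minimize erm + np.mean((pred_aug - y) ** 2).
    return erm + lam * consistency
```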
arXiv Detail & Related papers (2022-02-24T17:50:31Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
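Algorithm unfolding turns the iterations of a classical solver into the layers of a network. The sketch below unrolls plain ISTA for the LASSO objective 0.5 * ||Ax - y||^2 + lam * ||x||_1; in a learned unrolling such as REST or LISTA, the step size and thresholds become trainable parameters, so the fixed constants here are simplifying assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(A, y, layers=10, lam=0.1):
    # Each loop iteration corresponds to one "layer" of the unrolled network.
    L = np.linalg.norm(A, ord=2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(layers):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```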
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - StreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of kernel ridge regression (KRR) require that all the data be stored in main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
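For context on the memory bottleneck that motivates streaming: textbook KRR materializes the full n x n kernel matrix, which is exactly what a streaming construction avoids. A minimal in-memory version follows (the RBF kernel and regularization value are arbitrary illustrative choices):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3):
    # The n x n matrix K is the memory bottleneck for large n.
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)

def krr_predict(X_train, alpha, X_test):
    return rbf_kernel(X_test, X_train) @ alpha
```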
arXiv Detail & Related papers (2021-08-23T21:03:09Z) - A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of the regression problem, where the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z)