Adaptive Noisy Data Augmentation for Regularized Estimation and
Inference in Generalized Linear Models
- URL: http://arxiv.org/abs/2204.08574v1
- Date: Mon, 18 Apr 2022 22:02:37 GMT
- Title: Adaptive Noisy Data Augmentation for Regularized Estimation and
Inference in Generalized Linear Models
- Authors: Yinan Li and Fang Liu
- Abstract summary: We propose the AdaPtive Noise Augmentation (PANDA) procedure to regularize the estimation and inference of generalized linear models (GLMs).
We demonstrate the superior or comparable performance of PANDA relative to existing approaches that use the same types of regularizers, on simulated and real-life data.
- Score: 15.817569026827451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the AdaPtive Noise Augmentation (PANDA) procedure to regularize
the estimation and inference of generalized linear models (GLMs). PANDA
iteratively optimizes the objective function given noise augmented data until
convergence to obtain the regularized model estimates. The augmented noises are
designed to achieve various regularization effects, including $l_0$, bridge
(lasso and ridge included), elastic net, adaptive lasso, and SCAD, as well as
group lasso and fused ridge. We examine the tail bound of the noise-augmented
loss function and establish the almost sure convergence of the noise-augmented
loss function and its minimizer to the expected penalized loss function and its
minimizer, respectively. We derive the asymptotic distributions for the
regularized parameters, based on which inferences can be obtained
simultaneously with variable selection. PANDA exhibits ensemble learning
behaviors that help further decrease the generalization error. Computationally,
PANDA is easy to code, leveraging existing software for implementing GLMs,
without resorting to complicated optimization techniques. We demonstrate the
superior or comparable performance of PANDA relative to existing approaches
that use the same types of regularizers, on simulated and real-life data. We show that the
inferences through PANDA achieve nominal or near-nominal coverage and are far
more efficient compared to a popular existing post-selection procedure.
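The iterative scheme is simple enough to sketch. Below is a minimal, hypothetical Python illustration of the ridge-targeting case in a linear model; the function name `panda_ridge` and arguments `lam` and `n_aug` are illustrative, not the authors' code, and the GLM and other-penalty cases would shape the augmented noise differently.

```python
import numpy as np

def panda_ridge(X, y, lam=1.0, n_aug=100, n_iter=200, seed=0):
    """Minimal PANDA-style loop targeting a ridge (l2) penalty.

    Each iteration appends n_aug noisy pseudo-rows with zero responses and
    refits ordinary least squares on the augmented data; post-burn-in
    iterates are averaged to stabilize the noisy fits.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    betas = []
    for t in range(n_iter):
        # Noise rows e ~ N(0, lam / n_aug) per coordinate give
        # E[E.T @ E] = lam * I, so the augmented least-squares loss
        # ||y - X b||^2 + ||E b||^2 matches ridge in expectation.
        E = rng.normal(0.0, np.sqrt(lam / n_aug), size=(n_aug, p))
        X_aug = np.vstack([X, E])
        y_aug = np.concatenate([y, np.zeros(n_aug)])
        beta = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]
        if t >= n_iter // 2:  # discard burn-in, average the rest
            betas.append(beta)
    return np.mean(betas, axis=0)
```

Per the abstract, the other penalties (l0, lasso, SCAD, adaptive lasso, group lasso, fused ridge) are achieved by designing the augmented-noise distribution accordingly, with the noise scale typically adapting to the current parameter estimates.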
Related papers
- Robust Gaussian Processes via Relevance Pursuit [17.39376866275623]
We propose and study a GP model that achieves robustness against sparse outliers by inferring data-point-specific noise levels.
We show, surprisingly, that the model can be parameterized such that the associated log marginal likelihood is strongly concave in the data-point-specific noise variances.
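As a rough illustration of data-point-specific noise levels (a generic sketch, not the paper's implementation), the GP log marginal likelihood below carries one noise variance per observation; inflating a point's variance lets the model explain it away as an outlier.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix on rows of X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_log_marginal(y, K, noise_vars):
    """log N(y | 0, K + diag(noise_vars)); per-point noise variances
    downweight suspected outliers instead of corrupting the fit."""
    C = K + np.diag(noise_vars)
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))
```

The concavity result is what makes maximizing this quantity over the per-point noise variances tractable; the paper's relevance-pursuit procedure selects which few points get elevated noise.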
arXiv Detail & Related papers (2024-10-31T17:59:56Z) - On the Implicit Relation Between Low-Rank Adaptation and Differential Privacy [5.359060261460183]
Low-rank task adaptation of language models, e.g., LoRA and FLoRA, has been proposed.
We look at low-rank adaptation through the lens of data privacy.
Unlike other existing fine-tuning algorithms, low-rank adaptation implicitly provides privacy with respect to the fine-tuning data.
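For context, a generic low-rank adaptation forward pass looks like the sketch below (the standard LoRA formulation; nothing here is specific to this paper's privacy analysis).

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """LoRA-style layer: the frozen weight W (d_out x d_in) is perturbed by
    a rank-r update (alpha / r) * B @ A, and only A (r x d_in) and
    B (d_out x r) see gradients from the fine-tuning data."""
    r = A.shape[0]
    return x @ (W + (alpha / r) * B @ A).T
```

The paper's claim is that training only this restricted low-rank update implicitly confers privacy with respect to the fine-tuning data.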
arXiv Detail & Related papers (2024-09-26T04:56:49Z) - Robust Learning under Hybrid Noise [24.36707245704713]
We propose a novel unified learning framework called "Feature and Label Recovery" (FLR) to combat the hybrid noise from the perspective of data recovery.
arXiv Detail & Related papers (2024-07-04T16:13:25Z) - A Corrected Expected Improvement Acquisition Function Under Noisy
Observations [22.63212972670109]
Sequential maximization of expected improvement (EI) is one of the most widely used policies in Bayesian optimization.
The uncertainty associated with the incumbent solution is often neglected in many analytic EI-type methods.
We propose a modification of EI that corrects its closed-form expression by incorporating the covariance information provided by the Gaussian Process (GP) model.
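One plausible form of such a correction treats the incumbent as a Gaussian quantity and applies the usual EI closed form to the difference between incumbent and candidate, including their cross-covariance (a sketch of the idea; the paper's exact expression may differ).

```python
import numpy as np
from scipy.stats import norm

def corrected_ei(mu_x, var_x, mu_inc, var_inc, cov_x_inc):
    """EI of the Gaussian difference d = f(x_inc) - f(x) (minimization).
    Unlike plug-in EI, the incumbent's own variance and its covariance
    with the candidate both enter Var[d]."""
    mean_d = mu_inc - mu_x
    var_d = max(var_x + var_inc - 2.0 * cov_x_inc, 1e-12)
    sd = np.sqrt(var_d)
    z = mean_d / sd
    return mean_d * norm.cdf(z) + sd * norm.pdf(z)
```

Setting var_inc and cov_x_inc to zero recovers the usual plug-in EI with a noiseless incumbent.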
arXiv Detail & Related papers (2023-10-08T13:50:39Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have been applied to sequential recommendation, but each has drawbacks:
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
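The step-wise diffuser rests on the standard forward-corruption process of denoising diffusion; a generic DDPM-style sketch is below (the sequence encoder and cross-attentive decoder are architecture details omitted here).

```python
import numpy as np

def make_alpha_bars(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linear noise schedule; alpha_bar_t = prod_s (1 - beta_s).
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bars, rng):
    """Forward diffusion q(x_t | x_0): produces the corrupted target the
    conditional denoising decoder is trained to invert. Returns the noised
    sample and the noise, the usual regression target."""
    eps = rng.standard_normal(x0.shape)
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps, eps
```

Training then regresses the decoder's noise prediction, conditioned on the encoded interaction sequence, against eps; generation runs the learned reverse steps.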
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - MAPS: A Noise-Robust Progressive Learning Approach for Source-Free
Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z) - Differentially Private Learning with Per-Sample Adaptive Clipping [8.401653565794353]
We propose a Differentially Private Per-Sample Adaptive Clipping (DP-PSAC) algorithm based on a non-monotonic adaptive weight function.
We show that DP-PSAC outperforms or matches state-of-the-art methods on multiple mainstream vision and language tasks.
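A generic per-sample clipping step in DP-SGD is sketched below; DP-PSAC's contribution is replacing the standard min(1, C/||g||) factor with a non-monotonic adaptive weight function, whose exact form is not reproduced here.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """Standard DP-SGD step: clip each per-sample gradient to norm
    <= clip_norm, sum, add Gaussian noise calibrated to clip_norm.
    DP-PSAC swaps the clipping factor for an adaptive weight."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```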
arXiv Detail & Related papers (2022-12-01T07:26:49Z) - Post-Processing Temporal Action Detection [134.26292288193298]
Temporal Action Detection (TAD) methods typically take a pre-processing step that converts an input video of varying length into a fixed-length sequence of snippet representations.
This pre-processing step would temporally downsample the video, reducing the inference resolution and hampering the detection performance in the original temporal resolution.
We introduce a novel model-agnostic post-processing method without model redesign and retraining.
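As a toy illustration of recovering the original temporal resolution without retraining (a generic sketch, not the paper's specific method), snippet-level scores can be resampled back onto the native frame grid:

```python
import numpy as np

def upsample_scores(snippet_scores, n_frames):
    """Interpolate fixed-length snippet-level action scores back onto the
    original frame grid so boundaries can be localized at the video's
    native temporal resolution."""
    t_snip = np.linspace(0.0, 1.0, num=len(snippet_scores))
    t_frame = np.linspace(0.0, 1.0, num=n_frames)
    return np.interp(t_frame, t_snip, snippet_scores)
```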
arXiv Detail & Related papers (2022-11-27T19:50:37Z) - Fine-grained Retrieval Prompt Tuning [149.9071858259279]
Fine-grained Retrieval Prompt Tuning (FRPT) steers a frozen pre-trained model to perform the fine-grained retrieval task from the perspectives of sample prompting and feature adaptation.
With fewer learnable parameters, FRPT achieves state-of-the-art performance on three widely-used fine-grained datasets.
arXiv Detail & Related papers (2022-07-29T04:10:04Z) - Partial Identification with Noisy Covariates: A Robust Optimization
Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
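A toy version of the per-dimension (composite) score-matching objective, using the classic integration-by-parts identity E[0.5 * s_d^2 + ds_d/dx_d] for each conditional score, might look as follows; finite differences stand in for the autograd a real implementation would use, and all names are hypothetical.

```python
import numpy as np

def ar_csm_loss(X, score_fns, eps=1e-4):
    """Composite score matching: score_fns[d] models the conditional score
    of column d given columns 0..d-1, i.e. d/dx_d log p(x_d | x_<d); each
    term is the Hyvarinen estimator E[0.5 * s_d^2 + ds_d/dx_d]."""
    n, D = X.shape
    loss = 0.0
    for d, s in enumerate(score_fns):
        vals = s(X[:, :d + 1])
        Xp = X[:, :d + 1].copy()
        Xp[:, d] += eps               # perturb the modeled dimension x_d
        dvals = (s(Xp) - vals) / eps  # finite-difference derivative in x_d
        loss += np.mean(0.5 * vals**2 + dvals)
    return loss / D
```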
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.