Adversarial Regression with Doubly Non-negative Weighting Matrices
- URL: http://arxiv.org/abs/2109.14875v1
- Date: Thu, 30 Sep 2021 06:41:41 GMT
- Title: Adversarial Regression with Doubly Non-negative Weighting Matrices
- Authors: Tam Le and Truyen Nguyen and Makoto Yamada and Jose Blanchet and Viet Anh Nguyen
- Abstract summary: We propose a novel and coherent scheme for kernel-reweighted regression by reparametrizing the sample weights using a doubly non-negative matrix.
Numerical experiments show that our reweighting strategy delivers promising results on numerous datasets.
- Score: 25.733130388781476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many machine learning tasks that involve predicting an output response can be
solved by training a weighted regression model. Unfortunately, the predictive
power of models of this type may deteriorate severely under low sample sizes or
under covariate perturbations. Reweighting the training samples has emerged as
an effective strategy to mitigate these problems. In this paper, we propose a
novel and coherent scheme for kernel-reweighted regression by reparametrizing
the sample weights using a doubly non-negative matrix. When the weighting
matrix is confined in an uncertainty set using either the log-determinant
divergence or the Bures-Wasserstein distance, we show that the adversarially
reweighted estimate can be solved efficiently using first-order methods.
Numerical experiments show that our reweighting strategy delivers promising
results on numerous datasets.
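To make the weighting scheme concrete, here is a minimal sketch (not the paper's algorithm) of the building blocks: a Gaussian kernel matrix is doubly non-negative, meaning entrywise non-negative and positive semidefinite, and sample weights derived from it feed a weighted least-squares fit. The row-mean reparametrization below is an illustrative assumption, and the adversarial step over the log-determinant or Bures-Wasserstein uncertainty set is omitted.

```python
import numpy as np

def gaussian_kernel(X, bandwidth=1.0):
    """Gaussian kernel matrix: entrywise non-negative and positive
    semidefinite, hence doubly non-negative."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * bandwidth**2))

def weighted_least_squares(X, y, w):
    """Solve min_beta sum_i w_i (y_i - x_i^T beta)^2 in closed form."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

K = gaussian_kernel(X)        # a doubly non-negative matrix
w = K.mean(axis=1)            # illustrative reparametrization of weights via K
w = w / w.sum() * len(w)      # normalize to mean 1
print(weighted_least_squares(X, y, w))
```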
Related papers
- Distributed High-Dimensional Quantile Regression: Estimation Efficiency and Support Recovery [0.0]
We focus on distributed estimation and support recovery for high-dimensional linear quantile regression.
We transform the original quantile regression problem into a least-squares optimization.
An efficient algorithm is developed, which enjoys high computational and communication efficiency.
arXiv Detail & Related papers (2024-05-13T08:32:22Z) - Selective Nonparametric Regression via Testing [54.20569354303575]
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure based on testing a hypothesis about the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z) - Errors-in-variables Fr\'echet Regression with Low-rank Covariate
- Errors-in-variables Fréchet Regression with Low-rank Covariate Approximation [2.1756081703276]
Fréchet regression has emerged as a promising approach for regression analysis involving non-Euclidean response variables.
Our proposed framework combines the concepts of global Fréchet regression and principal component regression, aiming to improve the efficiency and accuracy of the regression estimator.
arXiv Detail & Related papers (2023-05-16T08:37:54Z) - Augmented balancing weights as linear regression [3.877356414450364]
- Augmented balancing weights as linear regression [3.877356414450364]
We provide a novel characterization of augmented balancing weights, also known as automatic debiased machine learning (AutoDML).
We show that the augmented estimator is equivalent to a single linear model whose coefficients combine those of the original outcome model with those of an unpenalized ordinary least squares (OLS) fit on the same data.
Our framework opens the black box on this increasingly popular class of estimators.
arXiv Detail & Related papers (2023-04-27T21:53:54Z) - Vector-Valued Least-Squares Regression under Output Regularity
- Vector-Valued Least-Squares Regression under Output Regularity Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite dimensional output.
We derive learning bounds for our method and study the settings in which statistical performance is improved in comparison to the full-rank method.
arXiv Detail & Related papers (2022-11-16T15:07:00Z) - Learning to Re-weight Examples with Optimal Transport for Imbalanced
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z) - Coordinated Double Machine Learning [8.808993671472349]
- Coordinated Double Machine Learning [8.808993671472349]
This paper argues that a carefully coordinated learning algorithm for deep neural networks may reduce the estimation bias.
The improved empirical performance of the proposed method is demonstrated through numerical experiments on both simulated and real data.
arXiv Detail & Related papers (2022-06-02T05:56:21Z) - Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable, the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z) - Learning to Reweight Imaginary Transitions for Model-Based Reinforcement
Learning [58.66067369294337]
When the model is inaccurate or biased, imaginary trajectories may be deleterious for training the action-value and policy functions.
We adaptively reweight the imaginary transitions, so as to reduce the negative effects of poorly generated trajectories.
Our method outperforms state-of-the-art model-based and model-free RL algorithms on multiple tasks.
arXiv Detail & Related papers (2021-04-09T03:13:35Z) - Why resampling outperforms reweighting for correcting sampling bias with
stochastic gradients [10.860844636412862]
Training machine learning models on biased data sets requires correction techniques to compensate for the bias.
We consider two commonly-used techniques, resampling and reweighting, that rebalance the proportions of the subgroups to maintain the desired objective function.
arXiv Detail & Related papers (2020-09-28T16:12:38Z) - Machine learning for causal inference: on the use of cross-fit
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators of the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences of its use.