Wasserstein Distributionally Robust Nonparametric Regression
- URL: http://arxiv.org/abs/2505.07967v1
- Date: Mon, 12 May 2025 18:07:37 GMT
- Title: Wasserstein Distributionally Robust Nonparametric Regression
- Authors: Changyu Liu, Yuling Jiao, Junhui Wang, Jian Huang
- Abstract summary: This paper studies the generalization properties of Wasserstein distributionally robust nonparametric estimators. We establish non-asymptotic error bounds for the excess local worst-case risk. The robustness of the proposed estimator is evaluated through simulation studies and illustrated with an application to the MNIST dataset.
- Score: 9.65010022854885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distributionally robust optimization has become a powerful tool for prediction and decision-making under model uncertainty. By focusing on the local worst-case risk, it enhances robustness by identifying the most unfavorable distribution within a predefined ambiguity set. While extensive research has been conducted in parametric settings, studies on nonparametric frameworks remain limited. This paper studies the generalization properties of Wasserstein distributionally robust nonparametric estimators, with particular attention to the impact of model misspecification, where non-negligible discrepancies between the estimation function space and target function can impair generalization performance. We establish non-asymptotic error bounds for the excess local worst-case risk by analyzing the regularization effects induced by distributional perturbations and employing feedforward neural networks with Lipschitz constraints. These bounds illustrate how uncertainty levels and neural network structures influence generalization performance and are applicable to both Lipschitz and quadratic loss functions. Furthermore, we investigate the Lagrangian relaxation of the local worst-case risk and derive corresponding non-asymptotic error bounds for these estimators. The robustness of the proposed estimator is evaluated through simulation studies and illustrated with an application to the MNIST dataset.
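For reference, the local worst-case risk and its Lagrangian relaxation can be written in the standard Wasserstein-DRO form below; this is the common formulation in the literature and may differ from the paper's exact notation.

```latex
% Local worst-case risk over a Wasserstein ball of radius \rho around the data distribution P:
R_\rho(f) = \sup_{Q \,:\, W(Q,\,P) \le \rho} \mathbb{E}_{(X,Y) \sim Q}\big[\ell(f(X),\,Y)\big]

% Lagrangian relaxation with multiplier \gamma \ge 0 and transport cost c:
R_\gamma(f) = \mathbb{E}_{(X,Y) \sim P}\Big[\sup_{(x',\,y')} \big\{\ell(f(x'),\,y') - \gamma\, c\big((x',y'),\,(X,Y)\big)\big\}\Big]
```

As a concrete illustration, here is a minimal PyTorch sketch of a Lagrangian-relaxed WDRO regression estimator in the style of adversarial training: the inner supremum is approximated by a few gradient-ascent steps on an input perturbation, and the Lipschitz constraint on the feedforward network is approximated via spectral normalization. The architecture, gamma, step counts, and learning rates are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative Lipschitz-constrained feedforward network: spectral normalization
# bounds each layer's operator norm (an approximation of the paper's constraint).
model = nn.Sequential(
    nn.utils.parametrizations.spectral_norm(nn.Linear(1, 64)),
    nn.ReLU(),
    nn.utils.parametrizations.spectral_norm(nn.Linear(64, 1)),
)
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

gamma = 10.0     # Lagrange multiplier for the transport-cost penalty (assumed)
inner_steps = 5  # gradient-ascent steps approximating the inner supremum (assumed)
inner_lr = 0.1

def wdro_loss(x, y):
    """Approximate E_P[ sup_delta { loss(f(x + delta), y) - gamma * ||delta||^2 } ]
    by a few steps of gradient ascent on the input perturbation delta."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(inner_steps):
        obj = loss_fn(model(x + delta), y) - gamma * delta.pow(2).mean()
        (grad,) = torch.autograd.grad(obj, delta)
        delta = (delta + inner_lr * grad).detach().requires_grad_(True)
    delta = delta.detach()
    return loss_fn(model(x + delta), y) - gamma * delta.pow(2).mean()

# Synthetic regression data: y = sin(2*pi*x) + noise.
x = torch.rand(256, 1)
y = torch.sin(2 * torch.pi * x) + 0.1 * torch.randn(256, 1)

for step in range(200):
    opt.zero_grad()
    wdro_loss(x, y).backward()
    opt.step()
```

Restricting perturbations to the inputs corresponds to a transport cost that moves mass only in the covariates; perturbing (x, y) jointly is a straightforward extension of the same sketch.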
Related papers
- Debiased Ill-Posed Regression [8.495265117285223]
We propose a debiased estimation strategy based on the influence function of a modification of the projected error. Our proposed estimator possesses a second-order bias with respect to the involved nuisance functions.
arXiv Detail & Related papers (2025-05-27T06:47:33Z)
- Active operator learning with predictive uncertainty quantification for partial differential equations [6.519088943440059]
We develop a method for uncertainty quantification in deep operator networks (DeepONets) using predictive uncertainty estimates calibrated to model errors observed during training. The uncertainty framework operates using a single network, in contrast to existing ensemble approaches, and introduces minimal overhead during training and inference. We evaluate the uncertainty-equipped models on a series of partial differential equation (PDE) problems, and show that the model predictions are unbiased, non-skewed, and accurately reproduce solutions to the PDEs.
arXiv Detail & Related papers (2025-03-05T04:48:14Z)
- Generalized Gaussian Temporal Difference Error for Uncertainty-aware Reinforcement Learning [0.19418036471925312]
We introduce a novel framework for generalized Gaussian error modeling in deep reinforcement learning. We improve the estimation and mitigation of data-dependent aleatoric uncertainty. Experiments with policy gradient algorithms demonstrate significant performance gains.
arXiv Detail & Related papers (2024-08-05T08:12:25Z)
- Error Bounds of Supervised Classification from Information-Theoretic Perspective [0.0]
We explore bounds on the expected risk when using deep neural networks for supervised classification from an information-theoretic perspective.
We introduce model risk and fitting error, which are derived from further decomposing the empirical risk.
arXiv Detail & Related papers (2024-06-07T01:07:35Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC), which can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z) - Wasserstein Distributionally Robust Estimation in High Dimensions: Performance Analysis and Optimal Hyperparameter Tuning [2.4578723416255754]
Distributionally robust optimization (DRO) has become a powerful framework for estimation under uncertainty. We propose a DRO-based method for linear regression and address a central question: how to optimally choose the robustness radius. We show that our method achieves the same effect as cross-validation, but at a fraction of the computational cost.
arXiv Detail & Related papers (2022-06-27T13:02:59Z)
- Adversarial robustness of sparse local Lipschitz predictors [12.525959293825318]
This work studies the adversarial robustness of parametric functions composed of a linear predictor and a non-linear representation map.
We use sparse local Lipschitzness to better capture the stability and reduced effective dimensionality of predictors upon local perturbations.
arXiv Detail & Related papers (2022-02-26T19:48:07Z)
- Improving Generalization via Uncertainty Driven Perturbations [107.45752065285821]
We consider uncertainty-driven perturbations of the training data points.
Unlike loss-driven perturbations, uncertainty-guided perturbations do not cross the decision boundary.
We show that UDP is guaranteed to achieve the maximum robustness margin on linear models.
arXiv Detail & Related papers (2022-02-11T16:22:08Z)
- Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- On Low-rank Trace Regression under General Sampling Distribution [9.699586426043885]
We show that cross-validated estimators satisfy near-optimal error bounds under general assumptions.
We also show that the cross-validated estimator outperforms the theory-inspired approach of selecting the tuning parameter.
arXiv Detail & Related papers (2019-04-18T02:56:00Z)