Distributionally Robust Learning with Stable Adversarial Training
- URL: http://arxiv.org/abs/2106.15791v1
- Date: Wed, 30 Jun 2021 03:05:45 GMT
- Title: Distributionally Robust Learning with Stable Adversarial Training
- Authors: Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li
- Abstract summary: Machine learning algorithms based on empirical risk minimization are vulnerable to distributional shifts.
We propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set.
- Score: 34.74504615726101
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning algorithms based on empirical risk minimization are
vulnerable to distributional shifts because they greedily adopt all the
correlations found in the training data. There is an emerging literature on tackling this
problem by minimizing the worst-case risk over an uncertainty set. However,
existing methods mostly construct ambiguity sets by treating all variables
equally regardless of the stability of their correlations with the target,
resulting in an overwhelmingly large uncertainty set and low confidence of the
learner. In this paper, we propose a novel Stable Adversarial Learning (SAL)
algorithm that leverages heterogeneous data sources to construct a more
practical uncertainty set and conduct differentiated robustness optimization,
where covariates are differentiated according to the stability of their
correlations with the target. We theoretically show that our method is
tractable for stochastic gradient-based optimization and provide
performance guarantees for our method. Empirical studies on both simulated and
real datasets validate the effectiveness of our method in terms of uniformly
good performance across unknown distributional shifts.
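To make the differentiated-robustness idea concrete, below is a minimal, hypothetical sketch of SAL-style training, not the authors' released implementation. The DRO objective being approximated is min_theta sup_{Q in U} E_Q[loss(theta; X, Y)], where the uncertainty set U is shaped anisotropically across covariates. The budget heuristic `covariate_budgets` is an illustrative stand-in: the paper learns the uncertainty-set weights jointly with the model, whereas here the budgets are simply derived from how much each covariate's correlation with the target varies across heterogeneous sources.

```python
# Hypothetical sketch of SAL-style differentiated adversarial training.
# `covariate_budgets`, `sal_step`, and the synthetic data are illustrative.
import numpy as np
import torch
import torch.nn as nn

def covariate_budgets(envs, base_eps=0.5):
    """Assumed heuristic: covariates whose correlation with the target
    varies across heterogeneous sources are treated as unstable and get
    larger perturbation budgets; stable covariates get smaller ones, so
    the uncertainty set is not uniformly large in every direction."""
    corrs = np.stack([
        np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
        for X, y in envs
    ])                                   # shape (n_envs, n_features)
    instability = corrs.std(axis=0)      # high = unstable correlation
    return base_eps * instability / (instability.mean() + 1e-12)

def sal_step(model, opt, X, y, eps, pgd_steps=5, lr_adv=0.1):
    """One outer step: PGD-style inner maximization under the
    covariate-wise box budgets `eps`, then a descent step on the
    worst-case loss."""
    loss_fn = nn.MSELoss()
    delta = torch.zeros_like(X, requires_grad=True)
    for _ in range(pgd_steps):                        # inner maximization
        loss = loss_fn(model(X + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr_adv * grad.sign()
            delta.clamp_(-eps, eps)                   # differentiated budgets
    opt.zero_grad()
    loss_fn(model(X + delta.detach()), y).backward()  # outer minimization
    opt.step()

# Illustrative usage: X[:, 0] has a stable coefficient, X[:, 2] an
# unstable one whose strength shifts across the two sources.
rng = np.random.default_rng(0)
envs = []
for shift in (0.5, 2.0):
    X = rng.normal(size=(256, 3)).astype(np.float32)
    y = (2.0 * X[:, 0] + shift * X[:, 2]
         + 0.1 * rng.normal(size=256)).astype(np.float32)
    envs.append((X, y))

eps = torch.tensor(covariate_budgets(envs), dtype=torch.float32)
model = nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
X_all = torch.from_numpy(np.vstack([X for X, _ in envs]))
y_all = torch.from_numpy(np.hstack([y for _, y in envs])).unsqueeze(1)
for _ in range(100):
    sal_step(model, opt, X_all, y_all, eps)
```

The inner PGD loop approximates the worst-case risk over the box-shaped set, and the per-covariate `eps` is what differentiates stable from unstable directions; with a uniform `eps` this reduces to standard adversarial training.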
Related papers
- Uncertainty for Active Learning on Graphs [70.44714133412592]
Uncertainty Sampling is an Active Learning strategy that aims to improve the data efficiency of machine learning models.
We benchmark Uncertainty Sampling beyond predictive uncertainty and highlight a significant performance gap relative to other Active Learning strategies.
We develop ground-truth Bayesian uncertainty estimates in terms of the data generating process and prove their effectiveness in guiding Uncertainty Sampling toward optimal queries.
arXiv Detail & Related papers (2024-05-02T16:50:47Z)
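For reference, a minimal sketch of the acquisition rule benchmarked in the paper above, assuming a plain entropy criterion over predicted class probabilities (a textbook baseline, not the paper's ground-truth Bayesian estimates):

```python
# Minimal, illustrative entropy-based uncertainty sampling (textbook
# acquisition rule; names and setup are hypothetical).
import numpy as np

def uncertainty_sampling(probs, k=10):
    """Select the k unlabeled points whose predictive distribution has
    the highest entropy. probs: (n_unlabeled, n_classes) probabilities."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:k]     # indices of k most uncertain points

# Example: the near-uniform predictions (indices 0 and 2) are queried first.
probs = np.array([[0.5, 0.5], [0.9, 0.1], [0.6, 0.4]])
print(uncertainty_sampling(probs, k=2))  # -> [0 2]
```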
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning carries the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real-world ones and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z)
- Distributed Distributionally Robust Optimization with Non-Convex Objectives [24.64654924173679]
An asynchronous distributed algorithm named Asynchronous Single-looP alternatIve gRadient projEction (ASPIRE) is proposed.
A new uncertainty set, the constrained D-norm uncertainty set, is developed to leverage the prior distribution and flexibly control the degree of robustness.
Empirical studies on real-world datasets demonstrate that the proposed method not only achieves fast convergence but also remains robust against data heterogeneity as well as malicious attacks.
arXiv Detail & Related papers (2022-10-14T07:39:13Z)
- Distributionally Robust Learning in Heterogeneous Contexts [29.60681287631439]
We consider the problem of learning from training data obtained in different contexts, where the test data is subject to distributional shifts.
We develop a distributionally robust method that focuses on excess risks and achieves a more appropriate trade-off between performance and robustness than the conventional and overly conservative minimax approach.
arXiv Detail & Related papers (2021-05-18T14:00:34Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- A Stochastic Subgradient Method for Distributionally Robust Non-Convex Learning [2.007262412327553]
Robustness is with respect to uncertainty in the underlying data distribution.
We show that our technique converges to points satisfying approximate stationarity conditions.
We also illustrate the performance of our algorithm on real datasets.
arXiv Detail & Related papers (2020-06-08T18:52:40Z)
- Stable Adversarial Learning under Distributional Shifts [46.98655899839784]
Machine learning algorithms based on empirical risk minimization are vulnerable to distributional shifts.
We propose the Stable Adversarial Learning (SAL) algorithm, which leverages heterogeneous data sources to construct a more practical uncertainty set.
arXiv Detail & Related papers (2020-06-08T08:42:34Z)