Stable Adversarial Learning under Distributional Shifts
- URL: http://arxiv.org/abs/2006.04414v2
- Date: Tue, 11 May 2021 02:27:32 GMT
- Title: Stable Adversarial Learning under Distributional Shifts
- Authors: Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li,
Yishi Lin
- Abstract summary: Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts.
We propose the Stable Adversarial Learning (SAL) algorithm, which leverages heterogeneous data sources to construct a more practical uncertainty set.
- Score: 46.98655899839784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning algorithms with empirical risk minimization are vulnerable
under distributional shifts due to the greedy adoption of all the correlations
found in training data. Recently, robust learning methods have been proposed to
address this problem by minimizing the worst-case risk over an uncertainty set.
However, they treat all covariates equally when forming the decision sets,
regardless of the stability of their correlations with the target, resulting in
an overwhelmingly large set and low confidence of the learner. In this paper, we
propose the Stable Adversarial Learning (SAL) algorithm, which leverages
heterogeneous data sources to construct a more practical uncertainty set and to
conduct differentiated robustness optimization, where covariates are
differentiated according to the stability of their correlations with the
target. We theoretically show that our method is tractable for stochastic
gradient-based optimization and provide performance guarantees for our
method. Empirical studies on both simulation and real datasets validate the
effectiveness of our method in terms of uniformly good performance across
unknown distributional shifts.
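As a rough illustration of the idea of differentiated robustness, the sketch below trains a linear model adversarially with per-covariate perturbation budgets derived from how much each covariate's correlation with the target varies across heterogeneous environments. This is a minimal sketch under invented data and heuristics (the environments, the stability score, and the budget scaling are assumptions for illustration), not the authors' SAL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy heterogeneous environments: x0 is stable (causal), x1 is unstable ---
def make_env(n, spurious_coef):
    x0 = rng.normal(size=n)                       # stable covariate
    y = 2.0 * x0 + 0.5 * rng.normal(size=n)       # target depends only on x0
    x1 = spurious_coef * y + rng.normal(size=n)   # correlation with y shifts per env
    return np.column_stack([x0, x1]), y

envs = [make_env(500, 2.0), make_env(500, -0.5)]  # two training environments

# --- Stability scores: spread of per-environment correlations with the target ---
corrs = np.array([[np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
                  for X, y in envs])
instability = corrs.std(axis=0)                   # high value = unstable covariate

# Unstable covariates get a larger adversarial budget (differentiated robustness).
eps = 0.5 * instability / (instability.max() + 1e-12)

# --- Adversarial training of a linear model with per-covariate budgets ---
X = np.vstack([X for X, _ in envs])
y = np.concatenate([y for _, y in envs])
w = np.zeros(X.shape[1])
lr = 0.01
for step in range(5000):
    i = rng.integers(len(y))
    xi, yi = X[i], y[i]
    r = w @ xi - yi
    # Closed-form worst-case perturbation for squared loss under |delta_j| <= eps_j
    delta = eps * np.sign(w) * np.sign(r)
    xi_adv = xi + delta
    grad = 2.0 * (w @ xi_adv - yi) * xi_adv       # SGD step on the worst-case loss
    w -= lr * grad

print("per-covariate budgets:", np.round(eps, 3))  # larger budget on the unstable covariate
print("learned weights:      ", np.round(w, 3))
```

The closed-form worst-case perturbation is exact here only because the model is linear and the loss is squared; a general model would need an explicit inner maximization step.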
Related papers
- The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Towards stable real-world equation discovery with assessing
differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real-world ones and their ability to ensure the convergence of equation discovery algorithms.
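The summary does not name the specific alternatives studied; as a generic illustration of why differentiation quality matters for equation discovery, the sketch below compares plain finite differences with a Savitzky-Golay smoothed derivative on noisy samples (the signal, noise level, and filter settings are arbitrary choices for illustration).

```python
import numpy as np
from scipy.signal import savgol_filter

# Noisy samples of u(t) = sin(t); the true derivative is cos(t).
t = np.linspace(0, 4 * np.pi, 400)
dt = t[1] - t[0]
u = np.sin(t) + 0.02 * np.random.default_rng(0).normal(size=t.size)

du_fd = np.gradient(u, dt)                       # plain finite differences amplify noise
du_sg = savgol_filter(u, window_length=31,
                      polyorder=3, deriv=1,
                      delta=dt)                  # locally smoothed derivative

true = np.cos(t)
print("finite-difference RMSE: ", np.sqrt(np.mean((du_fd - true) ** 2)))
print("smoothed-derivative RMSE:", np.sqrt(np.mean((du_sg - true) ** 2)))
```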
arXiv Detail & Related papers (2023-11-09T23:32:06Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - Distributed Distributionally Robust Optimization with Non-Convex
Objectives [24.64654924173679]
An asynchronous distributed algorithm named Asynchronous Single-looP alternatIve gRadient projEction (ASPIRE) is proposed.
A new uncertainty set, the constrained D-norm uncertainty set, is developed to leverage the prior distribution and flexibly control the degree of robustness.
Empirical studies on real-world datasets demonstrate that the proposed method not only achieves fast convergence but also remains robust against data heterogeneity as well as malicious attacks.
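The constrained D-norm set itself is not detailed in this summary; for intuition, the sketch below computes the worst case over the classical D-norm (budgeted) uncertainty set of Bertsimas and Sim, where a budget Gamma controls how conservative the robust solution is (the vector `a` and the budgets are illustrative assumptions).

```python
import numpy as np

def dnorm_worst_case(a, gamma):
    """Worst case of sum_i a_i * z_i over the classical D-norm (budgeted)
    uncertainty set {z : |z_i| <= 1 for all i, sum_i |z_i| <= gamma}.

    gamma interpolates between no robustness (gamma = 0) and full box
    robustness (gamma = len(a)), which is how the budget controls
    conservativeness."""
    mag = np.abs(a)
    order = np.argsort(-mag)                 # coordinates by decreasing impact
    z = np.zeros_like(a, dtype=float)
    budget = float(gamma)
    for i in order:
        step = min(1.0, budget)
        z[i] = step * np.sign(a[i])          # spend the budget on the largest |a_i|
        budget -= step
        if budget <= 0:
            break
    return float(a @ z), z

a = np.array([3.0, -1.0, 0.5, 2.0])
for g in (0, 1, 2.5, 4):
    val, _ = dnorm_worst_case(a, g)
    print(f"gamma={g}: worst-case value = {val}")
```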
arXiv Detail & Related papers (2022-10-14T07:39:13Z) - Distributionally Robust Learning with Stable Adversarial Training [34.74504615726101]
Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts.
We propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set.
arXiv Detail & Related papers (2021-06-30T03:05:45Z) - Distributionally Robust Learning in Heterogeneous Contexts [29.60681287631439]
We consider the problem of learning from training data obtained in different contexts, where the test data is subject to distributional shifts.
We develop a distributionally robust method that focuses on excess risks and achieves a more appropriate trade-off between performance and robustness than the conventional and overly conservative minimax approach.
arXiv Detail & Related papers (2021-05-18T14:00:34Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - A Stochastic Subgradient Method for Distributionally Robust Non-Convex
Learning [2.007262412327553]
Robustness is with respect to uncertainty in the underlying data distribution.
We show that our technique converges to points satisfying appropriate stationarity conditions.
We also illustrate the performance of our algorithm on real datasets.
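The exact uncertainty set used by the paper is not stated in this summary; as a minimal stand-in, the sketch below runs a stochastic subgradient method on a worst-case (group-DRO-style) objective over two data groups, taking a subgradient at whichever group currently has the larger estimated loss (the data and group construction are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two data groups, forming a finite uncertainty set over which we take the worst case.
def make_group(n, slope):
    x = rng.normal(size=n)
    y = slope * x + 0.1 * rng.normal(size=n)
    return x, y

groups = [make_group(300, 1.0), make_group(300, 2.0)]

w = 0.0
lr = 0.05
for step in range(1000):
    # Estimate each group's loss and gradient on a small minibatch.
    losses, grads = [], []
    for x, y in groups:
        idx = rng.integers(len(y), size=16)
        r = w * x[idx] - y[idx]
        losses.append(np.mean(r ** 2))
        grads.append(np.mean(2 * r * x[idx]))
    worst = int(np.argmax(losses))     # subgradient of max_i L_i(w) at the worst group
    w -= lr * grads[worst]

print("learned slope:", round(w, 3))   # settles roughly where the two group losses equalize
```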
arXiv Detail & Related papers (2020-06-08T18:52:40Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
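For intuition about the setup described above, the sketch below runs a FedAvg-style loop in which a random subset of agents performs local SGD updates each round and the server averages them. It uses a static minimizer for simplicity, whereas the paper analyzes tracking of a non-stationary (random walk) minimizer; all constants and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: K agents each hold noisy least-squares data around a common w*.
K, d, n_local = 20, 5, 50
w_star = rng.normal(size=d)
agents = []
for _ in range(K):
    A = rng.normal(size=(n_local, d))
    b = A @ w_star + 0.1 * rng.normal(size=n_local)
    agents.append((A, b))

def local_sgd(w, A, b, lr=0.01, steps=5):
    """A few local SGD steps on one agent's least-squares objective."""
    for _ in range(steps):
        i = rng.integers(len(b))
        g = (A[i] @ w - b[i]) * A[i]
        w = w - lr * g
    return w

w = np.zeros(d)
participation = 0.3                                  # fraction of agents sampled per round
for t in range(300):
    m = max(1, int(participation * K))
    chosen = rng.choice(K, size=m, replace=False)    # random subset of available agents
    # Selected agents update from the current global model; the server averages them.
    updates = [local_sgd(w.copy(), *agents[k]) for k in chosen]
    w = np.mean(updates, axis=0)

print("distance to minimizer:", np.linalg.norm(w - w_star))
```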