Distributed randomized Kaczmarz for the adversarial workers
- URL: http://arxiv.org/abs/2203.00095v1
- Date: Mon, 28 Feb 2022 21:10:43 GMT
- Title: Distributed randomized Kaczmarz for the adversarial workers
- Authors: Xia Li, Longxiu Huang, Deanna Needell
- Abstract summary: We propose an iterative approach that is adversary-tolerant for least-squares problems.
The efficiency of the proposed method is shown in simulations in the presence of adversaries.
- Score: 12.372713404289264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing large-scale distributed methods that are robust to the presence of
adversarial or corrupted workers is an important part of making such methods
practical for real-world problems. Here, we propose an iterative approach that
is adversary-tolerant for least-squares problems. The algorithm utilizes simple
statistics to guarantee convergence and is capable of learning the adversarial
distributions. Additionally, the efficiency of the proposed method is shown in
simulations in the presence of adversaries. The results demonstrate that such
methods can tolerate a range of adversary rates and identify the erroneous
workers with high accuracy.
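For concreteness, below is a minimal sketch of an adversary-tolerant distributed Kaczmarz loop in the spirit of the abstract. It assumes the standard randomized Kaczmarz update x <- x + (b_i - a_i . x) / ||a_i||^2 * a_i on each worker's local rows, and uses a coordinate-wise median as the "simple statistic" at the coordinator; the median aggregation, the uniform row sampling, and all function and parameter names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np


def honest_worker_update(x, A_block, b_block, rng):
    """One randomized Kaczmarz step on this worker's local rows:
    x <- x + (b_i - a_i @ x) / ||a_i||^2 * a_i for a randomly sampled local row i."""
    i = rng.integers(A_block.shape[0])
    a_i, b_i = A_block[i], b_block[i]
    return x + (b_i - a_i @ x) / (a_i @ a_i) * a_i


def adversarial_worker_update(x, rng, scale=10.0):
    """A corrupted reply: an arbitrary perturbation instead of a valid update."""
    return x + scale * rng.standard_normal(x.shape)


def robust_distributed_kaczmarz(A, b, n_workers=10, n_adversaries=3,
                                n_iters=3000, seed=0):
    """Coordinator loop: broadcast the iterate, collect one reply per worker,
    and aggregate with a coordinate-wise median (an illustrative robust statistic)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    blocks = np.array_split(np.arange(m), n_workers)  # partition rows across workers
    x = np.zeros(n)
    for _ in range(n_iters):
        replies = []
        for w, rows in enumerate(blocks):
            if w < n_adversaries:  # a fixed minority of workers is adversarial
                replies.append(adversarial_worker_update(x, rng))
            else:
                replies.append(honest_worker_update(x, A[rows], b[rows], rng))
        x = np.median(np.vstack(replies), axis=0)  # suppress outlying replies
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((300, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true  # consistent least-squares system
    x_hat = robust_distributed_kaczmarz(A, b)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The intuition is that with a minority of adversarial workers, the per-coordinate median stays close to the honest Kaczmarz replies; the statistics actually used, the adversary-identification procedure, and the convergence guarantees are given in the paper itself.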
Related papers
- Sequential Manipulation Against Rank Aggregation: Theory and Algorithm [119.57122943187086]
We leverage an online attack on the vulnerable data collection process.
From the game-theoretic perspective, the confrontation scenario is formulated as a distributionally robust game.
The proposed method manipulates the results of rank aggregation methods in a sequential manner.
arXiv Detail & Related papers (2024-07-02T03:31:21Z) - Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the surrogate-based relaxation commonly used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches and in some cases outperforms state-of-the-art attacks.
arXiv Detail & Related papers (2023-06-19T16:00:48Z) - Randomized Kaczmarz in Adversarial Distributed Setting [15.23454580321625]
We propose an iterative approach that is adversary-tolerant for convex optimization problems.
Our method ensures convergence and is capable of adapting to adversarial distributions.
arXiv Detail & Related papers (2023-02-24T01:26:56Z) - Distributionally Robust Learning with Stable Adversarial Training [34.74504615726101]
Machine learning algorithms trained with empirical risk minimization are vulnerable to distributional shifts.
We propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set.
arXiv Detail & Related papers (2021-06-30T03:05:45Z) - Distributionally Robust Learning in Heterogeneous Contexts [29.60681287631439]
We consider the problem of learning from training data obtained in different contexts, where the test data is subject to distributional shifts.
We develop a distributionally robust method that focuses on excess risks and achieves a more appropriate trade-off between performance and robustness than the conventional and overly conservative minimax approach.
arXiv Detail & Related papers (2021-05-18T14:00:34Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Accounting for Model Uncertainty in Algorithmic Discrimination [16.654676310264705]
We argue that fairness approaches should instead focus only on equalizing errors arising from model uncertainty.
We draw a connection between predictive multiplicity and model uncertainty and argue that the techniques from predictive multiplicity could be used to identify errors made due to model uncertainty.
arXiv Detail & Related papers (2021-05-10T10:34:12Z) - Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z) - Stable Adversarial Learning under Distributional Shifts [46.98655899839784]
Machine learning algorithms trained with empirical risk minimization are vulnerable to distributional shifts.
We propose Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set.
arXiv Detail & Related papers (2020-06-08T08:42:34Z)