Randomized Kaczmarz in Adversarial Distributed Setting
- URL: http://arxiv.org/abs/2302.14615v2
- Date: Wed, 13 Mar 2024 17:11:20 GMT
- Title: Randomized Kaczmarz in Adversarial Distributed Setting
- Authors: Longxiu Huang, Xia Li, Deanna Needell
- Abstract summary: We propose an iterative approach that is adversary-tolerant for convex optimization problems.
Our method ensures convergence and is capable of adapting to adversarial distributions.
- Score: 15.23454580321625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing large-scale distributed methods that are robust to the presence of
adversarial or corrupted workers is an important part of making such methods
practical for real-world problems. In this paper, we propose an iterative
approach that is adversary-tolerant for convex optimization problems. By
leveraging simple statistics, our method ensures convergence and is capable of
adapting to adversarial distributions. Additionally, through simulations we
demonstrate the efficiency of the proposed method for solving convex problems
in the presence of adversaries, its ability to identify adversarial workers
with high accuracy, and its tolerance to varying adversary rates.
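As a rough illustration of the setup the abstract describes, the sketch below runs a distributed randomized Kaczmarz iteration on a linear system in which some workers return corrupted updates, and the coordinator aggregates worker proposals with a simple statistic (a coordinate-wise median here). The aggregation rule, adversary model, and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def kaczmarz_step(x, a_i, b_i):
    """One randomized Kaczmarz projection of x onto the hyperplane a_i @ x = b_i."""
    return x + (b_i - a_i @ x) / (a_i @ a_i) * a_i

def adversary_tolerant_rk(A, b, n_workers=10, n_adversarial=3, iters=2000, seed=0):
    """Hypothetical sketch: each worker proposes a Kaczmarz update for a randomly
    sampled row; the coordinator aggregates the proposals with a coordinate-wise
    median, a simple statistic that down-weights corrupted proposals."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        proposals = []
        for w in range(n_workers):
            i = rng.integers(m)
            x_new = kaczmarz_step(x, A[i], b[i])
            if w < n_adversarial:  # simulated adversaries corrupt their updates
                x_new = x_new + rng.normal(scale=10.0, size=n)
            proposals.append(x_new)
        x = np.median(np.asarray(proposals), axis=0)  # robust aggregation
    return x

# Toy usage: a consistent 200 x 20 least-squares system with 3 of 10 workers corrupted.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 20))
x_true = rng.normal(size=20)
b = A @ x_true
x_hat = adversary_tolerant_rk(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The same skeleton could be extended to flag workers whose proposals repeatedly lie far from the aggregate, which is one natural way to identify adversarial workers as the abstract mentions.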
Related papers
- Enhancing Adversarial Robustness via Uncertainty-Aware Distributional Adversarial Training [43.766504246864045]
We propose a novel uncertainty-aware distributional adversarial training method.
Our approach achieves state-of-the-art adversarial robustness and maintains natural performance.
arXiv Detail & Related papers (2024-11-05T07:26:24Z) - Sequential Manipulation Against Rank Aggregation: Theory and Algorithm [119.57122943187086]
We leverage an online attack on the vulnerable data collection process.
From the game-theoretic perspective, the confrontation scenario is formulated as a distributionally robust game.
The proposed method manipulates the results of rank aggregation methods in a sequential manner.
arXiv Detail & Related papers (2024-07-02T03:31:21Z) - Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off [7.202931445597171]
Adversarial training has been the state-of-the-art approach to defending against adversarial examples (AEs); a generic PGD-style training step is sketched after this list.
It suffers from a robustness-accuracy trade-off, where high robustness is achieved at the cost of clean accuracy.
Our method significantly improves the robustness-accuracy trade-off by learning adversarially invariant representations without sacrificing discriminative ability.
arXiv Detail & Related papers (2024-02-22T15:53:46Z) - Distributed randomized Kaczmarz for the adversarial workers [12.372713404289264]
We propose an iterative approach that is adversary-tolerant for least-squares problems.
The efficiency of the proposed method is shown in simulations in the presence of adversaries.
arXiv Detail & Related papers (2022-02-28T21:10:43Z) - Distributionally Robust Learning with Stable Adversarial Training [34.74504615726101]
Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts.
We propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set.
arXiv Detail & Related papers (2021-06-30T03:05:45Z) - Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [106.34722726264522]
A range of adversarial defense techniques have been proposed to mitigate the interference of adversarial noise.
Pre-processing methods may suffer from the robustness degradation effect.
A potential cause of this negative effect is that adversarial training examples are static and independent of the pre-processing model.
We propose a method called Joint Adversarial Training based Pre-processing (JATP) defense.
arXiv Detail & Related papers (2021-06-10T01:45:32Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z) - Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - Stable Adversarial Learning under Distributional Shifts [46.98655899839784]
Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts.
We propose Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set.
arXiv Detail & Related papers (2020-06-08T08:42:34Z)
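Several of the related entries above rely on adversarial training. As referenced in the invariance-regularization entry, the following is a minimal, generic PGD-based adversarial-training step included for background; it is not the specific method of any paper listed here, and the model, optimizer, and hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent: find a perturbation within an L-inf ball of
    radius eps that (approximately) maximizes the classification loss."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)             # project back into the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial-training update: train on the worst-case perturbed inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```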
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.