Federated Learning Robust to Byzantine Attacks: Achieving Zero
Optimality Gap
- URL: http://arxiv.org/abs/2308.10427v1
- Date: Mon, 21 Aug 2023 02:43:38 GMT
- Title: Federated Learning Robust to Byzantine Attacks: Achieving Zero
Optimality Gap
- Authors: Shiyuan Zuo, Rongfei Fan, Han Hu, Ning Zhang, and Shimin Gong
- Abstract summary: We propose a robust aggregation method for federated learning (FL) that can effectively tackle malicious Byzantine attacks.
At each user, the model parameters are updated through multiple local steps, whose number is adjustable over iterations, and then pushed directly to the aggregation center.
- Score: 21.50616436951285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a robust aggregation method for federated learning
(FL) that can effectively tackle malicious Byzantine attacks. At each user, the
model parameters are first updated through multiple local steps, whose number is
adjustable over iterations, and then pushed directly to the aggregation center.
This decreases the number of interactions between the aggregation center and the
users, allows each user to set its training parameters flexibly, and reduces the
computation burden compared with existing works that need to combine multiple
historical model parameters. At the aggregation center, the geometric median is
leveraged to combine the model parameters received from the users. A rigorous
proof shows that our proposed method achieves a zero optimality gap with linear
convergence, as long as the fraction of Byzantine attackers is below one half.
Numerical results verify the effectiveness of the proposed method.
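The aggregation rule at the center is the geometric median of the users' pushed parameters. The sketch below illustrates this step, computing the geometric median with Weiszfeld's fixed-point iteration (a standard solver; the summary above does not specify which solver the paper uses):

```python
import numpy as np

def geometric_median(points, n_iter=100, eps=1e-8):
    """Approximate the geometric median via Weiszfeld's iteration."""
    median = points.mean(axis=0)  # start from the coordinate-wise mean
    for _ in range(n_iter):
        dists = np.maximum(np.linalg.norm(points - median, axis=1), eps)
        weights = 1.0 / dists
        new_median = weights @ points / weights.sum()
        if np.linalg.norm(new_median - median) < eps:
            break
        median = new_median
    return median

# Each user pushes its parameters after several local update steps; the
# center aggregates them with the geometric median, which tolerates
# Byzantine users as long as their fraction stays below one half.
user_params = np.random.randn(10, 5)   # 10 users, 5-dimensional model
global_params = geometric_median(user_params)
```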
Related papers
- Adversarial Collaborative Filtering for Free [27.949683060138064]
Collaborative Filtering (CF) has been successfully used to help users discover items of interest.
Existing methods suffer from the noisy-data issue, which negatively impacts the quality of recommendations.
We present Sharpness-aware Collaborative Filtering (SharpCF), a simple yet effective method that conducts adversarial training without extra computational cost over the base optimizer.
arXiv Detail & Related papers (2023-08-20T19:25:38Z)
- Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization [14.732408788010313]
ML applications increasingly rely on complex deep learning models and large datasets.
To scale computation and data, these models are inevitably trained in a distributed manner in clusters of nodes, and their updates are aggregated before being applied to the model.
With data augmentation added to these settings, there is a critical need for robust and efficient aggregation systems.
We show that our approach significantly enhances the robustness of state-of-the-art Byzantine resilient aggregators.
arXiv Detail & Related papers (2023-02-12T06:38:30Z)
- A flexible empirical Bayes approach to multiple linear regression and connections with penalized regression [8.663322701649454]
We introduce a new empirical Bayes approach for large-scale multiple linear regression.
Our approach combines two key ideas: the use of flexible "adaptive shrinkage" priors and variational approximations.
We show that the posterior mean from our method solves a penalized regression problem.
arXiv Detail & Related papers (2022-08-23T12:42:57Z)
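The claimed link between posterior means and penalized regression can be seen in the simplest special case: under an i.i.d. Gaussian prior (a stand-in for the paper's adaptive-shrinkage priors), the posterior mean is exactly the ridge-penalized solution. A minimal sketch of that equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50)

sigma2, tau2 = 0.01, 1.0   # noise / prior variances (illustrative values)
lam = sigma2 / tau2        # induced ridge penalty

# Posterior mean under b ~ N(0, tau2*I), y | b ~ N(Xb, sigma2*I):
post_mean = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# Ridge solution of min ||y - Xb||^2 + lam*||b||^2, computed
# independently via an augmented least-squares system:
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(5)])
y_aug = np.concatenate([y, np.zeros(5)])
ridge, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

assert np.allclose(post_mean, ridge)   # the two estimators coincide
```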
- Suppressing Poisoning Attacks on Federated Learning for Medical Imaging [4.433842217026879]
We propose a robust aggregation rule called Distance-based Outlier Suppression (DOS) that is resilient to Byzantine failures.
The proposed method computes the distance between local parameter updates of different clients and obtains an outlier score for each client.
The resulting outlier scores are converted into normalized weights using a softmax function, and a weighted average of the local parameters is used for updating the global model.
arXiv Detail & Related papers (2022-07-15T00:43:34Z)
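A minimal sketch of the DOS pipeline described above: pairwise distances between client updates, an outlier score per client, softmax-normalized weights, and a weighted average. The scoring rule here (mean distance to the other clients) and the sign convention inside the softmax are simplifying assumptions, not taken from the paper:

```python
import numpy as np

def dos_aggregate(updates, temperature=1.0):
    """Simplified distance-based outlier suppression (DOS) sketch.

    updates: (n_clients, dim) array of local parameter updates.
    """
    n = updates.shape[0]
    # Pairwise Euclidean distances between client updates.
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1)
    # Outlier score: mean distance to the other clients (assumed rule).
    scores = dists.sum(axis=1) / (n - 1)
    # Softmax over negated scores, so outliers get exponentially small weight.
    logits = -scores / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ updates  # weighted average updates the global model

global_update = dos_aggregate(np.random.randn(8, 4))
```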
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has recently been proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Distributed Dynamic Safe Screening Algorithms for Sparse Regularization [73.85961005970222]
We propose a new distributed dynamic safe screening (DDSS) method for sparsity-regularized models and apply it on shared-memory and distributed-memory architectures, respectively.
We prove that the proposed method achieves a linear convergence rate with lower overall complexity and can eliminate almost all inactive features in a finite number of iterations, almost surely.
arXiv Detail & Related papers (2022-04-23T02:45:55Z)
- Learning over No-Preferred and Preferred Sequence of Items for Robust Recommendation (Extended Abstract) [69.50145858681951]
We propose a theoretically supported sequential strategy for training a large-scale Recommender System (RS) over implicit feedback.
We present two variants of this strategy where model parameters are updated using either the momentum method or a gradient-based approach.
arXiv Detail & Related papers (2022-02-26T22:29:43Z)
- Personalized Federated Learning via Convex Clustering [72.15857783681658]
We propose a family of algorithms for personalized federated learning with locally convex user costs.
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized.
arXiv Detail & Related papers (2022-02-01T19:25:31Z)
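The penalty on differences between users' models can be written as a convex-clustering (sum-of-norms) objective; the following is a sketch of that kind of formulation, with notation assumed rather than taken from the paper:

```latex
\min_{x_1,\dots,x_n} \; \sum_{i=1}^{n} f_i(x_i)
  \;+\; \lambda \sum_{i<j} \lVert x_i - x_j \rVert_2
```

Here f_i is user i's locally convex cost and x_i its personalized model; lambda = 0 yields fully local training, while increasing lambda fuses users into clusters that share a common model.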
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Learning over no-Preferred and Preferred Sequence of items for Robust Recommendation [66.8722561224499]
We propose a theoretically founded sequential strategy for training large-scale Recommender Systems (RS) over implicit feedback.
We present two variants of this strategy where model parameters are updated using either the momentum method or a gradient-based approach.
arXiv Detail & Related papers (2020-12-12T22:10:15Z)
- An Efficient Framework for Clustered Federated Learning [26.24231986590374]
We address the problem of federated learning (FL) where users are distributed into clusters.
We propose the Iterative Federated Clustering Algorithm (IFCA).
We show that our algorithm is efficient in non-convex problems such as neural networks.
arXiv Detail & Related papers (2020-06-07T08:48:59Z)
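IFCA alternates between estimating each user's cluster (pick the cluster model with the lowest local loss) and updating each cluster's model from its assigned users. A minimal sketch of one round, assuming a squared-error loss and gradient-averaging updates (both are placeholders for whatever loss and update rule a deployment uses):

```python
import numpy as np

def ifca_round(cluster_models, user_data, lr=0.1):
    """One round of Iterative Federated Clustering (simplified sketch).

    cluster_models: (k, dim) array, one current model per cluster.
    user_data: list of (X_i, y_i) tuples, one per user.
    """
    k, dim = cluster_models.shape
    grads = [np.zeros(dim) for _ in range(k)]
    counts = [0] * k
    for X, y in user_data:
        # Cluster estimation: choose the model with the lowest local loss.
        losses = [np.mean((X @ m - y) ** 2) for m in cluster_models]
        j = int(np.argmin(losses))
        # Gradient of the mean squared loss at the chosen cluster model.
        grads[j] += 2 * X.T @ (X @ cluster_models[j] - y) / len(y)
        counts[j] += 1
    # Model update: average gradient step within each cluster.
    new_models = cluster_models.copy()
    for j in range(k):
        if counts[j] > 0:
            new_models[j] -= lr * grads[j] / counts[j]
    return new_models

# Example: 3 clusters, 12 users with small synthetic regression data.
models = ifca_round(
    np.random.randn(3, 4),
    [(np.random.randn(20, 4), np.random.randn(20)) for _ in range(12)],
)
```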