Recursive Euclidean Distance Based Robust Aggregation Technique For Federated Learning
- URL: http://arxiv.org/abs/2303.11337v1
- Date: Mon, 20 Mar 2023 06:48:43 GMT
- Title: Recursive Euclidean Distance Based Robust Aggregation Technique For Federated Learning
- Authors: Charuka Herath, Yogachandran Rahulamathavan, Xiaolan Liu
- Abstract summary: Federated learning is a solution to data availability and privacy challenges in machine learning.
Malicious users aim to sabotage the collaborative learning process by training the local model with malicious data.
We propose a novel robust aggregation approach based on Euclidean distance calculation.
- Score: 4.848016645393023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning has gained popularity as a solution to data availability
and privacy challenges in machine learning. However, the aggregation process of
local model updates to obtain a global model in federated learning is
susceptible to malicious attacks, such as backdoor poisoning, label-flipping,
and membership inference. Malicious users aim to sabotage the collaborative
learning process by training the local model with malicious data. In this
paper, we propose a novel robust aggregation approach based on recursive
Euclidean distance calculation. Our approach measures the distance of the local
models from the previous global model and assigns weights accordingly. Local
models far away from the global model are assigned smaller weights to minimize
the data poisoning effect during aggregation. Our experiments demonstrate that
the proposed algorithm outperforms state-of-the-art algorithms by at least
5% in accuracy while reducing time complexity by less than 55%. Our
contribution is significant as it addresses the critical issue of malicious
attacks in federated learning while improving the accuracy of the global model.
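To make the aggregation rule concrete, here is a minimal NumPy sketch of distance-based weighting: each client's weight shrinks with its Euclidean distance from the previous round's global model, so poisoned updates contribute less. The inverse-distance weighting, the epsilon guard, and the name robust_aggregate are illustrative assumptions; the paper's exact recursive weighting scheme may differ.

    import numpy as np

    def robust_aggregate(local_models, prev_global, eps=1e-12):
        # Hedged sketch: inverse-distance weighting is our assumption;
        # the paper's exact recursive weighting scheme may differ.
        models = [np.asarray(m, dtype=float) for m in local_models]
        # Euclidean distance of each local model from the previous global model.
        dists = np.array([np.linalg.norm(m - prev_global) for m in models])
        # Models far from the previous global model get smaller weights.
        weights = 1.0 / (dists + eps)
        weights /= weights.sum()
        # Weighted average of local models becomes the new global model.
        return sum(w * m for w, m in zip(weights, models))

    # Example round: four benign clients plus one simulated poisoner.
    rng = np.random.default_rng(0)
    prev_global = np.zeros(10)
    updates = [prev_global + 0.1 * rng.standard_normal(10) for _ in range(4)]
    updates.append(prev_global + 5.0 * rng.standard_normal(10))  # outlier
    new_global = robust_aggregate(updates, prev_global)

Applying the function round after round, with prev_global set to the previous round's output, yields the recursion the title refers to.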
Related papers
- Vanishing Variance Problem in Fully Decentralized Neural-Network Systems [0.8212195887472242]
Federated learning and gossip learning are emerging methodologies designed to mitigate data privacy concerns.
Our research introduces a variance-corrected model averaging algorithm.
Our simulation results demonstrate that our approach enables gossip learning to achieve convergence efficiency comparable to that of federated learning.
arXiv Detail & Related papers (2024-04-06T12:49:20Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- BRFL: A Blockchain-based Byzantine-Robust Federated Learning Model [8.19957400564017]
Federated learning, which stores data in distributed nodes and shares only model parameters, has gained significant attention for addressing data privacy concerns.
A challenge arises in federated learning due to the Byzantine Attack Problem, where malicious local models can compromise the global model's performance during aggregation.
This article proposes a Byzantine-Robust Federated Learning (BRFL) model that combines federated learning with blockchain technology.
arXiv Detail & Related papers (2023-10-20T10:21:50Z)
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
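Dimensional collapse means client representations concentrate in a low-dimensional subspace. As a hedged PyTorch sketch of the decorrelation idea (the coefficient beta, the standardization, and the scaling are our assumptions, not taken from the abstract), one can penalize the squared Frobenius norm of the mini-batch correlation matrix:

    import torch

    def decorrelation_penalty(z, beta=0.1, eps=1e-8):
        # z: (batch_size, dim) mini-batch of representations.
        # Standardize each dimension, then penalize the squared Frobenius
        # norm of the correlation matrix (assumed form of the regularizer).
        z = (z - z.mean(dim=0)) / (z.std(dim=0) + eps)
        n, d = z.shape
        corr = (z.T @ z) / n  # (dim, dim) correlation matrix
        return beta * (corr ** 2).sum() / (d ** 2)

    # Added to the usual task loss during each client's local training:
    # loss = task_loss + decorrelation_penalty(representations)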
arXiv Detail & Related papers (2022-10-01T09:04:17Z)
- SphereFed: Hyperspherical Federated Learning [22.81101040608304]
A key challenge is the handling of non-i.i.d. data across multiple clients.
We introduce the Hyperspherical Federated Learning (SphereFed) framework to address the non-i.i.d. issue.
We show that the calibration solution can be computed efficiently and distributedly without direct access to local data.
arXiv Detail & Related papers (2022-07-19T17:13:06Z)
- Global Update Guided Federated Learning [11.731231528534035]
Federated learning protects data privacy and security by exchanging models instead of data.
We propose global-update-guided federated learning (FedGG), which introduces a model-cosine loss into local objective functions; a sketch of this loss follows below.
Numerical simulations show that FedGG significantly improves model convergence accuracy and speed.
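One plausible reading of the model-cosine loss (an assumption on our part, since the summary gives no formula) penalizes misalignment between the local update direction and the latest global update direction:

    import torch
    import torch.nn.functional as F

    def flatten(params):
        # Concatenate all parameter tensors into a single vector.
        return torch.cat([p.reshape(-1) for p in params])

    def model_cosine_loss(local_params, global_params, prev_global_params, mu=0.1):
        # Penalize misalignment between the local update direction and the
        # most recent global update direction (assumed form; mu is a
        # hypothetical balancing coefficient).
        local_update = flatten(local_params) - flatten(prev_global_params)
        global_update = flatten(global_params) - flatten(prev_global_params)
        cos = F.cosine_similarity(local_update, global_update, dim=0)
        return mu * (1.0 - cos)

    # Local objective on each client (assumed composition):
    # loss = task_loss + model_cosine_loss(...)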
arXiv Detail & Related papers (2022-04-08T08:36:26Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg) which aims to control weights communication and aggregation augmented with a tailored learning algorithm to personalize the resulting models at each client.
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization; a plausible form is sketched below.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
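A plausible form of the GTV objective (the notation below is our assumption; the summary gives no formula) couples per-client losses L_i over an empirical graph with vertex set V, edge set E, and edge weights A_{ij}:

    \min_{w_1,\dots,w_n} \; \sum_{i \in \mathcal{V}} L_i(w_i)
        \;+\; \lambda \sum_{(i,j) \in \mathcal{E}} A_{ij} \, \lVert w_i - w_j \rVert_2

The second term is the generalized total variation of the local models over the graph: a large lambda pulls connected clients toward a common model, while a small lambda keeps them personalized.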
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.