Federated Behavioural Planes: Explaining the Evolution of Client Behaviour in Federated Learning
- URL: http://arxiv.org/abs/2405.15632v2
- Date: Sun, 13 Oct 2024 16:40:29 GMT
- Title: Federated Behavioural Planes: Explaining the Evolution of Client Behaviour in Federated Learning
- Authors: Dario Fenoglio, Gabriele Dominici, Pietro Barbiero, Alberto Tonda, Martin Gjoreski, Marc Langheinrich
- Abstract summary: We introduce Federated Behavioural Planes (FBPs), a novel method to analyse, visualise, and explain the dynamics of FL systems.
Our experiments demonstrate that FBPs provide informative trajectories describing the evolving states of clients.
We propose a robust aggregation technique named Federated Behavioural Shields to detect malicious or noisy client models.
- Score: 6.64590374742412
- Abstract: Federated Learning (FL), a privacy-aware approach in distributed deep learning environments, enables many clients to collaboratively train a model without sharing sensitive data, thereby reducing privacy risks. However, enabling human trust and control over FL systems requires understanding the evolving behaviour of clients, whether beneficial or detrimental for the training, which still represents a key challenge in the current literature. To address this challenge, we introduce Federated Behavioural Planes (FBPs), a novel method to analyse, visualise, and explain the dynamics of FL systems, showing how clients behave under two different lenses: predictive performance (error behavioural space) and decision-making processes (counterfactual behavioural space). Our experiments demonstrate that FBPs provide informative trajectories describing the evolving states of clients and their contributions to the global model, thereby enabling the identification of clusters of clients with similar behaviours. Leveraging the patterns identified by FBPs, we propose a robust aggregation technique named Federated Behavioural Shields to detect malicious or noisy client models, thereby enhancing security and surpassing the efficacy of existing state-of-the-art FL defense mechanisms. Our code is publicly available on GitHub.
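As a rough illustration of the idea behind an error behavioural space and behaviour-based robust aggregation, the sketch below tracks per-client error vectors on a shared probe batch and down-weights clients whose behaviour lies far from the group median. All names, the probe-batch setup, and the median-distance weighting rule are illustrative assumptions, not the paper's Federated Behavioural Shields implementation.

```python
import numpy as np

def error_vector(preds, labels, num_classes):
    """Per-class error rates of one client's model on a shared probe batch:
    one illustrative coordinate system for an 'error behavioural space'."""
    return np.array([
        np.mean(preds[labels == c] != c) if np.any(labels == c) else 0.0
        for c in range(num_classes)
    ])

def behavioural_weights(client_vectors, temperature=1.0):
    """Down-weight clients whose behaviour vector sits far from the median of
    all clients (an assumed rule, not the paper's exact aggregation)."""
    vectors = np.stack(client_vectors)
    centre = np.median(vectors, axis=0)
    dists = np.linalg.norm(vectors - centre, axis=1)
    weights = np.exp(-dists / temperature)
    return weights / weights.sum()

def aggregate(client_updates, weights):
    """FedAvg-style weighted average of flattened client parameter updates."""
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy usage: three well-behaved clients and one noisy client on a 10-sample probe batch.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=10)
honest_preds = [labels.copy() for _ in range(3)]   # perfect predictions
noisy_preds = rng.integers(0, 3, size=10)          # essentially random predictions
vectors = [error_vector(p, labels, 3) for p in honest_preds + [noisy_preds]]
print(behavioural_weights(vectors))                 # the noisy client receives a smaller weight
```

Tracking such per-client vectors round by round would yield the kind of trajectories that FBPs visualise; the counterfactual behavioural space described in the abstract would replace the error vector with a descriptor of the model's decision-making process.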
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z) - Reinforcement Learning as a Catalyst for Robust and Fair Federated Learning: Deciphering the Dynamics of Client Contributions [6.318638597489423]
Reinforcement Federated Learning (RFL) is a novel framework that leverages deep reinforcement learning to adaptively optimize client contributions during aggregation.
In terms of robustness, RFL outperforms state-of-the-art methods, while maintaining comparable levels of fairness.
arXiv Detail & Related papers (2024-02-08T10:22:12Z) - Addressing Membership Inference Attack in Federated Learning with Model Compression [8.842172558292027]
Federated Learning (FL) has been proposed as a privacy-preserving solution for machine learning.
Recent works have reported that FL can leak private client data through membership inference attacks.
We show that the effectiveness of these attacks correlates negatively with the size of the clients' datasets and with model complexity.
arXiv Detail & Related papers (2023-11-29T15:54:15Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Balancing Privacy Protection and Interpretability in Federated Learning [8.759803233734624]
Federated learning (FL) aims to collaboratively train a global model in a distributed manner by sharing model parameters from local clients with a central server.
Recent studies have illustrated that FL still suffers from information leakage as adversaries try to recover the training data by analyzing shared parameters from local clients.
We propose a simple yet effective adaptive differential privacy (ADP) mechanism that selectively adds noisy perturbations to the gradients of client models in FL.
arXiv Detail & Related papers (2023-02-16T02:58:22Z) - FedPerm: Private and Robust Federated Learning by Parameter Permutation [2.406359246841227]
Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model.
Client data privacy is paramount in FL. At the same time, the model must be protected from poisoning attacks by adversarial clients.
We present FedPerm, a new FL algorithm that addresses both these problems by combining a novel intra-model parameter shuffling technique that amplifies data privacy, with Private Information Retrieval (PIR) based techniques that permit cryptographic aggregation of clients' model updates.
arXiv Detail & Related papers (2022-08-16T19:40:28Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates (a generic clipped DP-FedAvg round is sketched after this list).
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to degradation or even collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
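For the "Understanding Clipping for Federated Learning" entry above, the following is a minimal, generic sketch of one clipped DP-FedAvg aggregation round: each client update is clipped to a fixed L2 norm, averaged, and perturbed with Gaussian noise calibrated to the clipping norm. The constants, noise calibration, and function names are assumptions for illustration, not taken from that paper.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Client-level clipping: rescale an update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One generic clipped DP-FedAvg aggregation step: clip, average, add Gaussian
    noise whose scale is tied to the clipping norm (illustrative calibration)."""
    rng = rng or np.random.default_rng()
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean_update = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / len(client_updates)
    return mean_update + rng.normal(0.0, noise_std, size=mean_update.shape)

# Toy usage: three small updates and one oversized update; clipping bounds its influence.
rng = np.random.default_rng(1)
updates = [rng.normal(0, 0.1, size=5) for _ in range(3)] + [rng.normal(0, 10.0, size=5)]
print(dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=0.5, rng=rng))
```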
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of any information presented and is not responsible for any consequences arising from its use.