ABC-FL: Anomalous and Benign client Classification in Federated Learning
- URL: http://arxiv.org/abs/2108.04551v2
- Date: Wed, 11 Aug 2021 03:53:29 GMT
- Title: ABC-FL: Anomalous and Benign client Classification in Federated Learning
- Authors: Hyejun Jeong, Joonyong Hwang, Tai Myung Chung
- Abstract summary: Federated Learning is a distributed machine learning framework designed for data privacy preservation.
It inherits the vulnerabilities and susceptibilities of deep learning techniques.
It is difficult to correctly identify malicious clients due to non-Independently and/or Identically Distributed (non-IID) data.
We propose a method that detects and classifies anomalous clients from benign clients when benign ones have non-IID data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated Learning is a distributed machine learning framework designed for
data privacy preservation, i.e., local data remain private throughout the entire
training and testing procedure. Federated Learning is gaining popularity
because it allows one to use machine learning techniques while preserving
privacy. However, it inherits the vulnerabilities and susceptibilities of
deep learning techniques. For instance, Federated Learning is particularly
vulnerable to data poisoning attacks that may deteriorate its performance and
integrity due to its distributed nature and the inaccessibility of raw data. In
addition, it is extremely difficult to correctly identify malicious clients due
to non-Independently and/or Identically Distributed (non-IID) data.
Real-world data can be complex and diverse, making them hard to distinguish
from malicious data without direct access to the raw data. Prior research
has focused on detecting malicious clients while treating only the clients
having IID data as benign. In this study, we propose a method that detects and
classifies anomalous clients from benign clients when benign ones have non-IID
data. Our proposed method leverages feature dimension reduction, dynamic
clustering, and cosine similarity-based clipping. The experimental results
validate that our proposed method not only classifies the malicious clients
but also mitigates their negative influence on the entire procedure. Our
findings may be used in future studies to effectively eliminate anomalous
clients when building a model with diverse data.
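The abstract names the three building blocks but not how they fit together. The following is a minimal sketch of one plausible reading, not the authors' implementation: PCA stands in for the feature dimension reduction, a fixed two-way KMeans split stands in for the dynamic clustering, and down-weighting by cosine similarity stands in for the clipping step. All function names and thresholds here are assumptions.
```python
# Hypothetical sketch of an ABC-FL-style aggregation round (not the authors' code).
# Assumptions: PCA for dimension reduction, KMeans for the clustering step, and a
# simple cosine-similarity-based down-weighting as the "clipping" rule.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def aggregate_round(client_updates, global_weights, sim_threshold=0.0):
    """client_updates: list of 1-D numpy arrays (flattened model updates)."""
    updates = np.stack(client_updates)

    # 1. Feature dimension reduction: project high-dimensional updates into a
    #    low-dimensional space where anomalous clients separate more cleanly.
    reduced = PCA(n_components=min(3, len(updates) - 1)).fit_transform(updates)

    # 2. Clustering: split clients into two groups and treat the larger cluster
    #    as benign (a simplification of the paper's "dynamic clustering").
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
    benign_label = np.bincount(labels).argmax()
    benign = updates[labels == benign_label]

    # 3. Cosine similarity-based clipping: scale each surviving update by its
    #    cosine similarity to the mean benign direction, flooring at a threshold.
    mean_dir = benign.mean(axis=0)
    mean_dir = mean_dir / (np.linalg.norm(mean_dir) + 1e-12)
    clipped = []
    for u in benign:
        cos = float(u @ mean_dir) / (np.linalg.norm(u) + 1e-12)
        clipped.append(u * max(cos, sim_threshold))

    # FedAvg over the surviving, down-weighted updates.
    return global_weights + np.mean(clipped, axis=0)
```
A real deployment would presumably choose the number of clusters per round (hence "dynamic" clustering) and tune sim_threshold; the sketch only shows where each of the three components sits within an aggregation round.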
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing the training data.
The paper proposes a novel federated face forgery detection method with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Federated Causal Discovery from Heterogeneous Data [70.31070224690399]
We propose a novel FCD method that attempts to accommodate arbitrary causal models and heterogeneous data.
The method constructs summary statistics as a proxy for the raw data to protect data privacy.
We conduct extensive experiments on synthetic and real datasets to show the efficacy of our method.
arXiv Detail & Related papers (2024-02-20T18:53:53Z)
- DCFL: Non-IID awareness Data Condensation aided Federated Learning [0.8158530638728501]
Federated learning is a decentralized learning paradigm wherein a central server trains a global model iteratively by utilizing clients who each hold a certain amount of private data.
The challenge lies in the fact that client-side private data may not be identically and independently distributed.
We propose DCFL, which divides clients into groups using the Centered Kernel Alignment (CKA) method and then applies dataset condensation methods with non-IID awareness to the clients (a hedged CKA sketch appears after this list).
arXiv Detail & Related papers (2023-12-21T13:04:24Z)
- FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks [1.689369173057502]
Federated learning has created a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients can corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probabilities of a client's model weights (a hedged sketch of one reading of this idea appears after this list).
arXiv Detail & Related papers (2023-12-04T21:37:50Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Better Generative Replay for Continual Federated Learning [20.57194599280318]
Federated learning is a technique that enables a centralized server to learn from distributed clients via communications.
In this paper, we introduce the problem of continual federated learning, where clients incrementally learn new tasks and history data cannot be stored.
We propose our FedCIL model with two simple but effective solutions: model consolidation and consistency enforcement.
arXiv Detail & Related papers (2023-02-25T06:26:56Z)
- FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated Learning is designed to address privacy concerns when training models.
This new distributed paradigm safeguards data privacy but changes the attack surface, since the server cannot access local datasets.
arXiv Detail & Related papers (2022-12-05T01:52:32Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Auto-weighted Robust Federated Learning with Corrupted Data Sources [7.475348174281237]
Federated learning provides a communication-efficient and privacy-preserving training process.
Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions.
We propose Auto-weighted Robust Federated Learning (arfl) to provide robustness against corrupted data sources.
arXiv Detail & Related papers (2021-01-14T21:54:55Z)
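For the DCFL entry above: its grouping step relies on Centered Kernel Alignment (CKA). Below is a minimal sketch of linear CKA between two clients' representation matrices; the helper name linear_cka and the threshold-based grouping mentioned in the comment are assumptions for illustration, not DCFL's actual pipeline.
```python
# Hypothetical sketch of linear Centered Kernel Alignment (CKA), the
# similarity measure the DCFL summary says is used to group clients.
import numpy as np

def linear_cka(X, Y):
    """X: (n, d1) and Y: (n, d2) representations of the same n inputs."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    alignment = np.linalg.norm(Y.T @ X, "fro") ** 2   # cross-similarity term
    norm_x = np.linalg.norm(X.T @ X, "fro")           # self-similarity of X
    norm_y = np.linalg.norm(Y.T @ Y, "fro")           # self-similarity of Y
    return alignment / (norm_x * norm_y + 1e-12)

# Assumed usage: compute pairwise CKA between clients' activations on shared
# probe inputs, then group clients whose similarity exceeds a chosen threshold.
```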
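For the FedBayes entry above: the summary only says the method weights clients by the probabilities of their model weights. One hedged reading, sketched under a per-coordinate Gaussian assumption that is ours rather than the paper's, is to score each client's update by its likelihood under a distribution fit across all clients and then aggregate with normalized scores.
```python
# Hypothetical probability-weighted aggregation in the spirit of the FedBayes
# summary. The per-coordinate Gaussian model and the softmax-style weighting
# are assumptions, not the paper's exact method.
import numpy as np

def probability_weighted_average(client_updates):
    updates = np.stack(client_updates)            # shape: (n_clients, dim)
    mu = updates.mean(axis=0)
    sigma = updates.std(axis=0) + 1e-6            # per-coordinate spread

    # Mean log-likelihood of each client's update under N(mu, sigma^2);
    # outliers (potentially malicious clients) receive low scores.
    log_lik = -0.5 * (((updates - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))
    scores = log_lik.mean(axis=1)

    weights = np.exp(scores - scores.max())       # numerically stable weighting
    weights /= weights.sum()
    return (weights[:, None] * updates).sum(axis=0)
```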
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.