Security-Preserving Federated Learning via Byzantine-Sensitive Triplet
Distance
- URL: http://arxiv.org/abs/2210.16519v1
- Date: Sat, 29 Oct 2022 07:20:02 GMT
- Title: Security-Preserving Federated Learning via Byzantine-Sensitive Triplet
Distance
- Authors: Youngjoon Lee, Sangwoo Park, Joonhyuk Kang
- Abstract summary: Federated learning (FL) is generally vulnerable to Byzantine attacks from adversarial edge devices.
We propose an effective Byzantine-robust FL framework, namely dummy contrastive aggregation.
We show improved performance compared to state-of-the-art Byzantine-resilient aggregation methods.
- Score: 10.658882342481542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While being an effective framework for learning a shared model across multiple
edge devices, federated learning (FL) is generally vulnerable to Byzantine
attacks from adversarial edge devices. Existing works on FL mitigate such
compromised devices by aggregating only a subset of the local models at the
server side, yet they still fail to reliably exclude the outliers due to an
imprecise scoring rule. In this paper, we propose an effective Byzantine-robust
FL framework, namely dummy contrastive aggregation, by defining a novel scoring
function that sensitively discriminates whether a model has been poisoned or
not. The key idea is to extract essential information from every local model,
along with the previous global model, to define a distance measure in a manner
similar to the triplet loss. Numerical results validate the advantage of the
proposed approach, showing improved performance over state-of-the-art
Byzantine-resilient aggregation methods, e.g., Krum, Trimmed-mean, and Fang.
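The abstract describes the scoring idea only at a high level, so the following is a hedged illustration rather than the paper's actual dummy contrastive aggregation: it scores each flattened local model by its distance to the previous global model (the anchor) plus its distance to the coordinate-wise median of the other local models, then averages the lowest-scoring models. The function names, the particular combination of distances, and the `n_keep` parameter are all assumptions for illustration.

```python
import numpy as np

def triplet_style_scores(local_models, prev_global):
    """Score each flattened local model (illustrative, not the paper's exact
    rule): distance to the previous global model (anchor) plus distance to
    the coordinate-wise median of the remaining local models. Lower means
    more consistent with both the anchor and the peer consensus."""
    W = np.stack(local_models)            # shape (n_clients, n_params)
    scores = []
    for i in range(len(W)):
        others = np.delete(W, i, axis=0)
        peer_ref = np.median(others, axis=0)
        d_anchor = np.linalg.norm(W[i] - prev_global)
        d_peers = np.linalg.norm(W[i] - peer_ref)
        scores.append(d_anchor + d_peers)
    return np.array(scores)

def robust_aggregate(local_models, prev_global, n_keep):
    """Average only the n_keep lowest-scoring (least suspicious) models."""
    scores = triplet_style_scores(local_models, prev_global)
    keep = np.argsort(scores)[:n_keep]
    return np.stack(local_models)[keep].mean(axis=0)
```

A poisoned update that is far from both the previous global model and the peer consensus receives a large score and is dropped before averaging, which is the qualitative behavior the paper's scoring function is designed to sharpen.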
Related papers
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD).
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
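The summary above names the adversarial training (AT) framework without detail, so here is a hedged, minimal sketch of one AT step for a logistic-regression model: craft FGSM input perturbations, then take a gradient step on the perturbed batch. The function names and hyperparameters (`eps`, `lr`) are illustrative assumptions; the paper's FL-specific, logits-calibrated procedure is not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_adv_train_step(w, X, y, eps=0.1, lr=0.1):
    """One adversarial-training step for logistic regression (sketch):
    perturb inputs in the direction of the loss gradient (FGSM), then
    update the weights on the perturbed batch."""
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)           # d(logistic loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)     # FGSM perturbation
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    return w - lr * grad_w
```

In the FL setting described above, each client would run such steps locally before sending its update to the server.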
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning [0.0]
Federated Learning (FL) represents a promising approach to typical privacy concerns associated with centralized Machine Learning (ML) deployments.
Despite its well-known advantages, FL is vulnerable to security attacks such as Byzantine behaviors and poisoning attacks.
Our proposed approach was tested against various model poisoning attacks, demonstrating superior performance over state-of-the-art aggregation methods.
arXiv Detail & Related papers (2024-02-15T16:42:04Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
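The summary mentions only that FreqFed moves model updates into the frequency domain, so the following is a hedged sketch in that spirit, not FreqFed's actual pipeline: project each flattened update onto its low-frequency DFT magnitudes, keep the updates closest to the median in that space, and average them. The function name, the use of `np.fft.rfft`, and the `n_low` cutoff are illustrative assumptions.

```python
import numpy as np

def freq_filter_aggregate(updates, n_low=8):
    """Frequency-domain filtering sketch: compare clients by the magnitudes
    of their low-frequency DFT components and average the majority that
    sits closest to the median in that feature space."""
    U = np.stack(updates)                      # (n_clients, n_params)
    spec = np.fft.rfft(U, axis=1)[:, :n_low]   # low-frequency coefficients
    feats = np.abs(spec)                       # magnitude features
    center = np.median(feats, axis=0)
    dists = np.linalg.norm(feats - center, axis=1)
    keep = dists <= np.median(dists)           # closest-to-consensus subset
    return U[keep].mean(axis=0)
```

The intuition, shared with FreqFed, is that a poisoned update distorts the dominant (low-frequency) structure of the parameter vector and therefore stands out in the transformed space.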
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- STDLens: Model Hijacking-Resilient Federated Learning for Object Detection [13.895922908738507]
Federated Learning (FL) has been gaining popularity as a collaborative learning framework to train deep learning-based object detection models over a distributed population of clients.
Despite its advantages, FL is vulnerable to model hijacking.
This paper introduces STDLens, a principled approach to safeguarding FL against such attacks.
arXiv Detail & Related papers (2023-03-21T00:15:53Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender, a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Byzantine-Robust Federated Learning via Credibility Assessment on Non-IID Data [1.4146420810689422]
Federated learning is a novel framework that enables resource-constrained edge devices to jointly learn a model.
Standard federated learning is vulnerable to Byzantine attacks.
We propose a Byzantine-robust framework for federated learning via credibility assessment on non-IID data.
arXiv Detail & Related papers (2021-09-06T12:18:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.