Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning
- URL: http://arxiv.org/abs/2303.00250v1
- Date: Wed, 1 Mar 2023 06:16:15 GMT
- Title: Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning
- Authors: Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu,
Bo Han
- Abstract summary: The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT), which assigns client-wise slack during aggregation.
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
- Score: 91.88122934924435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy and security concerns in real-world applications have led to the
development of adversarially robust federated models. However, the
straightforward combination of adversarial training and federated learning in
one framework can lead to undesired robustness deterioration. We find that the
reason behind this phenomenon is that the generated adversarial data can
exacerbate the data heterogeneity among local clients, making the wrapped
federated learning perform poorly. To deal with this
problem, we propose a novel framework called Slack Federated Adversarial
Training (SFAT), assigning the client-wise slack during aggregation to combat
the intensified heterogeneity. Theoretically, we analyze the convergence of the
proposed method to properly relax the objective when combining federated
learning and adversarial training. Experimentally, we verify the rationality
and effectiveness of SFAT on various benchmarked and real-world datasets with
different adversarial training and federated optimization methods. The code is
publicly available at https://github.com/ZFancy/SFAT.
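As a rough illustration of the pipeline the abstract describes, the Python sketch below pairs PGD-based local adversarial training with a slack-weighted aggregation step. The weighting rule shown (giving slightly larger weight to clients that report lower adversarial loss, controlled by a hypothetical `slack` parameter) is an illustrative assumption, not necessarily the exact SFAT mechanism; the authors' implementation is in the linked repository.

```python
# Minimal sketch of federated adversarial training with a client-wise
# "slack" aggregation. The weighting rule is an illustrative assumption,
# not taken from the SFAT paper; see https://github.com/ZFancy/SFAT
# for the authors' implementation.
import copy
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate adversarial examples with standard PGD."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project to eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

def local_adversarial_update(global_model, loader, lr=0.01, epochs=1):
    """One client's local adversarial training round.
    Returns the updated weights and the mean adversarial loss."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    total, n = 0.0, 0
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)
            loss = F.cross_entropy(model(x_adv), y)
            opt.zero_grad(); loss.backward(); opt.step()
            total += loss.item() * len(y); n += len(y)
    return model.state_dict(), total / max(n, 1)

def slack_aggregate(client_states, client_losses, slack=0.1):
    """Illustrative slack-weighted averaging: clients with lower adversarial
    loss receive slightly larger weight, relaxing plain FedAvg."""
    inv = torch.tensor([1.0 / (l + 1e-8) for l in client_losses])
    weights = (1 - slack) / len(client_states) + slack * inv / inv.sum()
    weights = weights / weights.sum()
    return {k: sum(w * s[k].float() for w, s in zip(weights, client_states))
            for k in client_states[0]}
```

In a full round, the server would broadcast the global weights, collect (state, loss) pairs from the sampled clients, and load the output of `slack_aggregate` back into the global model before the next round.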
Related papers
- Logit Calibration and Feature Contrast for Robust Federated Learning on Non-IID Data [45.11652096723593]
Federated learning (FL) is a privacy-preserving distributed framework for collaborative model training on devices in edge networks.
This paper proposes FatCC, which incorporates local logit Calibration and global feature Contrast into the vanilla federated adversarial training process from both logit and feature perspectives.
arXiv Detail & Related papers (2024-04-10T06:35:25Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Delving into the Adversarial Robustness of Federated Learning [41.409961662754405]
In Federated Learning (FL), models are as vulnerable to adversarial examples as centrally trained models.
We propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT) to improve both accuracy and robustness of FL systems.
arXiv Detail & Related papers (2023-02-19T04:54:25Z)
- Ensemble Federated Adversarial Training with Non-IID data [1.5878082907673585]
Adversarial samples can confuse and mislead client models for malicious purposes.
We introduce a novel Ensemble Federated Adversarial Training Method, termed as EFAT.
Our proposed method achieves promising results compared with solely combining federated learning with adversarial approaches.
arXiv Detail & Related papers (2021-10-26T03:55:20Z)
- Federated Self-Supervised Contrastive Learning via Ensemble Similarity Distillation [42.05438626702343]
This paper investigates the feasibility of learning a good representation space with unlabeled client data in a federated scenario.
We propose a novel self-supervised contrastive learning framework that supports architecture-agnostic local training and communication-efficient global aggregation.
arXiv Detail & Related papers (2021-09-29T02:13:22Z)
- RobustFed: A Truth Inference Approach for Robust Federated Learning [9.316565110931743]
Federated learning is a framework that enables clients to collaboratively train a global model under a central server's orchestration.
The aggregation step in federated learning is vulnerable to adversarial attacks as the central server cannot manage clients' behavior.
We propose a novel robust aggregation algorithm inspired by the truth inference methods in crowdsourcing.
arXiv Detail & Related papers (2021-07-18T09:34:57Z)
- Guided Interpolation for Adversarial Training [73.91493448651306]
As training progresses, the training data becomes less and less attackable, undermining the robustness enhancement.
We propose the guided interpolation framework (GIF), which employs the previous epoch's meta information to guide the generation of the data's adversarial variants.
Compared with vanilla mixup, GIF provides a higher ratio of attackable data, which is beneficial to robustness enhancement.
arXiv Detail & Related papers (2021-02-15T03:55:08Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides; a brief sketch of this joint-prediction idea appears after the list.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
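The Federated Residual Learning entry above describes clients that keep a personalized local model and make predictions jointly with a server-side shared model. The additive, residual-style composition below is only an assumed reading of that description (a minimal sketch, not the paper's actual formulation): the shared model provides a base prediction and each client's local model corrects it.

```python
import torch
import torch.nn as nn

class ResidualFederatedPredictor(nn.Module):
    """Hypothetical joint predictor: a shared (server-side) model plus a
    client-specific local model that learns the residual. Illustrative
    reading of federated residual learning, not the paper's code."""

    def __init__(self, shared_model: nn.Module, local_model: nn.Module):
        super().__init__()
        self.shared = shared_model   # trained jointly across clients
        self.local = local_model     # personalized, kept on the client

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Joint prediction: shared base prediction + local correction.
        return self.shared(x) + self.local(x)

# Usage sketch: the client trains only `local` (and optionally contributes
# updates to `shared`), so the shared model can stay small while the local
# residual captures client-specific structure.
shared = nn.Linear(16, 4)
local = nn.Linear(16, 4)
predictor = ResidualFederatedPredictor(shared, local)
logits = predictor(torch.randn(2, 16))
```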