FADE: Enabling Federated Adversarial Training on Heterogeneous
Resource-Constrained Edge Devices
- URL: http://arxiv.org/abs/2209.03839v2
- Date: Wed, 26 Apr 2023 00:46:58 GMT
- Title: FADE: Enabling Federated Adversarial Training on Heterogeneous
Resource-Constrained Edge Devices
- Authors: Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding,
Amin Hassanzadeh, Hai Li, Yiran Chen
- Abstract summary: We propose a new framework named Federated Adversarial Decoupled Learning (FADE) to enable adversarial training (AT) on resource-constrained edge devices.
FADE differentially decouples the entire model into small modules to fit into the resource budget of each device.
We show that FADE can significantly reduce the consumption of memory and computing power while maintaining accuracy and robustness.
- Score: 36.01066121818574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated adversarial training can effectively equip privacy-preserving
federated learning systems with adversarial robustness. However, the
high demand for memory capacity and computing power makes large-scale federated
adversarial training infeasible on resource-constrained edge devices. Few
previous studies in federated adversarial training have tried to tackle both
memory and computational constraints simultaneously. In this paper, we propose
a new framework named Federated Adversarial Decoupled Learning (FADE) to enable
AT on heterogeneous resource-constrained edge devices. FADE differentially
decouples the entire model into small modules to fit into the resource budget
of each device, and each device only needs to perform AT on a single module in
each communication round. We also propose an auxiliary weight decay to
alleviate objective inconsistency and achieve better accuracy-robustness
balance in FADE. FADE offers theoretical guarantees for convergence and
adversarial robustness, and our experimental results show that FADE can
significantly reduce the consumption of memory and computing power while
maintaining accuracy and robustness.
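
To make the decoupling concrete, here is a minimal PyTorch sketch of what one FADE-style local round could look like. Everything in it is an illustrative assumption rather than the paper's implementation: the model is assumed to be pre-split into an ordered list `modules`, each module is assumed to carry a small auxiliary classifier in `heads` so it can train without backpropagating through the later modules, adversarial examples are crafted with standard PGD, and plain SGD weight decay stands in for the auxiliary weight decay described in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(net, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD in an L-inf ball around x. For raw images you would also
    clamp to the valid pixel range; here x may be intermediate features."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(net(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back
    return x_adv.detach()

def local_decoupled_at(modules, heads, k, loader, lr=0.01, aux_decay=5e-4):
    """One hypothetical local round on a device assigned module k: run the
    frozen prefix forward-only, then adversarially train module k plus its
    small auxiliary head. Only module k's weights are uploaded afterwards."""
    prefix = nn.Sequential(*modules[:k]).eval()           # frozen earlier modules
    block = nn.Sequential(modules[k], heads[k]).train()   # trainable module + head
    opt = torch.optim.SGD(block.parameters(), lr=lr, weight_decay=aux_decay)
    for x, y in loader:
        with torch.no_grad():
            h = prefix(x)                 # cheap forward pass, no graph stored
        h_adv = pgd_attack(block, h, y)   # craft the attack in feature space
        opt.zero_grad()
        F.cross_entropy(block(h_adv), y).backward()
        opt.step()
    return modules[k].state_dict()        # sent to the server for aggregation
```

Because gradients never flow through the frozen prefix, peak memory scales with the assigned module rather than the whole network, which is the property that lets each heterogeneous device pick a module that fits its own budget.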
Related papers
- FedProphet: Memory-Efficient Federated Adversarial Training via Theoretic-Robustness and Low-Inconsistency Cascade Learning [20.075335314952643]
Federated Learning (FL) provides a strong privacy guarantee by enabling local training across edge devices without training data sharing.
FedProphet is a novel federated adversarial training (FAT) framework that can achieve memory efficiency, adversarial robustness, and objective consistency simultaneously.
arXiv Detail & Related papers (2024-09-12T19:39:14Z)
- Resource Management for Low-latency Cooperative Fine-tuning of Foundation Models at the Network Edge [35.40849522296486]
Large-scale foundation models (FoMos) can exhibit human-like intelligence.
FoMos need to be adapted to specialized downstream tasks through fine-tuning techniques.
We advocate multi-device cooperation within the device-edge cooperative fine-tuning paradigm.
arXiv Detail & Related papers (2024-07-13T12:47:14Z)
- Filling the Missing: Exploring Generative AI for Enhanced Federated Learning over Heterogeneous Mobile Edge Devices [72.61177465035031]
We propose a generative AI-empowered federated learning framework that addresses these challenges by leveraging the idea of FIlling the MIssing (FIMI) portion of local data.
Experimental results demonstrate that FIMI can save up to 50% of the device-side energy needed to achieve the target global test accuracy.
arXiv Detail & Related papers (2023-10-21T12:07:04Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, which is pruned and shared with all devices to learn data representations, and a personalized part that is fine-tuned for a specific device (a minimal sketch of this split appears after the list).
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices [20.52519915112099]
We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates.
Experimental results indicate that our learning framework can reduce training latency and energy consumption by up to 1.9x while reaching a reasonable global test accuracy.
arXiv Detail & Related papers (2023-01-08T15:25:55Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregation server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training (a simple sampling sketch appears after the list).
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution to privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems (see the quantization sketch after the list).
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
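
As a companion to the Adaptive Model Pruning and Personalization entry above, the sketch below shows one way the global/personalized split could be wired up in PyTorch. The two-layer model, the `magnitude_prune` helper, and the 50% sparsity are hypothetical stand-ins; the paper's scheme adapts the pruning to wireless conditions rather than using a fixed magnitude threshold.

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """A shared, prunable representation part plus a device-local head."""
    def __init__(self):
        super().__init__()
        self.global_part = nn.Sequential(          # aggregated across devices
            nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.personal_part = nn.Linear(256, 10)    # fine-tuned locally, never uploaded

    def forward(self, x):
        return self.personal_part(self.global_part(x))

def magnitude_prune(module, sparsity=0.5):
    """Zero the smallest-magnitude weights so the shared part fits a device's
    budget -- plain unstructured pruning as a stand-in for the adaptive scheme."""
    with torch.no_grad():
        for p in module.parameters():
            if p.dim() > 1:                        # prune weight matrices only
                k = max(1, int(p.numel() * sparsity))
                thresh = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > thresh).float())

# Per round: the server broadcasts global_part, each device prunes it to its
# own budget, trains both parts on local data, and uploads only global_part.
```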
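
The Fast-Convergent Federated Learning entry mentions intelligent device sampling; as a toy illustration only, the snippet below picks clients with probability proportional to their last reported gradient norm. FOLB's actual selection criterion is more involved, and the with-replacement sampling here is purely for brevity.

```python
import random

def sample_clients(grad_norms, m):
    """Sample m clients, favoring those whose last update had a large gradient
    norm (a rough proxy for how much they can still improve the global model).
    grad_norms maps client id -> norm; sampling is with replacement."""
    clients = list(grad_norms)
    total = sum(grad_norms.values())
    weights = [grad_norms[c] / total for c in clients]
    return random.choices(clients, weights=weights, k=m)

# Clients reporting larger gradients are selected more often on average.
picked = sample_clients({"a": 0.1, "b": 2.3, "c": 0.8}, m=2)
```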
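
Finally, the ternary compression in the T-FedAvg entry can be pictured as quantizing each uploaded weight update to {-alpha, 0, +alpha}. The threshold rule below (0.7x the mean magnitude, borrowed from the ternary-weight-networks literature) and the single per-tensor scale are assumptions, not necessarily the exact T-FedAvg quantizer.

```python
import torch

def ternarize(delta):
    """Quantize a weight update to {-alpha, 0, +alpha}: entries below a
    magnitude threshold are dropped, survivors keep only their sign, and one
    per-tensor scale alpha preserves the average surviving magnitude."""
    thresh = 0.7 * delta.abs().mean()              # TWN-style threshold heuristic
    mask = (delta.abs() > thresh).float()
    alpha = (delta.abs() * mask).sum() / mask.sum().clamp(min=1)
    return alpha * delta.sign() * mask

# Each client uploads the sign/zero pattern (about 2 bits per weight) plus the
# scalar alpha; the server averages the dequantized updates, which cuts both
# upstream and downstream traffic relative to sending full-precision floats.
```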