FedProphet: Memory-Efficient Federated Adversarial Training via Theoretic-Robustness and Low-Inconsistency Cascade Learning
- URL: http://arxiv.org/abs/2409.08372v1
- Date: Thu, 12 Sep 2024 19:39:14 GMT
- Title: FedProphet: Memory-Efficient Federated Adversarial Training via Theoretic-Robustness and Low-Inconsistency Cascade Learning
- Authors: Minxue Tang, Yitu Wang, Jingyang Zhang, Louis DiValentin, Aolin Ding, Amin Hass, Yiran Chen, Hai "Helen" Li,
- Abstract summary: Federated Learning (FL) provides a strong privacy guarantee by enabling local training across edge devices without training data sharing.
FedProphet is a novel FAT framework that can achieve memory efficiency, adversarial robustness, and objective consistency simultaneously.
- Score: 20.075335314952643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) provides a strong privacy guarantee by enabling local training across edge devices without training data sharing, and Federated Adversarial Training (FAT) further enhances the robustness against adversarial examples, promoting a step toward trustworthy artificial intelligence. However, FAT requires a large model to preserve high accuracy while achieving strong robustness, and it is impractically slow when directly training with memory-constrained edge devices due to the memory-swapping latency. Moreover, existing memory-efficient FL methods suffer from poor accuracy and weak robustness in FAT because of inconsistent local and global models, i.e., objective inconsistency. In this paper, we propose FedProphet, a novel FAT framework that can achieve memory efficiency, adversarial robustness, and objective consistency simultaneously. FedProphet partitions the large model into small cascaded modules such that the memory-constrained devices can conduct adversarial training module-by-module. A strong convexity regularization is derived to theoretically guarantee the robustness of the whole model, and we show that the strong robustness implies low objective inconsistency in FedProphet. We also develop a training coordinator on the server of FL, with Adaptive Perturbation Adjustment for utility-robustness balance and Differentiated Module Assignment for objective inconsistency mitigation. FedProphet empirically shows a significant improvement in both accuracy and robustness compared to previous memory-efficient methods, achieving almost the same performance as end-to-end FAT with 80% memory reduction and up to 10.8x speedup in training time.
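The cascade idea above lends itself to a short illustration. Below is a minimal PyTorch-style sketch of module-by-module adversarial training with a per-module auxiliary head and a convexity-style weight regularizer; the module boundaries, the one-step attack on the module input, and the regularizer weight `mu` are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical cascade: the large model is split into small modules so that a
# memory-constrained device only ever holds one trainable module at a time.
modules = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 32, 10)),
])
# Auxiliary heads give the earlier modules a local loss; the last module already
# ends in the real classifier, so its "head" is the identity.
aux_heads = nn.ModuleList([
    nn.Sequential(nn.Flatten(), nn.LazyLinear(10)),
    nn.Sequential(nn.Flatten(), nn.LazyLinear(10)),
    nn.Identity(),
])

def module_at_loss(k, x, y, eps=8 / 255, mu=0.1):
    """Adversarial-training loss for module k only (earlier modules stay frozen)."""
    with torch.no_grad():                            # frozen prefix keeps peak memory low
        for j in range(k):
            x = modules[j](x)
    delta = torch.zeros_like(x, requires_grad=True)
    loss = F.cross_entropy(aux_heads[k](modules[k](x + delta)), y)
    grad, = torch.autograd.grad(loss, delta)         # one-step (FGSM-like) perturbation
    x_adv = x + eps * grad.sign()
    adv_loss = F.cross_entropy(aux_heads[k](modules[k](x_adv)), y)
    # convexity-style regularizer on the module's weights (an illustrative
    # stand-in for the paper's strong convexity regularization)
    reg = sum((p ** 2).sum() for p in modules[k].parameters())
    return adv_loss + mu * reg
```

A device assigned module k would backpropagate this loss through modules[k] and aux_heads[k] only, which is what keeps its memory footprint far below end-to-end FAT.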
Related papers
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches focusing solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce a Trustworthy Personalized Federated Learning (TPFL) framework designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
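As a rough picture of adversarial training inside an FL round (not the paper's logits-calibration scheme), each client could harden the shared model locally before the server averages; `client.adv_train` below is a hypothetical helper standing in for PGD-based local training.

```python
import copy

def fedavg_at_round(global_model, clients):
    """One toy FL round in which every client runs adversarial training locally
    before the server averages the resulting weights."""
    states = []
    for client in clients:
        local = copy.deepcopy(global_model)
        client.adv_train(local)                      # harden the local copy on local data
        states.append(local.state_dict())
    avg = {k: sum(s[k] for s in states) / len(states) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```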
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- The Effectiveness of Random Forgetting for Robust Generalization [21.163070161951868]
We introduce a novel learning paradigm called "Forget to Mitigate Overfitting" (FOMO).
FOMO alternates between the forgetting phase, which randomly forgets a subset of weights, and the relearning phase, which emphasizes learning generalizable features.
Our experiments show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy.
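A minimal sketch of the "forgetting phase" idea, assuming forgotten weights are simply re-drawn at random; the fraction forgotten and the re-initialization scale are assumptions, not FOMO's actual settings.

```python
import torch

def forget_phase(model, forget_frac=0.2):
    """Randomly re-initialize a fraction of the weights; ordinary training
    afterwards then serves as the relearning phase."""
    with torch.no_grad():
        for p in model.parameters():
            mask = torch.rand_like(p) < forget_frac       # weights chosen to be forgotten
            p[mask] = 0.01 * torch.randn_like(p)[mask]    # re-drawn from a simple prior
```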
arXiv Detail & Related papers (2024-02-18T23:14:40Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
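The split described above can be pictured as a shared (prunable) feature extractor plus a local head; the layer sizes and the `shared_state` helper below are illustrative assumptions.

```python
import torch.nn as nn

# Hypothetical split: a prunable global feature extractor shared with the server,
# plus a small personalized head that never leaves the device.
class SplitModel(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.global_part = nn.Sequential(                 # aggregated / pruned across devices
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.personal_part = nn.Linear(32, num_classes)   # fine-tuned locally per device

    def forward(self, x):
        return self.personal_part(self.global_part(x))

def shared_state(model):
    """Only the global part is communicated; the personalized part stays local."""
    return {k: v for k, v in model.state_dict().items() if k.startswith("global_part")}
```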
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z)
- Strength-Adaptive Adversarial Training [103.28849734224235]
Adversarial training (AT) is proven to reliably improve a network's robustness against adversarial data.
Current AT with a pre-specified perturbation budget has limitations in learning a robust network.
We propose Strength-Adaptive Adversarial Training (SAAT) to overcome these limitations.
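A toy version of the strength-adaptive idea (which also echoes FedProphet's Adaptive Perturbation Adjustment): adjust the perturbation budget from a feedback signal. The robust-accuracy feedback and the thresholds below are assumptions, not SAAT's actual criterion.

```python
def adjust_eps(eps, robust_acc, target=0.5, step=1 / 255, eps_max=16 / 255):
    """Enlarge the perturbation budget while the model copes,
    shrink it when robust accuracy falls below a target."""
    if robust_acc > target:
        return min(eps + step, eps_max)
    return max(eps - step, 0.0)
```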
arXiv Detail & Related papers (2022-10-04T00:22:37Z)
- FADE: Enabling Federated Adversarial Training on Heterogeneous Resource-Constrained Edge Devices [36.01066121818574]
We propose a new framework named Federated Adversarial Decoupled Learning (FADE) to enable AT on resource-constrained edge devices.
FADE differentially decouples the entire model into small modules to fit into the resource budget of each device.
We show that FADE can significantly reduce the consumption of memory and computing power while maintaining accuracy and robustness.
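One way to picture the decoupling is a simple packing of module memory costs into per-device budgets; the greedy rule and the cost/budget numbers below are assumptions for illustration, not FADE's assignment strategy.

```python
def assign_modules(module_costs, device_budgets):
    """Toy greedy packing of decoupled modules into per-device memory budgets."""
    assignment = {}
    for device, budget in device_budgets.items():
        chosen, used = [], 0.0
        for idx, cost in enumerate(module_costs):
            if used + cost <= budget:
                chosen.append(idx)
                used += cost
        assignment[device] = chosen
    return assignment

# e.g. module memory costs in MB and two devices with different budgets
print(assign_modules([120.0, 80.0, 60.0], {"phone": 150.0, "laptop": 300.0}))
# -> {'phone': [0], 'laptop': [0, 1, 2]}
```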
arXiv Detail & Related papers (2022-09-08T14:22:49Z)
- Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy [27.243322019117144]
Federated learning (FL) enables distributed clients to collaboratively learn a shared statistical model.
However, sensitive information about the training data can still be inferred from the model updates shared in FL.
Differential privacy (DP) is the state-of-the-art technique to defend against those attacks.
This paper develops a novel FL scheme named Fed-SMP that provides client-level DP guarantee while maintaining high model accuracy.
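A compact sketch of a sparsified, noised client update in the spirit of Fed-SMP: keep the top-k coordinates, clip the norm, and add Gaussian noise. The sparsity fraction, clipping norm, and noise multiplier are placeholders, not DP-calibrated values.

```python
import torch

def sparsify_and_perturb(update, k_frac=0.1, clip=1.0, sigma=0.5):
    """Keep the top-k coordinates of a client update, clip its norm to bound
    sensitivity, then add Gaussian noise to the kept coordinates."""
    flat = update.flatten()
    k = max(1, int(k_frac * flat.numel()))
    idx = flat.abs().topk(k).indices
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    scale = torch.clamp(clip / (sparse.norm() + 1e-12), max=1.0)
    sparse = sparse * scale                        # norm clipping
    sparse[idx] += sigma * clip * torch.randn(k)   # Gaussian noise on kept coordinates
    return sparse.view_as(update)
```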
arXiv Detail & Related papers (2022-02-15T04:05:42Z)
- MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge [72.16021611888165]
This paper proposes a novel Memory-Economic Sparse Training (MEST) framework targeting accurate and fast execution on edge devices.
The proposed MEST framework consists of enhancements by Elastic Mutation (EM) and Soft Memory Bound (&S).
Our results suggest that unforgettable examples can be identified in-situ even during the dynamic exploration of sparsity masks.
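The mask "mutation" can be pictured as dropping the weakest active weights and regrowing the same number of random inactive positions, which keeps the active-weight count (and hence the memory footprint) fixed; the mutation fraction and the random-regrowth rule are assumptions, not MEST's exact schedule.

```python
import torch

def mutate_mask(weight, mask, mutate_frac=0.05):
    """Prune the smallest-magnitude active weights and regrow the same number of
    randomly chosen inactive positions, keeping the sparsity level constant."""
    flat_w, flat_m = weight.view(-1), mask.view(-1)
    n = max(1, int(mutate_frac * int(flat_m.sum().item())))
    # drop the n smallest-magnitude weights that are currently active
    scores = flat_w.abs().masked_fill(flat_m == 0, float("inf"))
    drop = scores.topk(n, largest=False).indices
    flat_m[drop] = 0
    # regrow n random positions that are currently inactive
    inactive = (flat_m == 0).nonzero().flatten()
    grow = inactive[torch.randperm(inactive.numel())[:n]]
    flat_m[grow] = 1
    return mask
```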
arXiv Detail & Related papers (2021-10-26T21:15:17Z)
- Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free [115.81899803240758]
Adversarial training and its many variants substantially improve deep network robustness, yet at the cost of compromising standard accuracy.
This paper asks how to quickly calibrate a trained model in-situ, to examine the achievable trade-offs between its standard and robust accuracies.
Our proposed framework, Once-for-all Adversarial Training (OAT), is built on an innovative model-conditional training framework.
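A simplified rendering of the model-conditional training loop: sample a robustness weight, condition the model on it, and mix clean and adversarial losses. The lambda grid, the `make_adv` attack, and the `model(x, lam)` conditioning interface are assumptions rather than OAT's exact formulation.

```python
import random
import torch.nn.functional as F

def oat_step(model, x, y, make_adv):
    """Sample a robustness weight lam, condition the model on it,
    and mix the clean and adversarial losses accordingly."""
    lam = random.choice([0.0, 0.2, 0.5, 0.8, 1.0])  # robustness/accuracy trade-off knob
    x_adv = make_adv(model, x, y)                   # e.g. a PGD attack (hypothetical helper)
    clean_loss = F.cross_entropy(model(x, lam), y)
    adv_loss = F.cross_entropy(model(x_adv, lam), y)
    return (1 - lam) * clean_loss + lam * adv_loss
```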
arXiv Detail & Related papers (2020-10-22T16:06:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.