Sylva: Tailoring Personalized Adversarial Defense in Pre-trained Models via Collaborative Fine-tuning
- URL: http://arxiv.org/abs/2506.05402v1
- Date: Wed, 04 Jun 2025 03:31:32 GMT
- Title: Sylva: Tailoring Personalized Adversarial Defense in Pre-trained Models via Collaborative Fine-tuning
- Authors: Tianyu Qi, Lei Xue, Yufeng Zhan, Xiaobo Ma
- Abstract summary: \textit{Sylva} is a collaborative adversarial training framework designed to deliver customized defense models for each client. In Phase 1, \textit{Sylva} employs LoRA for local adversarial fine-tuning, enabling clients to personalize model robustness. In Phase 2, a game-based layer selection strategy is introduced to enhance accuracy on benign data.
- Score: 6.026535484454967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing adoption of large pre-trained models in edge computing has made deploying model inference on mobile clients both practical and popular. These devices are inherently vulnerable to direct adversarial attacks, which pose a substantial threat to the robustness and security of deployed models. Federated adversarial training (FAT) has emerged as an effective solution to enhance model robustness while preserving client privacy. However, FAT frequently produces a generalized global model, which struggles to address the diverse and heterogeneous data distributions across clients, resulting in insufficiently personalized performance, while also encountering substantial communication challenges during the training process. In this paper, we propose \textit{Sylva}, a personalized collaborative adversarial training framework designed to deliver customized defense models for each client through a two-phase process. In Phase 1, \textit{Sylva} employs LoRA for local adversarial fine-tuning, enabling clients to personalize model robustness while drastically reducing communication costs by uploading only LoRA parameters during federated aggregation. In Phase 2, a game-based layer selection strategy is introduced to enhance accuracy on benign data, further refining the personalized model. This approach ensures that each client receives a tailored defense model that balances robustness and accuracy effectively. Extensive experiments on benchmark datasets demonstrate that \textit{Sylva} can achieve up to 50$\times$ improvements in communication efficiency compared to state-of-the-art algorithms, while achieving up to 29.5\% and 50.4\% enhancements in adversarial robustness and benign accuracy, respectively.
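Phase 1 is the part of the pipeline concrete enough to sketch from the abstract alone: clients run adversarial fine-tuning locally but train, and later upload, only low-rank adapter weights. Below is a minimal PyTorch sketch of that idea; the `LoRALinear` module, the PGD hyperparameters, and the upload filter are illustrative assumptions rather than Sylva's actual implementation, and Phase 2's game-based layer selection is omitted.

```python
# Sketch of Phase-1-style local adversarial fine-tuning with LoRA.
# Module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pre-trained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + F.linear(F.linear(x, self.A), self.B) * self.scale

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """Standard PGD under an L-inf budget; crafts the training adversaries."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + step * grad.sign()
        x_adv = (x.detach() + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def local_adversarial_round(model, loader, lr=1e-4):
    """One local round: adversarial fine-tuning that touches only LoRA params."""
    lora_params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.AdamW(lora_params, lr=lr)
    for x, y in loader:
        opt.zero_grad()
        F.cross_entropy(model(pgd_attack(model, x, y)), y).backward()
        opt.step()
    # Only the low-rank factors are uploaded for federated aggregation,
    # which is where the communication saving over full-model FAT comes from.
    return {k: v for k, v in model.state_dict().items()
            if k.endswith(("A", "B"))}
```

Server-side aggregation of the adapters (e.g., FedAvg over the A and B factors) then costs a small fraction of shipping full model weights, consistent with the communication-efficiency claim above.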
Related papers
- \textit{FedABC}: Attention-Based Client Selection for Federated Learning with Long-Term View [5.072601407613152]
Federated Learning (FL) allows decentralized clients to collaboratively train an AI model without directly sharing their data, preserving privacy. Clients train local models on private data and share model updates, which a central server aggregates to refine the global model and redistribute it for the next iteration. We propose \textit{FedABC}, an innovative client selection algorithm designed to take a long-term view in managing data heterogeneity and optimizing client participation.
arXiv Detail & Related papers (2025-07-28T14:22:43Z)
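The abstract does not spell out FedABC's scoring rule, so the following is only a loose stand-in for attention-based selection with a long-term view: clients are weighted by softmax attention over their alignment with the global update direction, then discounted if they were selected recently. The names and the discount rule are assumptions.

```python
# Illustrative stand-in for attention-based client selection with a
# long-term view; NOT FedABC's actual algorithm.
import numpy as np

def select_clients(updates, global_dir, last_selected_round, t, k=5, decay=0.9):
    """Pick k clients by attention over update/global-direction similarity,
    discounted for clients that were selected recently."""
    sims = np.array([u @ global_dir /
                     (np.linalg.norm(u) * np.linalg.norm(global_dir) + 1e-12)
                     for u in updates])
    attn = np.exp(sims) / np.exp(sims).sum()                 # softmax attention
    recency = decay ** (t - np.asarray(last_selected_round))  # high if picked recently
    scores = attn * (1.0 - recency)        # long-term view: favor "rested" clients
    return np.argsort(scores)[-k:]
```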
- Optimal Transport-Guided Source-Free Adaptation for Face Anti-Spoofing [58.56017169759816]
We introduce a novel method in which the face anti-spoofing model can be adapted by the client itself to a target domain at test time. Specifically, we develop a prototype-based base model and an optimal transport-guided adaptor. In cross-domain and cross-attack settings, compared with recent methods, our method achieves average relative improvements of 19.17% in HTER and 8.58% in AUC.
arXiv Detail & Related papers (2025-03-29T06:10:34Z)
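An optimal transport-guided adaptor can be sketched generically: align a batch of unlabeled target features to the class prototypes with entropy-regularized OT and read soft assignments off the transport plan. This is a hedged illustration of the ingredient named above, not the paper's method.

```python
# Generic OT-guided assignment of target features to prototypes; the cost
# function and regularization strength are assumptions.
import numpy as np

def sinkhorn(cost, reg=0.1, iters=100):
    """Entropy-regularized OT between uniform marginals (Sinkhorn iterations)."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]      # transport plan

def ot_pseudo_labels(features, prototypes):
    """Soft class assignments for unlabeled target features via the OT plan."""
    cost = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    plan = sinkhorn(cost)
    return plan / plan.sum(axis=1, keepdims=True)  # row-normalize to probabilities
```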
- Robust Federated Learning Against Poisoning Attacks: A GAN-Based Defense Framework [0.6554326244334868]
Federated Learning (FL) enables collaborative model training across decentralized devices without sharing raw data. We propose a privacy-preserving defense framework that leverages a Conditional Generative Adversarial Network (cGAN) to generate synthetic data at the server for authenticating client updates.
arXiv Detail & Related papers (2025-03-26T18:00:56Z)
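The authentication idea above can be illustrated with a short server-side routine: sample class-conditional data from the cGAN, score each client's model on it, and keep only updates that pass. The `generator` interface, its `latent_dim` attribute, and the acceptance threshold are assumptions.

```python
# Sketch of server-side update authentication with cGAN samples;
# the generator API and acceptance rule are illustrative assumptions.
import torch

@torch.no_grad()
def authenticate_updates(client_models, generator, n=512, num_classes=10, thresh=0.5):
    y = torch.randint(0, num_classes, (n,))
    x = generator(torch.randn(n, generator.latent_dim), y)  # class-conditional samples
    accepted = []
    for cid, model in client_models.items():
        acc = (model(x).argmax(1) == y).float().mean().item()
        if acc >= thresh:        # updates that fail on synthetic data are filtered
            accepted.append(cid)
    return accepted
```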
- Robust Federated Learning in the Face of Covariate Shift: A Magnitude Pruning with Hybrid Regularization Framework for Enhanced Model Aggregation [1.519321208145928]
Federated Learning (FL) offers a promising framework for individuals aiming to collaboratively develop a shared model. However, variations in data distribution among clients can profoundly affect FL methodologies, primarily due to instabilities in the aggregation process. We propose a novel FL framework combining individual parameter pruning and regularization techniques to improve the robustness of individual clients' models during aggregation.
arXiv Detail & Related papers (2024-12-19T16:22:37Z)
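The two named ingredients, magnitude pruning and a hybrid regularizer, are standard enough to sketch; the sparsity ratio and penalty weights below are assumptions, and how the paper combines them with aggregation is not shown.

```python
# Magnitude pruning of a client model before aggregation, plus an
# elastic-net-style hybrid L1/L2 penalty; hyperparameters are assumptions.
import torch

def magnitude_prune_(model, sparsity=0.3):
    """Zero out the smallest-magnitude fraction of each weight tensor in place."""
    for p in model.parameters():
        if p.dim() > 1:
            k = int(p.numel() * sparsity)
            if k > 0:
                thresh = p.abs().flatten().kthvalue(k).values
                p.data[p.abs() <= thresh] = 0.0

def hybrid_regularizer(model, l1=1e-5, l2=1e-4):
    """Hybrid penalty added to the local training loss."""
    reg = 0.0
    for p in model.parameters():
        reg = reg + l1 * p.abs().sum() + l2 * (p ** 2).sum()
    return reg
```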
- A Hybrid Defense Strategy for Boosting Adversarial Robustness in Vision-Language Models [9.304845676825584]
We propose a novel adversarial training framework that integrates multiple attack strategies and advanced machine learning techniques.
Experiments conducted on real-world datasets, including CIFAR-10 and CIFAR-100, demonstrate that the proposed method significantly enhances model robustness.
arXiv Detail & Related papers (2024-10-18T23:47:46Z)
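A training loop that mixes attack strategies per batch might look like the sketch below; the attack pool (FGSM and PGD) and the uniform sampling are assumptions, since the abstract does not enumerate the strategies.

```python
# Adversarial training with a per-batch draw from a pool of attacks;
# the pool and sampling rule are illustrative assumptions.
import random
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8/255, step=2/255, iters=10):
    x_adv = x.clone()
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv + step * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()

def hybrid_at_step(model, opt, x, y):
    attack = random.choice([fgsm, pgd])   # draw one attack strategy per batch
    opt.zero_grad()
    loss = F.cross_entropy(model(attack(model, x, y)), y)
    loss.backward()
    opt.step()
    return loss.item()
```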
- FedAA: A Reinforcement Learning Perspective on Adaptive Aggregation for Fair and Robust Federated Learning [5.622065847054885]
Federated Learning (FL) has emerged as a promising approach for privacy-preserving model training across decentralized devices. We introduce a novel method called \textbf{FedAA}, which optimizes client contributions via \textbf{A}daptive \textbf{A}ggregation to enhance model robustness against malicious clients.
arXiv Detail & Related papers (2024-02-08T10:22:12Z)
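Reading aggregation as an RL problem can be illustrated crudely: the aggregation weights are the action, a held-out score of the aggregated model is the reward, and the policy logits are nudged toward high-reward perturbations. This evolution-strategies-style update is a stand-in, not FedAA's actual algorithm.

```python
# Crude RL-flavored adaptive aggregation; everything here is an assumption.
import numpy as np

def aggregate(updates, w):
    return sum(wi * ui for wi, ui in zip(w, updates))

def rl_aggregation_round(updates, reward_fn, logits, lr=0.5, sigma=0.1):
    noise = np.random.randn(*logits.shape) * sigma       # explore in logit space
    w = np.exp(logits + noise); w /= w.sum()             # stochastic action
    reward = reward_fn(aggregate(updates, w))            # e.g. validation accuracy
    logits += lr * reward * noise / sigma**2             # push toward rewarded noise
    return aggregate(updates, w), logits
```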
- Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness [52.9493817508055]
We propose Pre-trained Model Guided Adversarial Fine-Tuning (PMG-AFT) to enhance the model's zero-shot adversarial robustness.
Our approach consistently improves clean accuracy by an average of 8.72%.
arXiv Detail & Related papers (2024-01-09T04:33:03Z)
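The guidance idea in PMG-AFT can be sketched as an auxiliary term that keeps the fine-tuned model's outputs on adversarial examples close to those of a frozen copy of the pre-trained model; the KL form and the weight `beta` below are assumptions about the exact objective.

```python
# Pre-trained-model-guided adversarial fine-tuning loss; the regularizer's
# form and weight are assumptions, not the paper's exact objective.
import torch
import torch.nn.functional as F

def pmg_aft_loss(model, frozen_pretrained, x_adv, y, beta=1.0):
    logits = model(x_adv)
    task_loss = F.cross_entropy(logits, y)
    with torch.no_grad():
        ref = frozen_pretrained(x_adv)      # guidance from the pre-trained model
    guide_loss = F.kl_div(F.log_softmax(logits, dim=1),
                          F.softmax(ref, dim=1), reduction="batchmean")
    return task_loss + beta * guide_loss    # robustify while preserving zero-shot behavior
```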
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, \textbf{LAST} (\textbf{L}earn from the P\textbf{ast}).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
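A proxy-guided step with self-distillation can be sketched as follows: the target trains on adversarial examples while being distilled toward a proxy built from its own past states (here an EMA, which is an assumption), one way to damp the unstable updates mentioned above.

```python
# Proxy-guided adversarial step with self-distillation; the EMA proxy rule
# and loss coefficients are assumptions, not the LAST framework itself.
import torch
import torch.nn.functional as F

def proxy_guided_step(target, proxy, opt, x_adv, y, tau=0.99, gamma=0.5):
    opt.zero_grad()
    logits = target(x_adv)
    with torch.no_grad():
        proxy_logits = proxy(x_adv)          # "past" knowledge as soft targets
    loss = F.cross_entropy(logits, y) + gamma * F.kl_div(
        F.log_softmax(logits, 1), F.softmax(proxy_logits, 1),
        reduction="batchmean")
    loss.backward()
    opt.step()
    with torch.no_grad():                    # proxy tracks an EMA of the target
        for pp, tp in zip(proxy.parameters(), target.parameters()):
            pp.mul_(tau).add_(tp, alpha=1 - tau)
```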
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
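Multi-metric screening in the spirit of MESAS can be illustrated by computing several statistics per client update and requiring each update to pass a robust outlier test on all of them; the specific metrics and the MAD-based test below are assumptions.

```python
# Joint multi-metric outlier screening of client updates; the metric set
# and the median-absolute-deviation test are illustrative assumptions.
import numpy as np

def screen_updates(updates, global_model, z=3.0):
    metrics = np.array([[np.linalg.norm(u),                  # update magnitude
                         np.linalg.norm(u + global_model),   # resulting model norm
                         np.abs(u).max()]                    # peak coordinate change
                        for u in updates])
    med = np.median(metrics, axis=0)
    mad = np.median(np.abs(metrics - med), axis=0) + 1e-12
    dev = np.abs(metrics - med) / mad                        # robust z-scores
    keep = (dev < z).all(axis=1)         # a client must pass every metric
    return [i for i, ok in enumerate(keep) if ok]
```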
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
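Over-the-air computation exploits channel superposition: if every client transmits its pre-scaled analog update simultaneously, the server receives their sum, i.e. the aggregate, in a single channel use. A toy simulation, with an assumed Gaussian channel:

```python
# Toy simulation of analog over-the-air aggregation; the channel model
# (noiseless superposition plus Gaussian noise) is an assumption.
import numpy as np

def ota_aggregate(updates, noise_std=0.01):
    n = len(updates)
    tx = [u / n for u in updates]         # pre-scale so the sum is an average
    superposed = np.sum(tx, axis=0)       # superposition on the shared channel
    rx = superposed + np.random.randn(*superposed.shape) * noise_std
    return rx                             # noisy average in one channel use
```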
- Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [82.3052187788609]
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks.
Recent works show generalization improvement with adversarial samples under novel threat models.
We propose a novel threat model called the Joint Space Threat Model (JSTM), under which we develop novel adversarial attacks and defenses.
arXiv Detail & Related papers (2021-12-12T21:08:14Z)
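The abstract leaves JSTM underspecified, so the following is only a loose illustration of attacking in an interpolated space: craft PGD adversaries on mixup-interpolated inputs with a correspondingly mixed loss. It should not be read as the paper's actual threat model.

```python
# PGD on mixup-interpolated inputs as a stand-in for "joint space" attacks;
# all settings here are assumptions, not the JSTM objective.
import torch
import torch.nn.functional as F

def interpolated_adv_batch(model, x, y, eps=8/255, step=2/255, iters=7):
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]        # interpolate in input space
    x_adv = x_mix.clone()
    for _ in range(iters):                       # PGD on the mixed images
        x_adv = x_adv.detach().requires_grad_(True)
        loss = lam * F.cross_entropy(model(x_adv), y) \
             + (1 - lam) * F.cross_entropy(model(x_adv), y[perm])
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + step * grad.sign()).clamp(x_mix - eps, x_mix + eps).clamp(0, 1)
    return x_adv.detach(), lam, perm
```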