Exploring Federated Learning Dynamics for Black-and-White-Box DNN Traitor Tracing
- URL: http://arxiv.org/abs/2407.02111v1
- Date: Tue, 2 Jul 2024 09:54:35 GMT
- Title: Exploring Federated Learning Dynamics for Black-and-White-Box DNN Traitor Tracing
- Authors: Elena Rodriguez-Lois, Fernando Perez-Gonzalez
- Abstract summary: This paper explores the adaptation of black-and-white traitor tracing watermarking to Federated Learning.
Results show that collusion-resistant traitor tracing, identifying all data-owners involved in a suspected leak, is feasible in an FL framework, even in early stages of training.
- Score: 49.1574468325115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep learning applications become more prevalent, the need for extensive training examples raises concerns for sensitive, personal, or proprietary data. To overcome this, Federated Learning (FL) enables collaborative model training across distributed data-owners, but it introduces challenges in safeguarding model ownership and identifying the origin in case of a leak. Building upon prior work, this paper explores the adaptation of black-and-white traitor tracing watermarking to FL classifiers, addressing the threat of collusion attacks from different data-owners. This study reveals that leak-resistant white-box fingerprints can be directly implemented without a significant impact from FL dynamics, while the black-box fingerprints are drastically affected, losing their traitor tracing capabilities. To mitigate this effect, we propose increasing the number of black-box salient neurons through dropout regularization. Though there are still some open problems to be explored, such as analyzing non-i.i.d. datasets and over-parameterized models, results show that collusion-resistant traitor tracing, identifying all data-owners involved in a suspected leak, is feasible in an FL framework, even in early stages of training.
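Since the abstract proposes dropout regularization as the mitigation for increasing the number of black-box salient neurons, the sketch below shows where dropout would sit in a data-owner's local model and a FedAvg-style local update. The layer sizes, dropout rate, optimizer, and function names are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: dropout in a client's local model so that salient
# activations are spread over more neurons (the mitigation the abstract
# proposes for black-box fingerprints). Sizes and hyperparameters are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, n_classes)
        # Dropout forces the representation onto more units during training.
        self.drop = nn.Dropout(p=p_drop)

    def forward(self, x):
        x = self.drop(F.relu(self.fc1(x)))
        x = self.drop(F.relu(self.fc2(x)))
        return self.fc3(x)

def local_update(model, loader, lr=0.01, epochs=1):
    """One FedAvg-style local round on a data-owner's private data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x.view(x.size(0), -1)), y)
            loss.backward()
            opt.step()
    # The server would average these parameters across data-owners (FedAvg).
    return {k: v.detach().clone() for k, v in model.state_dict().items()}
```

A server-side FedAvg step would simply average the returned state dictionaries across data-owners each round; the dropout layers are active only during local training and are inactive at inference.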
Related papers
- Identify Backdoored Model in Federated Learning via Individual Unlearning [7.200910949076064]
Backdoor attacks present a significant threat to the robustness of Federated Learning (FL).
We propose MASA, a method that utilizes individual unlearning on local models to identify malicious models in FL.
To the best of our knowledge, this is the first work to leverage machine unlearning to identify malicious models in FL.
arXiv Detail & Related papers (2024-11-01T21:19:47Z)
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- FLTrojan: Privacy Leakage Attacks against Federated Language Models Through Selective Weight Tampering [2.2194815687410627]
We show how a malicious client can leak the privacy-sensitive data of some other users in FL even without any cooperation from the server.
Our best-performing method improves the membership inference recall by 29% and achieves up to 71% private data reconstruction.
arXiv Detail & Related papers (2023-10-24T19:50:01Z)
- Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses (a generic sketch of the underlying gradient-matching attack appears after this list).
arXiv Detail & Related papers (2023-09-22T17:26:24Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can come from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Gradient Inversion with Generative Image Prior [37.03737843861339]
Federated Learning (FL) is a distributed learning framework in which the local data never leaves clients' devices, in order to preserve privacy.
We show that data privacy can be easily breached by exploiting a generative model pretrained on the data distribution.
We experimentally show that the prior, in the form of a generative model, is learnable from iterative interactions in FL.
arXiv Detail & Related papers (2021-10-28T09:04:32Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic datasets and the LEAF benchmark.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- Mitigating the Impact of Adversarial Attacks in Very Deep Networks [10.555822166916705]
Deep Neural Network (DNN) models are vulnerable to security-critical adversarial attacks.
Data poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic-based defense method for mitigating their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z)
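Several of the entries above (Deep Gradient Leakage, the gradient-inversion baseline for FL, and the generative-prior attack) build on the same gradient-matching idea: the attacker optimizes dummy inputs and labels so that the gradient they induce matches the gradient shared by a client. The following is a minimal sketch of that generic formulation; the toy linear model, soft-label parameterization, L2 gradient distance, and hyperparameters are illustrative assumptions, not any listed paper's exact attack.

```python
# Minimal sketch of gradient-matching inversion, the common formulation behind
# the gradient-leakage papers above. Model, soft-label trick, L2 gradient
# distance, and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(32, 4)  # stand-in for a client model

def soft_cross_entropy(logits, soft_targets):
    # Cross-entropy against a probability vector, so the label can be optimized too.
    return -(soft_targets * F.log_softmax(logits, dim=-1)).sum()

# Gradient the client would share for one private example (the "leak").
x_true = torch.randn(1, 32)
y_true = F.one_hot(torch.tensor([2]), num_classes=4).float()
true_grads = torch.autograd.grad(
    soft_cross_entropy(model(x_true), y_true), model.parameters())

# Attacker: optimize dummy data and a dummy soft label to match the leaked gradient.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        soft_cross_entropy(model(x_dummy), F.softmax(y_dummy, dim=-1)),
        model.parameters(), create_graph=True)
    grad_dist = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_dist.backward()
    opt.step()

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```

The listed works scale this idea up, e.g. by constraining the dummy input with a pretrained generative prior or by evaluating the attack under practical FL settings and defenses.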