On the Detectability of Active Gradient Inversion Attacks in Federated Learning
- URL: http://arxiv.org/abs/2511.10502v1
- Date: Fri, 14 Nov 2025 01:54:53 GMT
- Title: On the Detectability of Active Gradient Inversion Attacks in Federated Learning
- Authors: Vincenzo Carletti, Pasquale Foggia, Carlo Mazzocca, Giuseppe Parrella, Mario Vento
- Abstract summary: Federated Learning (FL) can collaboratively train a Machine Learning (ML) model while keeping clients' data on-site. However, prior studies have shown that gradients exchanged during FL training remain vulnerable to Gradient Inversion Attacks (GIAs). These attacks allow reconstructing the clients' local data, breaking the privacy promise of FL. Recently, novel active GIAs have emerged, claiming to be far stealthier than previous approaches.
- Score: 5.828517827413101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the key advantages of Federated Learning (FL) is its ability to collaboratively train a Machine Learning (ML) model while keeping clients' data on-site. However, this can create a false sense of security. Although keeping private data local increases overall privacy, prior studies have shown that gradients exchanged during FL training remain vulnerable to Gradient Inversion Attacks (GIAs). These attacks allow reconstructing the clients' local data, breaking the privacy promise of FL. GIAs can be launched by either a passive or an active server. In the latter case, a malicious server manipulates the global model to facilitate data reconstruction. While effective, earlier attacks in this category have been shown to be detectable by clients, limiting their real-world applicability. Recently, novel active GIAs have emerged, claiming to be far stealthier than previous approaches. This work provides the first comprehensive analysis of these claims, investigating four state-of-the-art GIAs. We propose novel lightweight client-side detection techniques based on statistically improbable weight structures and anomalous loss and gradient dynamics. Extensive evaluation across several configurations demonstrates that our methods enable clients to effectively detect active GIAs without any modifications to the FL training protocol.
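The two detection ideas named in the abstract (statistically improbable weight structures, and anomalous loss dynamics) lend themselves to simple client-side checks. Below is a minimal, hypothetical sketch of both; the thresholds, window sizes, and helper names are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of client-side checks in the spirit of the abstract.
# All thresholds and function names are illustrative assumptions.
import numpy as np


def improbable_weight_structure(weight_matrix, tol=1e-6, max_frac=0.05):
    """Flag a layer whose weights look statistically improbable.

    Some active GIAs plant handcrafted layers whose rows are near-duplicates
    or near-zero; honestly trained weights almost never look like this.
    The duplicate/zero-row fraction threshold is an assumption.
    """
    w = np.asarray(weight_matrix)
    # Fraction of rows numerically identical to another row (quantized by tol).
    _, counts = np.unique(np.round(w / tol).astype(np.int64),
                          axis=0, return_counts=True)
    dup_frac = counts[counts > 1].sum() / w.shape[0]
    # Fraction of rows that are essentially all-zero.
    zero_frac = np.mean(np.abs(w).max(axis=1) < tol)
    return bool(dup_frac > max_frac or zero_frac > max_frac)


def anomalous_loss_dynamics(loss_history, current_loss, z_thresh=4.0):
    """Flag a round whose local loss deviates sharply from recent history.

    A global model crafted to aid inversion often yields a loss far outside
    the trend of ordinary training; a z-score over a sliding window is
    enough to surface it. Window length and threshold are illustrative.
    """
    if len(loss_history) < 5:  # not enough history to judge yet
        return False
    window = np.asarray(loss_history[-20:])
    z = abs(current_loss - window.mean()) / (window.std() + 1e-12)
    return z > z_thresh
```

A client could run `improbable_weight_structure` on each layer of the freshly received global model and `anomalous_loss_dynamics` on its first local-batch loss, withholding its update for the round if either check fires; note that neither check requires modifying the FL training protocol.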
Related papers
- Hear No Evil: Detecting Gradient Leakage by Malicious Servers in Federated Learning [35.64232606410778]
Gradient updates in federated learning can unintentionally reveal sensitive information about a client's local data. This paper provides the first comprehensive analysis of malicious gradient leakage attacks and the model manipulation techniques that enable them. We propose a simple, lightweight, and broadly applicable client-side detection mechanism that flags suspicious model updates before local training begins.
arXiv Detail & Related papers (2025-06-25T17:49:26Z) - Byzantine Outside, Curious Inside: Reconstructing Data Through Malicious Updates [36.2911560725828]
Federated learning (FL) enables decentralized machine learning without sharing raw data. Privacy leakage is possible under commonly adopted FL protocols. We introduce a novel threat model in FL, named the maliciously curious client.
arXiv Detail & Related papers (2025-06-13T02:23:41Z) - Geminio: Language-Guided Gradient Inversion Attacks in Federated Learning [18.326636715274372]
Vision-language models (VLMs) can be weaponized to enhance gradient inversion attacks (GIAs) in federated learning (FL). We propose Geminio, the first approach to transform GIAs into semantically meaningful, targeted attacks. It enables a brand-new privacy attack experience: attackers can describe, in natural language, the data they consider valuable, and Geminio will prioritize reconstruction to focus on those high-value samples.
arXiv Detail & Related papers (2024-11-22T13:49:56Z) - Attribute Inference Attacks for Federated Regression Tasks [14.152503562997662]
Federated Learning (FL) enables clients to collaboratively train a global machine learning model while keeping their data localized. Recent studies have revealed that the training phase of FL is vulnerable to reconstruction attacks. We propose novel model-based Attribute Inference Attacks (AIAs) specifically designed for regression tasks in FL environments.
arXiv Detail & Related papers (2024-11-19T18:06:06Z) - GI-NAS: Boosting Gradient Inversion Attacks Through Adaptive Neural Architecture Search [52.27057178618773]
Gradient Inversion Attacks invert the transmitted gradients in Federated Learning (FL) systems to reconstruct the sensitive data of local clients. A majority of gradient inversion methods rely heavily on explicit prior knowledge, which is often unavailable in realistic scenarios. We propose GI-NAS, which adaptively searches the network architecture and captures the implicit priors behind neural architectures.
arXiv Detail & Related papers (2024-05-31T09:29:43Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIAs), which aim to reconstruct the original training samples (a minimal gradient-matching sketch of this attack family appears after this list).
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties using only the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
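For context on the attack family that the papers above defend against or extend, here is a hedged sketch of the classic passive gradient-matching optimization (in the spirit of "Deep Leakage from Gradients"); the model interface, soft-label trick, and hyperparameters are illustrative assumptions rather than any specific paper's method.

```python
# Hedged sketch of passive gradient inversion via gradient matching.
# Hyperparameters, shapes, and the soft-label trick are illustrative.
import torch
import torch.nn.functional as F


def invert_gradients(model, true_grads, input_shape, num_classes, steps=300):
    """Optimize a dummy (input, label) pair whose gradients match true_grads."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft label
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = model(dummy_x)
        loss = F.cross_entropy(pred, F.softmax(dummy_y, dim=-1))
        # Gradients of the dummy batch w.r.t. the model parameters.
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        # Distance between dummy gradients and the observed client gradients.
        grad_diff = sum(((g - t) ** 2).sum()
                        for g, t in zip(grads, true_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), F.softmax(dummy_y, dim=-1).detach()
```

Active GIAs differ in that the server additionally manipulates the global model to make this optimization easier and more reliable, which is precisely the behavior the client-side checks sketched earlier aim to flag.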