From Models to Network Topologies: A Topology Inference Attack in Decentralized Federated Learning
- URL: http://arxiv.org/abs/2501.03119v2
- Date: Fri, 09 May 2025 08:49:26 GMT
- Title: From Models to Network Topologies: A Topology Inference Attack in Decentralized Federated Learning
- Authors: Chao Feng, Yuanzhe Gao, Alberto Huertas Celdran, Gerome Bovet, Burkhard Stiller
- Abstract summary: This work uncovers the hidden risks of DFL topologies by proposing a novel Topology Inference Attack. A taxonomy of topology inference attacks is introduced, categorizing them by the attacker's capabilities and knowledge. The results demonstrate that analyzing only the model of each node can accurately infer the DFL topology, highlighting a critical privacy risk in DFL systems.
- Score: 7.228253116465784
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated Learning (FL) is widely recognized as a privacy-preserving machine learning paradigm due to its model-sharing mechanism that avoids direct data exchange. Nevertheless, model training leaves exploitable traces that can be used to infer sensitive information. In Decentralized FL (DFL), the topology, defining how participants are connected, plays a crucial role in shaping the model's privacy, robustness, and convergence. However, the topology introduces an unexplored vulnerability: attackers can exploit it to infer participant relationships and launch targeted attacks. This work uncovers the hidden risks of DFL topologies by proposing a novel Topology Inference Attack that infers the topology solely from model behavior. A taxonomy of topology inference attacks is introduced, categorizing them by the attacker's capabilities and knowledge. Practical attack strategies are designed for various scenarios, and experiments are conducted to identify key factors influencing attack success. The results demonstrate that analyzing only the model of each node can accurately infer the DFL topology, highlighting a critical privacy risk in DFL systems. These findings offer valuable insights for improving privacy preservation in DFL environments.
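The abstract gives only the high-level idea: the topology is inferred solely from each node's model, without access to communication traces. Purely as a hypothetical illustration of that intuition (nodes that exchange and aggregate models tend to end up with more similar parameters), the sketch below guesses edges between nodes whose flattened model parameters are unusually similar. The cosine-similarity heuristic, the `infer_topology` helper, and the 0.9 threshold are illustrative assumptions, not the paper's actual attack.

```python
# Hypothetical sketch of a similarity-based topology inference heuristic.
# It only illustrates the general intuition that neighboring DFL nodes,
# which aggregate each other's models, drift toward similar parameters.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened parameter vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def infer_topology(models: list[np.ndarray], threshold: float = 0.9) -> np.ndarray:
    """Guess an adjacency matrix: an edge is predicted wherever two nodes'
    models are more similar than `threshold` (a tunable assumption)."""
    n = len(models)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if cosine_similarity(models[i], models[j]) >= threshold:
                adj[i, j] = adj[j, i] = 1
    return adj


# Toy usage: three nodes, where nodes 0 and 1 hold nearly identical models,
# so the heuristic predicts a single edge between them.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
models = [base, base + 0.01 * rng.normal(size=1000), rng.normal(size=1000)]
print(infer_topology(models, threshold=0.9))
```

In practice, the choice of similarity measure and threshold would have to account for data heterogeneity and the aggregation schedule; this toy version is only meant to make the stated privacy risk concrete.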
Related papers
- On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy [12.421658400381082]
We propose a novel model poisoning attack framework tailored for LDPFL settings.
Our approach is driven by the objective of maximizing the global training loss while adhering to local privacy constraints.
We evaluate our framework across three representative LDPFL protocols, three benchmark datasets, and two types of deep neural networks.
arXiv Detail & Related papers (2025-09-05T17:23:03Z) - Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems.
An adversary who intercepts the intermediate features transmitted between them can still pose a serious threat.
We propose an exploitation strategy specifically designed for distributed settings.
arXiv Detail & Related papers (2025-07-09T20:09:00Z) - Toward Realistic Adversarial Attacks in IDS: A Novel Feasibility Metric for Transferability [0.0]
Transferability-based adversarial attacks exploit the ability of adversarial examples to deceive a specific source Intrusion Detection System (IDS) model.
These attacks exploit common vulnerabilities in machine learning models to bypass security measures and compromise systems.
This paper analyzes the core factors that contribute to transferability, including feature alignment, model architectural similarity, and overlap in the data distributions that each IDS examines.
arXiv Detail & Related papers (2025-04-11T12:15:03Z) - Performance Analysis of Decentralized Federated Learning Deployments [1.7249361224827533]
Decentralized Federated Learning (DFL) is introduced to address these challenges.
It facilitates direct collaboration among participating devices without relying on a central server.
This work explores crucial factors influencing the convergence and generalization capacity of DFL models.
arXiv Detail & Related papers (2025-03-14T19:37:13Z) - Scrutinizing the Vulnerability of Decentralized Learning to Membership Inference Attacks [1.5993362488149794]
We study the vulnerability to Membership Inference Attacks (MIA) in decentralized learning systems.
Our key finding is that the vulnerability to MIA is heavily correlated to the local model mixing strategy performed by each node.
Our paper draws a set of lessons learned for devising decentralized learning systems that reduce the vulnerability to MIA by design.
arXiv Detail & Related papers (2024-12-17T12:02:47Z) - A New Federated Learning Framework Against Gradient Inversion Attacks [17.3044168511991]
Federated Learning (FL) aims to protect data privacy by enabling clients to collectively train machine learning models without sharing their raw data.
Recent studies demonstrate that information exchanged during FL is subject to Gradient Inversion Attacks (GIA).
arXiv Detail & Related papers (2024-12-10T04:53:42Z) - FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning [1.9632700283749582]
This paper introduces a novel defense mechanism against backdoor attacks in federated learning, named GANcrop.
Experimental findings demonstrate that GANcrop effectively safeguards against backdoor attacks, particularly in non-IID scenarios.
arXiv Detail & Related papers (2024-05-31T09:33:16Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Fake or Compromised? Making Sense of Malicious Clients in Federated
Learning [15.91062695812289]
We present a comprehensive analysis of the various poisoning attacks and defensive aggregation rules (AGRs) proposed in the literature.
To connect existing adversary models, we present a hybrid adversary model, which lies in the middle of the spectrum of adversaries.
We aim to provide practitioners and researchers with a clear understanding of the different types of threats they need to consider when designing FL systems.
arXiv Detail & Related papers (2024-03-10T21:37:21Z) - Enhancing Security in Federated Learning through Adaptive
Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
arXiv Detail & Related papers (2024-03-05T20:54:56Z) - Towards Attack-tolerant Federated Learning via Critical Parameter
Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
arXiv Detail & Related papers (2023-08-18T05:37:55Z) - FSAR: Federated Skeleton-based Action Recognition with Adaptive Topology
Structure and Knowledge Distillation [23.0771949978506]
Existing skeleton-based action recognition methods typically follow a centralized learning paradigm, which can raise privacy concerns when human-related videos are exposed.
We introduce a novel Federated Skeleton-based Action Recognition (FSAR) paradigm, which enables the construction of a globally generalized model without accessing local sensitive data.
arXiv Detail & Related papers (2023-06-19T16:18:14Z) - FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated Learning is designed to address privacy concerns in learning models.
This new distributed paradigm safeguards data privacy but changes the attack surface, since the server cannot access local datasets.
arXiv Detail & Related papers (2022-12-05T01:52:32Z) - CC-Cert: A Probabilistic Approach to Certify General Robustness of
Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z) - Curse or Redemption? How Data Heterogeneity Affects the Robustness of
Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features in federated learning but often overlooked in the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)