Privacy-Aware Cyberterrorism Network Analysis using Graph Neural Networks and Federated Learning
- URL: http://arxiv.org/abs/2505.16371v1
- Date: Thu, 22 May 2025 08:26:09 GMT
- Title: Privacy-Aware Cyberterrorism Network Analysis using Graph Neural Networks and Federated Learning
- Authors: Anas Ali, Mubashar Husain, Peter Hans
- Abstract summary: Cyberterrorism poses a formidable threat to digital infrastructures. We propose a Privacy-Aware Federated Graph Neural Network framework. Privacy-preserving GNNs can support large-scale cyber threat detection without compromising on utility, privacy, or robustness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Cyberterrorism poses a formidable threat to digital infrastructures, with increasing reliance on encrypted, decentralized platforms that obscure threat actor activity. To address the challenge of analyzing such adversarial networks while preserving the privacy of distributed intelligence data, we propose a Privacy-Aware Federated Graph Neural Network (PA-FGNN) framework. PA-FGNN integrates graph attention networks, differential privacy, and homomorphic encryption into a robust federated learning pipeline tailored for cyberterrorism network analysis. Each client trains locally on sensitive graph data and exchanges encrypted, noise-perturbed model updates with a central aggregator, which performs secure aggregation and broadcasts global updates. We implement anomaly detection for flagging high-risk nodes and incorporate defenses against gradient poisoning. Experimental evaluations on simulated dark web and cyber-intelligence graphs demonstrate that PA-FGNN achieves over 91% classification accuracy, maintains resilience under 20% adversarial client behavior, and incurs less than 18% communication overhead. Our results highlight that privacy-preserving GNNs can support large-scale cyber threat detection without compromising on utility, privacy, or robustness.
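The client-side update flow the abstract describes (local training, clipping and noise perturbation for differential privacy, then server-side aggregation) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the clipping bound, noise scale, and function names are assumptions, and the homomorphic-encryption and graph-attention components are omitted, with plain averaging standing in for secure aggregation.

```python
import math
import random

CLIP_NORM = 1.0   # per-client L2 clipping bound (assumed value)
NOISE_STD = 0.8   # Gaussian noise multiplier for the DP mechanism (assumed value)

def privatize_update(update, rng):
    """Clip a client's update vector to CLIP_NORM, then add Gaussian noise."""
    norm = math.sqrt(sum(x * x for x in update)) or 1e-12
    scale = min(1.0, CLIP_NORM / norm)
    return [x * scale + rng.gauss(0.0, NOISE_STD * CLIP_NORM) for x in update]

def aggregate(updates):
    """Stand-in for secure aggregation: the server only sees the average."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

rng = random.Random(0)
# Mock per-client gradients; in PA-FGNN these would come from local GNN training.
clients = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(5)]
global_update = aggregate([privatize_update(u, rng) for u in clients])
print(len(global_update))  # one averaged value per model dimension
```

In the full framework the privatized updates would additionally be encrypted before transmission, so the aggregator operates on ciphertexts rather than the raw noisy vectors shown here.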
Related papers
- Who Owns This Sample: Cross-Client Membership Inference Attack in Federated Graph Neural Networks [15.801164432263183]
We present the first systematic study of cross-client membership inference attacks (CC-MIA) against node classification tasks of federated GNNs. Our attack targets sample-to-client attribution, a finer-grained privacy risk unique to federated settings. Our findings highlight a new privacy threat in federated graph learning: client identity leakage through structural and model-level cues.
arXiv Detail & Related papers (2025-07-26T14:32:38Z) - Cluster-Aware Attacks on Graph Watermarks [50.19105800063768]
We introduce a cluster-aware threat model in which adversaries apply community-guided modifications to evade detection. Our results show that cluster-aware attacks can reduce attribution accuracy by up to 80% more than random baselines. We propose a lightweight embedding enhancement that distributes watermark nodes across graph communities.
arXiv Detail & Related papers (2025-04-24T22:49:28Z) - CONTINUUM: Detecting APT Attacks through Spatial-Temporal Graph Neural Networks [0.9553673944187253]
Advanced Persistent Threats (APTs) represent a significant challenge in cybersecurity. Traditional Intrusion Detection Systems (IDS) often fall short in detecting these multi-stage attacks.
arXiv Detail & Related papers (2025-01-06T12:43:59Z) - Grimm: A Plug-and-Play Perturbation Rectifier for Graph Neural Networks Defending against Poisoning Attacks [53.972077392749185]
Recent studies have revealed the vulnerability of graph neural networks (GNNs) to adversarial poisoning attacks on node classification tasks. Here we introduce Grimm, the first plug-and-play defense model.
arXiv Detail & Related papers (2024-12-11T17:17:02Z) - Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph. Our framework provides fundamental insights into the role of graph structure and its connectivity on the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z) - GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models [3.0509197593879844]
This paper investigates edge privacy in contexts where adversaries possess black-box GNN model access.
We introduce a series of privacy attacks grounded on the message-passing mechanism of GNNs.
arXiv Detail & Related papers (2023-11-03T20:26:03Z) - A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address this issue, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z) - A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy [43.11374582152925]
Graph Neural Networks (GNNs) have achieved great success in modeling graph-structured data.
GNNs are vulnerable to adversarial attacks that can fool the model into making attacker-desired predictions.
In this work, we study a novel problem of developing robust and membership privacy-preserving GNNs.
arXiv Detail & Related papers (2023-06-14T16:11:00Z) - Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs). GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent. Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z) - Privacy-Preserving Decentralized Inference with Graph Neural Networks in Wireless Networks [39.99126905067949]
We analyze and enhance the privacy of decentralized inference with graph neural networks in wireless networks.
Specifically, we adopt local differential privacy as the metric, and design novel privacy-preserving signals.
We also adopt the over-the-air technique and theoretically demonstrate its advantage in privacy preservation.
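The local differential privacy guarantee mentioned in the summary above is typically achieved by having each node perturb its own data before any transmission, so no raw value ever leaves the device. The paper's actual signal design is not reproduced here; the sketch below shows the generic Laplace mechanism for an epsilon-locally-DP release of a bounded scalar, with the sensitivity and epsilon values chosen as illustrative assumptions.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize(value, sensitivity=1.0, epsilon=0.5, rng=None):
    """Release a bounded scalar under epsilon-local-DP (Laplace mechanism).

    sensitivity: the maximum change one input can cause (assumed bound).
    epsilon: the privacy budget; smaller means more noise (assumed value).
    """
    rng = rng or random.Random()
    return value + laplace_noise(sensitivity / epsilon, rng)

# Each node privatizes locally; the receiver only ever sees noisy values.
rng = random.Random(0)
noisy_reports = [privatize(0.7, rng=rng) for _ in range(10000)]
```

Because the noise is zero-mean, aggregating many such reports (as over-the-air computation does implicitly) recovers an accurate average while each individual contribution stays private.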
arXiv Detail & Related papers (2022-08-15T01:33:07Z) - NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data [10.609715843964263]
We propose a novel research task, adversarial defenses against GNN-based privacy attacks.
We present a graph perturbation-based approach, NetFense, to achieve the goal.
arXiv Detail & Related papers (2021-06-22T15:32:50Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.