Learning-based Privacy-Preserving Graph Publishing Against Sensitive Link Inference Attacks
- URL: http://arxiv.org/abs/2507.21139v1
- Date: Wed, 23 Jul 2025 04:19:29 GMT
- Title: Learning-based Privacy-Preserving Graph Publishing Against Sensitive Link Inference Attacks
- Authors: Yucheng Wu, Yuncong Yang, Xiao Han, Leye Wang, Junjie Wu
- Abstract summary: We propose the first privacy-preserving graph structure learning framework against sensitive link inference attacks. The framework, named PPGSL, can automatically learn a graph with the optimal privacy--utility trade-off. PPGSL achieves state-of-the-art privacy--utility trade-off performance and effectively thwarts various sensitive link inference attacks.
- Score: 14.766917415961348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Publishing graph data is widely desired to enable a variety of structural analyses and downstream tasks. However, it also poses a risk of severe privacy leakage, as attackers may leverage the released graph data to launch attacks and precisely infer private information such as the existence of hidden sensitive links in the graph. Prior studies on privacy-preserving graph data publishing relied on heuristic graph modification strategies, making it difficult to determine the graph with the optimal privacy--utility trade-off for publishing. In contrast, we propose the first privacy-preserving graph structure learning framework against sensitive link inference attacks, named PPGSL, which can automatically learn a graph with the optimal privacy--utility trade-off. PPGSL operates by first simulating a powerful surrogate attacker that conducts sensitive link attacks on a given graph. It then trains a parameterized graph to defend against the simulated adversarial attacks while maintaining the favorable utility of the original graph. To learn the parameters of both parts of PPGSL, we introduce a secure iterative training protocol, which enhances privacy preservation and ensures stable convergence during training, as supported by theoretical proof. Additionally, we incorporate multiple acceleration techniques to improve the efficiency of PPGSL on large-scale graphs. The experimental results confirm that PPGSL achieves state-of-the-art privacy--utility trade-off performance and effectively thwarts various sensitive link inference attacks.
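The alternating attacker--defender structure described in the abstract can be illustrated with a minimal sketch. All class and tensor names below are hypothetical, and the surrogate attacker, the parameterized graph, and the loss weighting are simplifications; the actual PPGSL additionally relies on a secure iterative training protocol and acceleration techniques that are not reproduced here.

```python
# Minimal sketch of the alternating privacy--utility optimization (hypothetical
# names; the real PPGSL uses a secure iterative protocol and accelerations).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_nodes, n_feats, n_classes = 100, 16, 4
x = torch.randn(n_nodes, n_feats)                    # node features
y = torch.randint(0, n_classes, (n_nodes,))          # utility (node classification) labels
sens_pairs = torch.randint(0, n_nodes, (50, 2))      # node pairs with sensitive links
sens_labels = torch.randint(0, 2, (50,)).float()     # existence of each sensitive link

# Parameterized graph to be published: learnable edge logits, relaxed adjacency.
edge_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))

def propagate(adj, h):
    # One step of row-normalized feature propagation (a stand-in for a GNN).
    return (adj / adj.sum(1, keepdim=True).clamp(min=1e-6)) @ h

utility_head = nn.Linear(n_feats, n_classes)         # downstream task model
attacker = nn.Sequential(nn.Linear(2 * n_feats, 32), nn.ReLU(), nn.Linear(32, 1))

opt_graph = torch.optim.Adam([edge_logits, *utility_head.parameters()], lr=1e-2)
opt_attack = torch.optim.Adam(attacker.parameters(), lr=1e-2)

def attacker_loss(adj):
    # Surrogate attacker infers sensitive links from the published graph.
    h = propagate(adj, x)
    pair_emb = torch.cat([h[sens_pairs[:, 0]], h[sens_pairs[:, 1]]], dim=1)
    return F.binary_cross_entropy_with_logits(attacker(pair_emb).squeeze(-1), sens_labels)

for step in range(200):
    adj = torch.sigmoid(edge_logits)

    # (1) Fit the surrogate attacker against the current (detached) graph.
    opt_attack.zero_grad()
    attacker_loss(adj.detach()).backward()
    opt_attack.step()

    # (2) Update the graph: preserve utility while degrading the attacker.
    opt_graph.zero_grad()
    utility = F.cross_entropy(utility_head(propagate(adj, x)), y)
    privacy = -attacker_loss(adj)                    # drive the attacker toward chance
    (utility + privacy).backward()
    opt_graph.step()
```

A relaxed adjacency like this would typically still need to be discretized before release, for example by thresholding or sampling edges; the paper's own release procedure is not reproduced here.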
Related papers
- Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols [46.94619400437805]
This work introduces the first data poisoning attack targeting locally private graph learning protocols. The attacker injects fake users into the protocol, manipulates these fake users to establish links with genuine users, and sends carefully crafted data to the server. The effectiveness of the attack is demonstrated both theoretically and empirically.
arXiv Detail & Related papers (2025-06-11T14:46:11Z) - Cluster-Aware Attacks on Graph Watermarks [50.19105800063768]
We introduce a cluster-aware threat model in which adversaries apply community-guided modifications to evade detection. Our results show that cluster-aware attacks can reduce attribution accuracy by up to 80% more than random baselines. We propose a lightweight embedding enhancement that distributes watermark nodes across graph communities.
arXiv Detail & Related papers (2025-04-24T22:49:28Z) - GraphTheft: Quantifying Privacy Risks in Graph Prompt Learning [1.2255617580795168]
Graph Prompt Learning (GPL) represents an innovative approach in graph representation learning, enabling task-specific adaptations by finetuning prompts without altering the underlying pre-trained model.
Despite its growing prominence, the privacy risks inherent in GPL remain unexplored.
We provide the first evaluation of privacy leakage in GPL across three attacker capabilities: black-box attacks when GPL is offered as a service, and scenarios where node embeddings and prompt representations are accessible to third parties.
arXiv Detail & Related papers (2024-11-22T04:10:49Z) - GCON: Differentially Private Graph Convolutional Network via Objective Perturbation [27.279817693305183]
Graph Convolutional Networks (GCNs) are a popular machine learning model with a wide range of applications in graph analytics. GCNs trained without privacy protection measures may memorize private interpersonal relationships in the training data. This poses a substantial risk of compromising privacy through link attacks, potentially leading to violations of privacy regulations. We propose GCON, a novel and effective solution for training GCNs with edge differential privacy.
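"Objective perturbation" generally refers to the differential-privacy technique of adding noise to the training objective itself rather than to gradients or outputs. The sketch below is a generic instance on a linear model with illustrative noise calibration, not GCON's actual mechanism or its edge-DP analysis for GCNs.

```python
# Generic objective-perturbation sketch (illustrative calibration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

n, d, c = 100, 16, 4
x = torch.randn(n, d)
y = torch.randint(0, c, (n,))
model = nn.Linear(d, c)

epsilon, sensitivity = 1.0, 1.0                       # illustrative constants
n_params = sum(p.numel() for p in model.parameters())
direction = torch.randn(n_params)
direction = direction / direction.norm()              # random unit direction
magnitude = torch.distributions.Gamma(float(n_params), epsilon / sensitivity).sample()
b = direction * magnitude                              # noise vector drawn once

opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-2)
for _ in range(200):
    opt.zero_grad()
    flat = torch.cat([p.reshape(-1) for p in model.parameters()])
    loss = F.cross_entropy(model(x), y) + (b * flat).sum() / n
    loss.backward()                                    # noise enters via the objective
    opt.step()
```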
arXiv Detail & Related papers (2024-07-06T09:59:56Z) - Privacy-Preserving Graph Embedding based on Local Differential Privacy [26.164722283887333]
We introduce a novel privacy-preserving graph embedding framework, named PrivGE, to protect node data privacy.
Specifically, we propose an LDP mechanism to obfuscate node data and utilize personalized PageRank as the proximity measure to learn node representations.
Experiments on several real-world graph datasets demonstrate that PrivGE achieves an optimal balance between privacy and utility.
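As a rough illustration of the two ingredients named above, the sketch below perturbs node data with a simple per-user Laplace mechanism and uses personalized PageRank over the public structure to weight the noisy data. The specific LDP mechanism, its sensitivity, and the embedding construction here are assumptions for illustration, not PrivGE's actual design.

```python
# Toy combination of local perturbation and personalized PageRank proximity.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 8
features = rng.random((n, d))                          # private node data in [0, 1]
adj = (rng.random((n, n)) < 0.05).astype(float)
adj = np.maximum(adj, adj.T)                           # undirected toy graph

epsilon = 1.0                                          # per-user LDP budget
noisy = features + rng.laplace(scale=d / epsilon, size=features.shape)  # local Laplace noise

def personalized_pagerank(adj, seed, alpha=0.85, iters=50):
    deg = adj.sum(1, keepdims=True)
    P = np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)
    e = np.zeros(adj.shape[0]); e[seed] = 1.0
    pi = e.copy()
    for _ in range(iters):
        pi = alpha * P.T @ pi + (1 - alpha) * e        # power iteration
    return pi

ppr = personalized_pagerank(adj, seed=0)               # proximity of every node to node 0
embedding_0 = ppr @ noisy                              # proximity-weighted aggregation
```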
arXiv Detail & Related papers (2023-10-17T08:06:08Z) - Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term; a stand-in sketch of such a penalty follows below.
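The sketch below shows how an independence-style penalty can be attached to a graph auto-encoder objective. The cross-covariance term is a stand-in regularizer chosen for illustration, not necessarily the exact penalty used by PVGAE, and the reconstruction and KL terms are left as placeholders.

```python
# Stand-in independence penalty between two blocks of latent codes.
import torch

def cross_covariance_penalty(z_a, z_b):
    # Penalize linear dependence between the two latent blocks.
    za = z_a - z_a.mean(0, keepdim=True)
    zb = z_b - z_b.mean(0, keepdim=True)
    cov = za.T @ zb / (za.shape[0] - 1)
    return (cov ** 2).sum()

z = torch.randn(100, 32, requires_grad=True)           # latent codes from a VGAE encoder
recon_loss, kl_loss = torch.tensor(0.0), torch.tensor(0.0)  # placeholder VGAE terms
total = recon_loss + kl_loss + 0.1 * cross_covariance_penalty(z[:, :16], z[:, 16:])
total.backward()
```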
arXiv Detail & Related papers (2023-08-16T13:32:43Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead; a toy illustration of this communication pattern follows below.
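The sketch below illustrates the communication pattern described in the summary under assumed details: each device sends only a graph-level, obfuscated vector that mixes its own pooled representation with the partner's previous message, never raw node-level embeddings. The pooling, mixing weights, and noise are illustrative choices, not PPGM's actual obfuscation scheme.

```python
# Toy obfuscated message between two devices (assumed mixing scheme).
import torch

def obfuscated_message(node_emb, partner_msg, noise_scale=0.1):
    pooled = node_emb.mean(dim=0)                      # graph-level summary only
    mixed = 0.5 * pooled + 0.5 * partner_msg           # information from both vectors
    return mixed + noise_scale * torch.randn_like(mixed)

emb_a = torch.randn(30, 64)                            # device A's node embeddings
msg_b = torch.randn(64)                                # last message received from B
send_to_b = obfuscated_message(emb_a, msg_b)           # what actually leaves device A
```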
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
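The projected-gradient idea described above can be sketched as follows: optimize a relaxed adjacency so that a victim model's outputs match observed labels, projecting back onto [0, 1] and encouraging sparsity and feature smoothness. The victim model, loss terms, and weights below are simplified stand-ins, not GraphMI's actual modules.

```python
# Simplified projected-gradient sketch for edge recovery (attacker's view).
import torch
import torch.nn.functional as F

n, d, c = 50, 8, 3
x = torch.randn(n, d)                                  # known node attributes
y = torch.randint(0, c, (n,))                          # observed labels / predictions
victim = torch.nn.Linear(d, c)                         # stand-in for the target GNN

a = torch.full((n, n), 0.5, requires_grad=True)        # relaxed adjacency in [0, 1]
opt = torch.optim.Adam([a], lr=0.05)

for _ in range(100):
    opt.zero_grad()
    adj = (a + a.T) / 2                                # keep the recovered graph symmetric
    h = (adj / adj.sum(1, keepdim=True).clamp(min=1e-6)) @ x
    fit = F.cross_entropy(victim(h), y)                # match the victim model's behavior
    sparsity = adj.mean()                              # encourage few edges
    diffs = ((x.unsqueeze(0) - x.unsqueeze(1)) ** 2).sum(-1)
    smoothness = (adj * diffs).mean()                  # connected nodes should be similar
    (fit + 0.01 * sparsity + 0.01 * smoothness).backward()
    opt.step()
    with torch.no_grad():
        a.clamp_(0.0, 1.0)                             # projection step onto [0, 1]
```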
arXiv Detail & Related papers (2021-06-05T07:07:52Z)