Privacy Auditing of Multi-domain Graph Pre-trained Model under Membership Inference Attacks
- URL: http://arxiv.org/abs/2511.17989v1
- Date: Sat, 22 Nov 2025 09:04:58 GMT
- Title: Privacy Auditing of Multi-domain Graph Pre-trained Model under Membership Inference Attacks
- Authors: Jiayi Luo, Qingyun Sun, Yuecen Wei, Haonan Yuan, Xingcheng Fu, Jianxin Li,
- Abstract summary: We propose MGP-MIA, a framework for Membership Inference Attacks against Multi-domain Graph Pre-trained models. We first propose a membership signal amplification mechanism that amplifies the overfitting characteristics of target models. We then design an incremental shadow model construction mechanism that builds a reliable shadow model with limited shadow graphs.
- Score: 27.853332299363913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-domain graph pre-training has emerged as a pivotal technique in developing graph foundation models. While it greatly improves the generalization of graph neural networks, its privacy risks under membership inference attacks (MIAs), which aim to identify whether a specific instance was used in training (member), remain largely unexplored. However, effectively conducting MIAs against multi-domain graph pre-trained models is a significant challenge due to: (i) Enhanced Generalization Capability: Multi-domain pre-training reduces the overfitting characteristics commonly exploited by MIAs. (ii) Unrepresentative Shadow Datasets: Diverse training graphs make it difficult to obtain reliable shadow graphs. (iii) Weakened Membership Signals: Embedding-based outputs offer less informative cues than logits for MIAs. To tackle these challenges, we propose MGP-MIA, a novel framework for Membership Inference Attacks against Multi-domain Graph Pre-trained models. Specifically, we first propose a membership signal amplification mechanism that amplifies the overfitting characteristics of target models via machine unlearning. We then design an incremental shadow model construction mechanism that builds a reliable shadow model with limited shadow graphs via incremental learning. Finally, we introduce a similarity-based inference mechanism that identifies members based on their similarity to positive and negative samples. Extensive experiments demonstrate the effectiveness of our proposed MGP-MIA and reveal the privacy risks of multi-domain graph pre-training.
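The abstract's final mechanism decides membership by comparing a candidate's similarity to positive (member) and negative (non-member) reference samples. The sketch below illustrates that general decision rule with cosine similarity over embeddings; the function names, the use of mean similarity, and the two reference sets are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def similarity_mia(query_emb, member_refs, nonmember_refs):
    """Hypothetical similarity-based inference: flag the query as a member
    if its average similarity to known member embeddings exceeds its
    average similarity to known non-member embeddings."""
    member_score = np.mean([cosine(query_emb, r) for r in member_refs])
    nonmember_score = np.mean([cosine(query_emb, r) for r in nonmember_refs])
    return "member" if member_score > nonmember_score else "non-member"
```

In the paper's setting the reference embeddings would come from the shadow model; here they are simply given as inputs.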
Related papers
- A Systematic Study of Model Extraction Attacks on Graph Foundation Models [32.616928898012624]
This paper presents the first systematic study of model extraction attacks (MEAs) against Graph Foundation Models (GFMs). We introduce a lightweight extraction method that trains an attacker encoder using supervised regression of graph embeddings. Experiments show that the attacker can approximate the victim model using only a tiny fraction of its original training cost, with almost no loss in accuracy.
arXiv Detail & Related papers (2025-11-14T22:43:42Z) - Neural Breadcrumbs: Membership Inference Attacks on LLMs Through Hidden State and Attention Pattern Analysis [9.529147118376464]
Membership inference attacks (MIAs) reveal whether specific data was used to train machine learning models. Our work explores how examining internal representations, rather than just their outputs, may provide additional insights into potential membership inference signals. Our findings suggest that internal model behaviors can reveal aspects of training data exposure even when output-based signals appear protected.
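The output-based signals this paper contrasts itself with are typically simple loss- or confidence-threshold attacks: members tend to have lower loss than non-members. A minimal baseline of that kind, with a threshold calibrated on shadow-model losses (the midpoint rule and function names are assumptions for illustration, not the paper's method):

```python
import numpy as np

def calibrate_threshold(shadow_member_losses, shadow_nonmember_losses):
    # A simple stand-in for threshold tuning on a shadow model:
    # the midpoint between the mean member and non-member loss.
    return 0.5 * (np.mean(shadow_member_losses) + np.mean(shadow_nonmember_losses))

def loss_threshold_mia(losses, threshold):
    # Classic output-based MIA baseline: flag low-loss examples as members.
    return np.asarray(losses) < threshold
```

Hidden-state attacks like the one summarized above aim to recover membership signal even when such output-based thresholds fail.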
arXiv Detail & Related papers (2025-09-05T19:05:49Z) - Aggregation-aware MLP: An Unsupervised Approach for Graph Message-passing [10.93155007218297]
AMLP is an unsupervised framework that shifts the paradigm from directly crafting aggregation functions to learning adaptive aggregation. Our approach consists of two key steps: first, we utilize a graph reconstruction that facilitates high-order grouping effects, and second, we employ a single-layer network to encode varying degrees of heterophily.
arXiv Detail & Related papers (2025-07-27T04:52:55Z) - Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
arXiv Detail & Related papers (2024-10-10T17:18:30Z) - Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIAs).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - CORE: Data Augmentation for Link Prediction via Information Bottleneck [25.044734252779975]
Link prediction (LP) is a fundamental task in graph representation learning.
We propose a novel data augmentation method, COmplete and REduce (CORE) to learn compact and predictive augmentations for LP models.
arXiv Detail & Related papers (2024-04-17T03:20:42Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z) - Adversarial Attacks on Graph Classification via Bayesian Optimisation [25.781404695921122]
We present a novel optimisation-based attack method for graph classification models.
Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied.
We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks.
arXiv Detail & Related papers (2021-11-04T13:01:20Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z) - Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.