FairGU: Fairness-aware Graph Unlearning in Social Networks
- URL: http://arxiv.org/abs/2601.09469v2
- Date: Sun, 18 Jan 2026 10:03:48 GMT
- Title: FairGU: Fairness-aware Graph Unlearning in Social Networks
- Authors: Renqiang Luo, Yongshuai Yang, Huafei Huang, Qing Qing, Mingliang Hou, Ziqi Xu, Yi Yu, Jingjing Zhou, Feng Xia
- Abstract summary: We introduce FairGU, a fairness-aware graph unlearning framework. FairGU integrates a dedicated fairness-aware module with effective data protection strategies. We demonstrate that FairGU consistently outperforms state-of-the-art graph unlearning methods.
- Score: 17.116462601803544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph unlearning has emerged as a critical mechanism for supporting sustainable and privacy-preserving social networks, enabling models to remove the influence of deleted nodes and thereby better safeguard user information. However, we observe that existing graph unlearning techniques insufficiently protect sensitive attributes, often leading to degraded algorithmic fairness compared with traditional graph learning methods. To address this gap, we introduce FairGU, a fairness-aware graph unlearning framework designed to preserve both utility and fairness during the unlearning process. FairGU integrates a dedicated fairness-aware module with effective data protection strategies, ensuring that sensitive attributes are neither inadvertently amplified nor structurally exposed when nodes are removed. Through extensive experiments on multiple real-world datasets, we demonstrate that FairGU consistently outperforms state-of-the-art graph unlearning methods and fairness-enhanced graph learning baselines in terms of both accuracy and fairness metrics. Our findings highlight a previously overlooked risk in current unlearning practices and establish FairGU as a robust and equitable solution for the next generation of socially sustainable networked systems. The codes are available at https://github.com/LuoRenqiang/FairGU.
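The abstract reports gains on "fairness metrics" without naming them; two standard choices in fair graph learning are statistical parity difference and equal opportunity difference. The sketch below is illustrative only (function names are assumptions, not taken from the paper) and shows how both gaps are computed for binary node classification with a binary sensitive attribute:

```python
import numpy as np

def statistical_parity_diff(y_pred, sensitive):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)| over all nodes."""
    y_pred, s = np.asarray(y_pred), np.asarray(sensitive)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equal_opportunity_diff(y_pred, y_true, sensitive):
    """|P(yhat=1 | y=1, s=0) - P(yhat=1 | y=1, s=1)|: the TPR gap between groups."""
    y_pred, y_true, s = (np.asarray(a) for a in (y_pred, y_true, sensitive))
    pos = y_true == 1
    return abs(y_pred[pos & (s == 0)].mean() - y_pred[pos & (s == 1)].mean())
```

Both metrics are 0 for a perfectly group-fair classifier; a fairness-aware unlearning method aims to keep them low after node deletion.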
Related papers
- Fairness-Aware Graph Representation Learning with Limited Demographic Information [12.550140478205842]
We introduce a novel fair graph learning framework that mitigates bias under limited demographic information. Specifically, we propose a mechanism guided by partial demographic data to generate proxies for demographic information. We also develop an adaptive confidence strategy that dynamically adjusts each node's contribution to fairness and utility.
arXiv Detail & Related papers (2025-11-17T16:14:28Z)
- Enabling Group Fairness in Graph Unlearning via Bi-level Debiasing [11.879507789144062]
Graph unlearning is a crucial approach for protecting user privacy by erasing the influence of user data on trained graph models. Recent developments in graph unlearning methods have primarily focused on maintaining model prediction performance while removing user information. We propose a fair graph unlearning method, FGU, to ensure fairness while maintaining privacy and accuracy.
arXiv Detail & Related papers (2025-05-14T18:04:02Z)
- FROG: Fair Removal on Graphs [31.295786898354837]
We propose a novel framework that jointly optimizes both the graph structure and the model to achieve fair unlearning. Our method rewires the graph by removing redundant edges that hinder forgetting while preserving fairness through targeted edge augmentation. Experiments on real-world datasets show that our approach achieves more effective and fair unlearning than existing baselines.
arXiv Detail & Related papers (2025-03-23T20:39:53Z)
- MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage [1.5438758943381854]
We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification. Our results show that MAPPING achieves better trade-offs among utility, fairness, and the privacy risk of sensitive-information leakage.
arXiv Detail & Related papers (2024-01-23T14:59:46Z)
- GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces a pioneering approach called GraphGuard, to tackle these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z)
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in the deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
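The influence-function idea behind such unlearning methods can be illustrated on a much simpler model than a GNN. The sketch below is a hypothetical ridge-regression analogue (not the paper's implementation): it applies the standard one-step update theta + H^{-1} grad(z) / n, where H is the Hessian of the training objective and grad(z) is the loss gradient at the deleted sample, to approximate retraining without that sample:

```python
import numpy as np

# Ridge regression: theta minimizes (1/n) * sum_i (x_i^T theta - y_i)^2 + lam * ||theta||^2
rng = np.random.default_rng(0)
n, d, lam = 50, 3, 0.1
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.5 * rng.normal(size=n)

def fit(Xm, ym):
    """Closed-form ridge solution for the objective above."""
    m = len(ym)
    return np.linalg.solve(Xm.T @ Xm / m + lam * np.eye(d), Xm.T @ ym / m)

theta = fit(X, y)
H = 2 * (X.T @ X / n + lam * np.eye(d))        # Hessian of the full objective
g = 2 * (X[0] @ theta - y[0]) * X[0]           # loss gradient at the deleted sample
theta_unlearned = theta + np.linalg.solve(H, g) / n   # one-step influence update
theta_retrained = fit(X[1:], y[1:])            # ground truth: retrain from scratch
```

The one-step estimate lands much closer to the retrained parameters than the original ones, which is the efficiency argument for influence-based unlearning.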
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
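A projected gradient step over discrete edges, as used in attacks like GraphMI, typically relaxes the adjacency matrix to continuous values and projects back after each update. The snippet below is a hypothetical minimal sketch (the [0, 1] box relaxation and symmetrization are common choices, not details taken from the paper):

```python
import numpy as np

def project(A):
    """Project a relaxed adjacency matrix back onto the feasible set."""
    A = np.clip(A, 0.0, 1.0)   # box constraint: edge weights stay in [0, 1]
    return (A + A.T) / 2       # keep the adjacency symmetric (undirected graph)

def pgd_step(A, grad, lr=0.1):
    """One projected-gradient-descent step on the relaxed adjacency."""
    return project(A - lr * grad)
```

After optimization, the continuous entries are usually thresholded or sampled to recover discrete edges.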
This list is automatically generated from the titles and abstracts of the papers in this site.