FROG: Fair Removal on Graphs
- URL: http://arxiv.org/abs/2503.18197v2
- Date: Thu, 28 Aug 2025 20:13:37 GMT
- Title: FROG: Fair Removal on Graphs
- Authors: Ziheng Chen, Jiali Cheng, Hadi Amiri, Kaushiki Nag, Lu Lin, Xiangguo Sun, Gabriele Tolomei
- Abstract summary: We propose a novel framework that jointly optimizes both the graph structure and the model to achieve fair unlearning. Our method rewires the graph by removing redundant edges that hinder forgetting while preserving fairness through targeted edge augmentation. Experiments on real-world datasets show that our approach achieves more effective and fair unlearning than existing baselines.
- Score: 31.295786898354837
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With growing emphasis on privacy regulations, machine unlearning has become increasingly critical in real-world applications such as social networks and recommender systems, many of which are naturally represented as graphs. However, existing graph unlearning methods often modify nodes or edges indiscriminately, overlooking their impact on fairness. For instance, forgetting links between users of different genders may inadvertently exacerbate group disparities. To address this issue, we propose a novel framework that jointly optimizes both the graph structure and the model to achieve fair unlearning. Our method rewires the graph by removing redundant edges that hinder forgetting while preserving fairness through targeted edge augmentation. We further introduce a worst-case evaluation mechanism to assess robustness under challenging scenarios. Experiments on real-world datasets show that our approach achieves more effective and fair unlearning than existing baselines.
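The abstract describes the mechanism only at a high level: a joint optimization over the graph structure (edge removal plus targeted edge augmentation) and the model. Below is a minimal, hedged PyTorch sketch of that general idea, assuming a dense one-layer GCN, a learnable soft edge mask, a gradient-ascent forgetting term, and a demographic-parity gap as the fairness term; all of these choices are illustrative assumptions, not the authors' actual FROG objective.

```python
# Illustrative sketch only: a learnable dense edge mask is optimized jointly
# with a one-layer GCN so that a forget set is unlearned (its loss is pushed
# up), retained nodes stay well-classified, and a demographic-parity-style
# gap stays small. Losses, initialization, and hyperparameters are
# assumptions, not the paper's actual method.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def propagate(adj, x, w):
    # One dense GCN-style layer with row-normalized adjacency.
    deg = adj.sum(1, keepdim=True).clamp(min=1.0)
    return (adj / deg) @ x @ w

def fair_unlearn_step(x, y, forget, sens, w, edge_logits, opt,
                      lam_forget=1.0, lam_fair=0.5):
    opt.zero_grad()
    # Soft rewiring: the mask can both drop existing edges and add new ones.
    soft_adj = torch.sigmoid(edge_logits)
    logits = propagate(soft_adj, x, w)
    retain = ~forget
    utility = F.cross_entropy(logits[retain], y[retain])
    # Gradient ascent on the forget nodes, i.e., actively un-fit them.
    forgetting = -F.cross_entropy(logits[forget], y[forget])
    # Demographic-parity-style gap between the two sensitive groups.
    p = logits.softmax(1)[:, 1]
    fair_gap = (p[sens == 0].mean() - p[sens == 1].mean()).abs()
    loss = utility + lam_forget * forgetting + lam_fair * fair_gap
    loss.backward()
    opt.step()
    return loss.item()

# Toy data: 6 nodes, 4 features, binary labels and sensitive attribute.
n, d = 6, 4
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.T) > 0).float()           # symmetrize
x = torch.randn(n, d)
y = torch.tensor([0, 1, 0, 1, 0, 1])
sens = torch.tensor([0, 0, 0, 1, 1, 1])
forget = torch.zeros(n, dtype=torch.bool)
forget[0] = True                            # node whose links we unlearn
w = torch.nn.Parameter(0.1 * torch.randn(d, 2))
edge_logits = torch.nn.Parameter(4.0 * (2 * adj - 1))  # init near observed graph
opt = torch.optim.Adam([w, edge_logits], lr=0.05)
for _ in range(100):
    fair_unlearn_step(x, y, forget, sens, w, edge_logits, opt)
```

In this sketch the single learnable mask plays both roles named in the abstract: pushing sigmoid(edge_logits) toward 0 on an observed edge removes it, while pushing it toward 1 on a non-edge augments the graph.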
Related papers
- FairGU: Fairness-aware Graph Unlearning in Social Networks [17.116462601803544]
We introduce FairGU, a fairness-aware graph unlearning framework.
FairGU integrates a dedicated fairness-aware module with effective data protection strategies.
We demonstrate that FairGU consistently outperforms state-of-the-art graph unlearning methods.
arXiv Detail & Related papers (2026-01-14T13:21:39Z)
- Unlearning Inversion Attacks for Graph Neural Networks [37.7189834578513]
We introduce the graph unlearning inversion attack: given only black-box access to an unlearned GNN and partial graph knowledge, can an adversary reconstruct the removed edges?
We identify two key challenges, varying probability-similarity thresholds for unlearned versus retained edges and the difficulty of locating unlearned edge endpoints, and address them with TrendAttack.
Experiments on four real-world datasets demonstrate that TrendAttack significantly outperforms state-of-the-art GNN membership inference baselines, exposing a critical privacy vulnerability in current graph unlearning methods.
arXiv Detail & Related papers (2025-06-01T03:23:04Z)
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Erase then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning [17.85404473268992]
Graph unlearning aims to eliminate the influence of nodes, edges, or attributes from a trained Graph Neural Network (GNN).
Existing graph unlearning techniques often necessitate additional training on the remaining data, leading to significant computational costs.
We propose a two-stage training-free approach, Erase then Rectify (ETR), designed for efficient and scalable graph unlearning.
arXiv Detail & Related papers (2024-09-25T07:20:59Z)
- ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks [53.41164429486268]
Graph Neural Networks (GNNs) have exhibited a powerful ability to gather graph-structured information from neighboring nodes.
The performance of GNNs is limited by poor generalization and fragile robustness caused by noisy and redundant graph data.
We propose a novel adversarial edge-dropping method (ADEdgeDrop) that leverages an adversarial edge predictor to guide the removal of edges.
arXiv Detail & Related papers (2024-03-14T08:31:39Z)
- MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage [1.5438758943381854]
We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification.
Our results show that MAPPING achieves better trade-offs among utility, fairness, and the privacy risks of sensitive information leakage.
arXiv Detail & Related papers (2024-01-23T14:59:46Z)
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z)
- CONVERT: Contrastive Graph Clustering with Reliable Augmentation [110.46658439733106]
We propose a novel CONtrastiVe Graph ClustEring network with Reliable AugmenTation (CONVERT).
In our method, the data augmentations are processed by the proposed reversible perturb-recover network.
To further guarantee the reliability of semantics, a novel semantic loss is presented to constrain the network.
arXiv Detail & Related papers (2023-08-17T13:07:09Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in the deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- FairGen: Towards Fair Graph Generation [76.34239875010381]
We propose a fairness-aware graph generative model named FairGen.
Our model jointly trains a label-informed graph generation module and a fair representation learning module.
Experimental results on seven real-world data sets, including web-based graphs, demonstrate that FairGen obtains performance on par with state-of-the-art graph generative models.
arXiv Detail & Related papers (2023-03-30T23:30:42Z)
- Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks [9.362130313618797]
Link prediction algorithms tend to disfavor the links between individuals in specific demographic groups.
This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy.
One novelty of DEA (Drop Edges and Adapt) is that we can use a discrete yet learnable adjacency matrix during fine-tuning.
arXiv Detail & Related papers (2023-02-22T16:28:08Z)
- Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing for local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally lead to stochastic encoders.
Our method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Fairness-Aware Node Representation Learning [9.850791193881651]
This study addresses fairness issues in graph contrastive learning with fairness-aware graph augmentation designs.
Different fairness notions on graphs are introduced, which serve as guidelines for the proposed graph augmentations.
Experimental results on real social networks are presented to demonstrate that the proposed augmentations can enhance fairness in terms of statistical parity and equal opportunity.
arXiv Detail & Related papers (2021-06-09T21:12:14Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm can successfully improve the fairness of all models at the cost of only a small or negligible drop in accuracy; a toy sketch of biased edge dropout follows this list.
arXiv Detail & Related papers (2021-04-29T08:59:36Z)
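Several entries above (FairDrop, ADEdgeDrop, DEA) revolve around fairness-aware edge dropping. As a concrete illustration of the biased edge dropout idea summarized in the FairDrop entry, here is a hedged toy sketch in Python; the interface and the drop probabilities are assumptions for illustration, not the paper's exact algorithm.

```python
# Toy sketch of biased edge dropout in the spirit of FairDrop: edges whose
# endpoints share a sensitive attribute (homophilic edges) are dropped with
# a higher probability than edges that bridge groups. Probabilities and
# interface are illustrative assumptions.
import random

def biased_edge_dropout(edges, sensitive, p_homo=0.6, p_hetero=0.2, seed=0):
    """Keep each edge with probability 1 - p_drop, where homophilic edges
    (same sensitive attribute at both endpoints) get the higher drop rate."""
    rng = random.Random(seed)
    kept = []
    for u, v in edges:
        p_drop = p_homo if sensitive[u] == sensitive[v] else p_hetero
        if rng.random() >= p_drop:
            kept.append((u, v))
    return kept

# Toy usage: nodes 0-3 with a binary sensitive attribute.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
sensitive = [0, 0, 1, 1]
print(biased_edge_dropout(edges, sensitive))
```

Dropping within-group edges more aggressively than bridging edges limits how strongly message passing can reinforce group separation, which is the intuition the FairDrop summary describes.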
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.