Deceptive Fairness Attacks on Graphs via Meta Learning
- URL: http://arxiv.org/abs/2310.15653v1
- Date: Tue, 24 Oct 2023 09:10:14 GMT
- Title: Deceptive Fairness Attacks on Graphs via Meta Learning
- Authors: Jian Kang, Yinglong Xia, Ross Maciejewski, Jiebo Luo, Hanghang Tong
- Abstract summary: We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
- Score: 102.53029537886314
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study deceptive fairness attacks on graphs to answer the following
question: How can we achieve poisoning attacks on a graph learning model to
exacerbate the bias deceptively? We answer this question via a bi-level
optimization problem and propose a meta learning-based framework named FATE.
FATE is broadly applicable with respect to various fairness definitions and
graph learning models, as well as arbitrary choices of manipulation operations.
We further instantiate FATE to attack statistical parity and individual
fairness on graph neural networks. We conduct extensive experimental
evaluations on real-world datasets in the task of semi-supervised node
classification. The experimental results demonstrate that FATE could amplify
the bias of graph neural networks with or without fairness consideration while
maintaining the utility on the downstream task. We hope this paper provides
insights into the adversarial robustness of fair graph learning and can shed
light on designing robust and fair graph learning in future studies.
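As a rough illustration of the bi-level structure described above (not the authors' implementation), the sketch below uses a toy one-hop linear model as a stand-in for a GNN, statistical parity gap as the bias objective, and a greedy finite-difference search over edge flips as a stand-in for the meta-gradient; all function names and the attack loop are assumptions for illustration only.

```python
import numpy as np

def train_inner(A, X, y, lr=0.1, steps=100):
    """Inner level: fit a linear model on one-hop aggregated
    features (A @ X), a toy stand-in for a GNN."""
    H = A @ X
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-H @ w))
        w -= lr * H.T @ (p - y) / len(y)
    return w

def statistical_parity_gap(A, X, w, sensitive):
    """Bias metric: |P(yhat=1 | s=0) - P(yhat=1 | s=1)|."""
    p = 1 / (1 + np.exp(-(A @ X) @ w))
    return abs(p[sensitive == 0].mean() - p[sensitive == 1].mean())

def fate_style_attack(A, X, y, sensitive, budget=2):
    """Outer level (sketch): greedily toggle the edge whose flip most
    increases the bias of the re-trained inner model, a brute-force
    stand-in for the meta-gradient in the actual framework."""
    A = A.copy()
    for _ in range(budget):
        base = statistical_parity_gap(A, X, train_inner(A, X, y), sensitive)
        best, best_gain = None, 0.0
        for i in range(len(A)):
            for j in range(i + 1, len(A)):
                A2 = A.copy()
                A2[i, j] = A2[j, i] = 1 - A2[i, j]  # toggle edge (i, j)
                gain = statistical_parity_gap(
                    A2, X, train_inner(A2, X, y), sensitive) - base
                if gain > best_gain:
                    best, best_gain = (i, j), gain
        if best is None:
            break  # no single flip increases the bias
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]
    return A
```

By construction the greedy loop only accepts bias-increasing flips, which mirrors the deceptive goal of amplifying bias while the (unchanged) training labels keep downstream utility largely intact.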
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- Graph Fairness Learning under Distribution Shifts [33.9878682279549]
Graph neural networks (GNNs) have achieved remarkable performance on graph-structured data.
GNNs may inherit prejudice from the training data and make discriminatory predictions based on sensitive attributes, such as gender and race.
We propose a graph generator to produce numerous graphs that exhibit significant bias under different distribution distances.
arXiv Detail & Related papers (2024-01-30T06:51:24Z)
- Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify the bias in training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
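The parameter-change estimate mentioned above follows the classical influence-function form $\theta' \approx \theta + H^{-1} \nabla_\theta \ell_{\text{removed}}$. A minimal sketch for plain logistic regression (not the graph-aware GIF variant; the function name and objective scaling are assumptions) might look like:

```python
import numpy as np

def influence_unlearn(X, y, theta, remove_idx, lam=1e-2):
    """One-shot influence-function update (sketch): approximate the
    parameters after deleting samples `remove_idx` as
    theta' ~= theta + H^{-1} g, where H is the Hessian of the
    regularized mean loss and g is the removed samples' gradient."""
    p = 1 / (1 + np.exp(-X @ theta))
    # Hessian of the L2-regularized mean logistic loss at theta
    W = p * (1 - p)
    H = (X * W[:, None]).T @ X / len(y) + lam * np.eye(X.shape[1])
    # gradient contribution of the removed samples (same 1/n scaling)
    g = X[remove_idx].T @ (p[remove_idx] - y[remove_idx]) / len(y)
    return theta + np.linalg.solve(H, g)
```

The appeal is that a single Hessian solve replaces full retraining, which is where the unlearning-efficiency gains come from.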
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation [18.671374133506838]
We propose a novel unsupervised gradient-based adversarial attack that does not rely on labels for graph contrastive learning.
Our attack outperforms unsupervised baseline attacks and has comparable performance with supervised attacks in multiple downstream tasks.
arXiv Detail & Related papers (2022-01-20T03:32:21Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge [126.32842151537217]
Existing works usually perform the attack in a white-box fashion.
We aim to attack various kinds of graph embedding models in a black-box fashion.
We prove that GF-Attack can perform an effective attack without knowing the number of layers of graph embedding models.
arXiv Detail & Related papers (2021-05-26T09:18:58Z)
- Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counter-act homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We show that the proposed algorithm improves the fairness of all models at the cost of only a small or negligible drop in accuracy.
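The core idea of biased edge dropout can be sketched in a few lines: drop edges connecting nodes with the same sensitive attribute more often than edges crossing groups, thereby counteracting homophily with respect to that attribute. This is a simplified illustration, not the paper's exact algorithm; the function name and the `delta` bias parameter are assumptions.

```python
import numpy as np

def fair_drop(edges, sensitive, delta=0.25, rng=None):
    """Biased edge dropout (simplified sketch): drop homophilic edges
    (same sensitive attribute at both endpoints) with probability
    0.5 + delta and heterophilic edges with probability 0.5 - delta."""
    rng = rng if rng is not None else np.random.default_rng()
    kept = []
    for u, v in edges:
        p_drop = 0.5 + delta if sensitive[u] == sensitive[v] else 0.5 - delta
        if rng.random() >= p_drop:
            kept.append((u, v))
    return kept
```

Because the dropout is applied per training epoch, the model sees a sequence of graphs whose group-connectivity statistics are rebalanced on average, which is what makes the scheme easy to plug into existing pipelines.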
arXiv Detail & Related papers (2021-04-29T08:59:36Z)
- A Survey of Adversarial Learning on Graphs [59.21341359399431]
We investigate and summarize the existing works on graph adversarial learning tasks.
Specifically, we survey and unify the existing works w.r.t. attack and defense in graph analysis tasks.
We emphasize the importance of related evaluation metrics and investigate and summarize them comprehensively.
arXiv Detail & Related papers (2020-03-10T12:48:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.