EqGNN: Equalized Node Opportunity in Graphs
- URL: http://arxiv.org/abs/2108.08800v1
- Date: Thu, 19 Aug 2021 17:17:24 GMT
- Title: EqGNN: Equalized Node Opportunity in Graphs
- Authors: Uriel Singer and Kira Radinsky
- Abstract summary: Graph neural networks (GNNs) have been widely used for supervised learning tasks in graphs.
Some approaches ignore the sensitive attributes or optimize for the statistical parity fairness criterion.
We present a GNN framework that allows optimizing representations for the Equalized Odds fairness criterion.
- Score: 19.64827998759028
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph neural networks (GNNs) have been widely used for supervised learning
tasks in graphs, reaching state-of-the-art results. However, little work has been
dedicated to creating unbiased GNNs, i.e., ones whose classifications are
uncorrelated with sensitive attributes, such as race or gender. Some approaches ignore the
sensitive attributes or optimize for the statistical parity fairness criterion.
However, it has been shown that neither approach ensures fairness,
but rather cripples the utility of the prediction task. In this work, we present
a GNN framework that allows optimizing representations for the
Equalized Odds fairness criterion. The architecture is composed of three
components: (1) a GNN classifier predicting the utility class, (2) a sampler
learning the distribution of the sensitive attributes of the nodes given their
labels, which generates samples fed into (3) a discriminator that distinguishes
between true and sampled sensitive attributes using a novel "permutation loss"
function. Using these components, we train a model to neglect information
regarding the sensitive attribute only with respect to its label. To the best
of our knowledge, we are the first to optimize GNNs for the equalized odds
criterion. We evaluate our classifier on several graph datasets and sensitive
attributes and show that our algorithm reaches state-of-the-art results.
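The equalized odds criterion the abstract targets requires the prediction to be independent of the sensitive attribute conditioned on the true label, i.e., P(Ŷ=1 | S=0, Y=y) = P(Ŷ=1 | S=1, Y=y) for each label y. A minimal numpy sketch of how one could measure violations of this criterion for a binary classifier and binary sensitive attribute (the function name and this rate-comparison formulation are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, s):
    """Per-label gap in positive-prediction rate between sensitive groups.

    For y=1 this is the true-positive-rate gap; for y=0 it is the
    false-positive-rate gap. Equalized odds holds when both gaps are 0.
    Assumes both groups are present for each true label (else the mean
    over an empty mask is NaN).
    """
    gaps = {}
    for y in (0, 1):
        rates = []
        for group in (0, 1):
            mask = (y_true == y) & (s == group)
            rates.append(y_pred[mask].mean())
        gaps[y] = abs(rates[0] - rates[1])
    return gaps
```

A gap of 0 for both labels means the classifier satisfies equalized odds on that sample; EqGNN pursues this during training adversarially rather than as a post-hoc audit.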
Related papers
- Graph Classification with GNNs: Optimisation, Representation and Inductive Bias [0.6445605125467572]
We argue that such equivalence ignores the accompanying optimization issues and does not provide a holistic view of the GNN learning process.
We prove theoretically that message-passing layers tend to search for either discriminative subgraphs or a collection of discriminative nodes dispersed across the graph.
arXiv Detail & Related papers (2024-08-17T18:15:44Z)
- Classifying Nodes in Graphs without GNNs [50.311528896010785]
We propose a fully GNN-free approach for node classification, requiring no GNNs at train or test time.
Our method consists of three key components: smoothness constraints, pseudo-labeling iterations and neighborhood-label histograms.
arXiv Detail & Related papers (2024-02-08T18:59:30Z)
- Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
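The bias-mitigation step described above, pushing demographic group representation centers together, can be illustrated with a toy numpy sketch (the function name and the linear center-shift are hypothetical simplifications; the actual FMP derives this step from a unified optimization objective inside message passing):

```python
import numpy as np

def mitigate_group_bias(h, s, step=0.5):
    """Shift each group's node representations toward the overall center.

    h:    (n, d) node representations after aggregation
    s:    (n,)   demographic group label per node
    step: fraction of the group-center-to-global-center gap to close
          (step=1.0 makes all group centers coincide)
    """
    h = h.copy()
    center = h.mean(axis=0)                      # global representation center
    for g in np.unique(s):
        mask = s == g
        group_center = h[mask].mean(axis=0)
        # move every node in group g by the same offset, so only the
        # group's center moves; within-group geometry is preserved
        h[mask] += step * (center - group_center)
    return h
```

Because every node in a group receives the same offset, within-group structure (and hence much of the task-relevant signal) is preserved while the inter-group representation gap shrinks.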
arXiv Detail & Related papers (2023-12-19T18:00:15Z)
- Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage [35.810534649478576]
Graph Neural Networks (GNNs) have shown great power in learning node representations on graphs.
GNNs may inherit historical prejudices from training data, leading to discriminatory bias in predictions.
We propose Fair View Graph Neural Network (FairVGNN) to generate fair views of features by automatically identifying and masking sensitive-correlated features.
arXiv Detail & Related papers (2022-06-07T16:25:20Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Fairness via Representation Neutralization [60.90373932844308]
We propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF)
RNF achieves fairness by debiasing only the task-specific classification head of DNN models.
Experimental results on several benchmark datasets demonstrate that our RNF framework effectively reduces discrimination in DNN models.
arXiv Detail & Related papers (2021-06-23T22:26:29Z) - GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural
Networks [28.92347073786722]
Graph neural networks (GNNs) have achieved state-of-the-art performance on node classification.
We propose a novel framework, GraphSMOTE, in which an embedding space is constructed to encode the similarity among the nodes.
New samples are synthesized in this space to assure genuineness.
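The synthesis step described above can be sketched as SMOTE-style interpolation between a minority node's embedding and that of its nearest minority neighbor. A minimal numpy sketch (the helper name is hypothetical, and GraphSMOTE additionally trains an edge generator to connect the synthetic nodes to the graph, which is omitted here):

```python
import numpy as np

def smote_in_embedding_space(emb, labels, minority, n_new, rng=None):
    """Synthesize minority-class samples by linear interpolation.

    Each new sample lies on the segment between a randomly chosen
    minority node's embedding and that of its nearest minority
    neighbor, keeping synthetic points inside the minority manifold.
    """
    rng = np.random.default_rng(rng)
    idx = np.flatnonzero(labels == minority)     # minority node indices
    new = []
    for _ in range(n_new):
        i = rng.choice(idx)
        # nearest minority neighbor in embedding space (excluding i itself)
        d = np.linalg.norm(emb[idx] - emb[i], axis=1)
        d[idx == i] = np.inf
        j = idx[np.argmin(d)]
        lam = rng.random()                       # interpolation weight in [0, 1)
        new.append(emb[i] + lam * (emb[j] - emb[i]))
    return np.array(new)
```

Interpolating between near neighbors, rather than sampling noise, is what the abstract's "assure genuineness" refers to: synthetic embeddings stay close to real minority embeddings.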
arXiv Detail & Related papers (2021-03-16T03:23:55Z) - Revisiting graph neural networks and distance encoding from a practical
view [10.193375978547019]
Graph neural networks (GNNs) are widely used in applications based on graph-structured data, such as node classification and link prediction.
A recently proposed technique, distance encoding (DE), makes GNNs work well in many applications, including node classification and link prediction.
arXiv Detail & Related papers (2020-11-22T22:04:37Z) - Say No to the Discrimination: Learning Fair Graph Neural Networks with
Limited Sensitive Attribute Information [37.90997236795843]
Graph neural networks (GNNs) have shown great power in modeling graph-structured data.
GNNs may make predictions biased by protected sensitive attributes, e.g., skin color and gender.
We propose FairGNN to eliminate the bias of GNNs whilst maintaining high node classification accuracy.
arXiv Detail & Related papers (2020-09-03T05:17:30Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.