Fairness Through Controlled (Un)Awareness in Node Embeddings
- URL: http://arxiv.org/abs/2407.20024v1
- Date: Mon, 29 Jul 2024 14:01:26 GMT
- Title: Fairness Through Controlled (Un)Awareness in Node Embeddings
- Authors: Dennis Vetter, Jasper Forth, Gemma Roig, Holger Dell
- Abstract summary: We show how the parametrization of the CrossWalk algorithm influences the ability to infer sensitive attributes from node embeddings.
This functionality offers a valuable tool for improving the fairness of ML systems utilizing graph embeddings.
- Score: 4.818571559544213
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph representation learning is central for the application of machine learning (ML) models to complex graphs, such as social networks. Ensuring 'fair' representations is essential, due to the societal implications and the use of sensitive personal data. In this paper, we demonstrate how the parametrization of the CrossWalk algorithm influences the ability to infer sensitive attributes from node embeddings. By fine-tuning hyperparameters, we show that it is possible to either significantly enhance or obscure the detectability of these attributes. This functionality offers a valuable tool for improving the fairness of ML systems utilizing graph embeddings, making them adaptable to different fairness paradigms.
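As a concrete illustration, here is a minimal Python sketch of the kind of attribute-inference probe the abstract alludes to: train a classifier to predict the sensitive attribute from the node embeddings and compare against the majority-class baseline. The embedding and attribute arrays below are synthetic placeholders; the paper's exact CrossWalk setup and evaluation protocol may differ.

```python
# Minimal probe sketch: measure how well a sensitive attribute can be
# inferred from node embeddings. Both arrays are synthetic placeholders;
# in practice the embeddings would come from CrossWalk under a given
# hyperparameter setting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_nodes, dim = 1000, 64
embeddings = rng.normal(size=(n_nodes, dim))   # stand-in for CrossWalk output
sensitive = rng.integers(0, 2, size=n_nodes)   # stand-in sensitive attribute

X_tr, X_te, y_tr, y_te = train_test_split(
    embeddings, sensitive, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, probe.predict(X_te))

# Accuracy near the majority-class rate means the attribute is obscured;
# accuracy well above it means the embeddings leak the attribute.
baseline = max(y_te.mean(), 1 - y_te.mean())
print(f"probe accuracy {acc:.3f} vs. majority baseline {baseline:.3f}")
```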
Related papers
- One Fits All: Learning Fair Graph Neural Networks for Various Sensitive Attributes [40.57757706386367]
We propose a graph fairness framework based on invariant learning, namely FairINV.
FairINV incorporates sensitive attribute partitioning and trains fair GNNs by eliminating spurious correlations between the label and various sensitive attributes.
Experimental results on several real-world datasets demonstrate that FairINV significantly outperforms state-of-the-art fairness approaches.
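A minimal sketch of one generic invariant-learning surrogate in this spirit, assuming a penalty on the variance of per-partition losses; FairINV's actual objective and its automatic partitioning differ.

```python
# Hedged sketch: penalize the variance of the mean loss across
# sensitive-attribute partitions so the model cannot rely on
# group-specific spurious correlations. The losses and partition
# labels are placeholders.
import numpy as np

def group_invariance_penalty(losses, groups):
    """Variance of mean loss across groups; zero when the model
    performs equally on every partition."""
    means = [losses[groups == g].mean() for g in np.unique(groups)]
    return np.var(means)

losses = np.random.default_rng(0).random(100)          # per-sample losses
groups = np.random.default_rng(1).integers(0, 3, 100)  # partition labels
total = losses.mean() + 10.0 * group_invariance_penalty(losses, groups)
print(total)
```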
arXiv Detail & Related papers (2024-06-19T13:30:17Z)
- MS-IMAP -- A Multi-Scale Graph Embedding Approach for Interpretable Manifold Learning [1.8124328823188354]
This paper introduces a framework for multi-scale graph network embedding based on spectral graph wavelets.
We show that in Paley-Wiener spaces on graphs, the spectral graph wavelet operator provides greater flexibility and control over smoothness.
An additional key advantage of the proposed embedding is its ability to establish a correspondence between the embedding and input feature spaces.
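For illustration, a small sketch of multi-scale spectral graph wavelets, assuming a heat-kernel filter g(s·λ) = exp(-s·λ); MS-IMAP's actual filter design and embedding construction may differ.

```python
# Illustrative multi-scale spectral graph wavelet operator with a
# heat-kernel filter. Small scales capture local structure, large
# scales capture global structure.
import numpy as np

def graph_laplacian(adj):
    """Unnormalized Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def wavelet_operators(adj, scales):
    """One |V| x |V| wavelet operator per scale: U diag(exp(-s*lam)) U^T."""
    lam, U = np.linalg.eigh(graph_laplacian(adj))
    return [U @ np.diag(np.exp(-s * lam)) @ U.T for s in scales]

# Toy graph: a 4-cycle.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)

# Multi-scale embedding: stack each node's wavelet coefficients
# across scales.
ops = wavelet_operators(adj, scales=[0.5, 1.0, 2.0])
embedding = np.hstack(ops)    # shape (4, 12): 4 nodes x 3 scales
print(embedding.shape)
```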
arXiv Detail & Related papers (2024-06-04T20:48:33Z)
- Endowing Pre-trained Graph Models with Provable Fairness [49.8431177748876]
We propose a novel adapter-tuning framework that endows pre-trained graph models with provable fairness, called GraphPAR.
Specifically, we design a sensitive semantic augmenter on node representations to extend them with different sensitive attribute semantics for each node.
With GraphPAR, we quantify whether the fairness of each node is provable, i.e., predictions are always fair within a certain range of sensitive attribute semantics.
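A hedged sketch of this idea: an empirical (not provable) stability check that perturbs a node representation along an assumed sensitive-semantic direction and tests whether the prediction stays constant within a range. GraphPAR's actual certification procedure is more involved.

```python
# Empirical fairness-stability check in the spirit of GraphPAR.
# The classifier, representation, and sensitive direction are all
# placeholders for illustration.
import numpy as np

def prediction_is_stable(classify, h, direction, radius, steps=21):
    """True if the predicted class is unchanged for all shifts
    t * direction with t in [-radius, radius]."""
    base = classify(h)
    for t in np.linspace(-radius, radius, steps):
        if classify(h + t * direction) != base:
            return False
    return True

# Toy linear classifier and an assumed sensitive-semantic direction.
w = np.array([1.0, -2.0, 0.5])
classify = lambda h: int(h @ w > 0)
h = np.array([0.3, -0.1, 0.8])           # node representation
direction = np.array([0.0, 1.0, 0.0])    # placeholder sensitive semantics

print(prediction_is_stable(classify, h, direction, radius=0.2))
```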
arXiv Detail & Related papers (2024-02-19T14:16:08Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes targeted by downstream tasks.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Joint Feature and Differentiable $k$-NN Graph Learning using Dirichlet Energy [103.74640329539389]
We propose a deep feature selection (FS) method that simultaneously conducts feature selection and differentiable $k$-NN graph learning.
We employ Optimal Transport theory to address the non-differentiability of learning $k$-NN graphs in neural networks.
We validate the effectiveness of our model with extensive experiments on both synthetic and real-world datasets.
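For intuition, a short sketch of the Dirichlet energy named in the title, paired with a common softmax relaxation of k-NN graph construction; the paper's OT-based differentiable construction differs.

```python
# Dirichlet energy of features on a graph: low energy means features
# vary smoothly over the graph. The row-wise softmax over negative
# squared distances is a common soft k-NN relaxation, not necessarily
# the paper's Optimal Transport one.
import numpy as np

def soft_knn_graph(X, temperature=1.0):
    """Dense soft affinity matrix from pairwise squared distances."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)      # no self-loops
    logits = -d2 / temperature
    A = np.exp(logits - logits.max(axis=1, keepdims=True))
    return A / A.sum(axis=1, keepdims=True)

def dirichlet_energy(X, A):
    """E(X) = 1/2 * sum_ij A_ij ||x_i - x_j||^2."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return 0.5 * (A * d2).sum()

X = np.random.default_rng(0).normal(size=(10, 5))
A = soft_knn_graph(X)
print(dirichlet_energy(X, A))
```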
arXiv Detail & Related papers (2023-05-21T08:15:55Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation of the deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
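A toy sketch of the influence-function idea behind GIF, assuming an L2-regularized logistic model: approximate the parameter change from deleting training points with one Newton step instead of retraining. GIF itself handles the graph-structured dependencies this sketch ignores.

```python
# Influence-function unlearning sketch: delta ≈ H^{-1} * (sum of
# gradients of the deleted points), evaluated at the trained weights.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_and_hessian(w, X, y, reg=1e-2):
    """Gradient and Hessian of L2-regularized logistic loss."""
    p = sigmoid(X @ w)
    g = X.T @ (p - y) + reg * w
    H = (X * (p * (1 - p))[:, None]).T @ X + reg * np.eye(len(w))
    return g, H

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)

# Train with a few Newton steps to obtain w_hat.
w = np.zeros(5)
for _ in range(20):
    g, H = grad_and_hessian(w, X, y)
    w -= np.linalg.solve(H, g)

# Estimate the effect of removing the first 10 points without retraining.
idx = np.arange(10)
g_removed, _ = grad_and_hessian(w, X[idx], y[idx], reg=0.0)
_, H_full = grad_and_hessian(w, X, y)
w_unlearned = w + np.linalg.solve(H_full, g_removed)
print(np.linalg.norm(w_unlearned - w))
```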
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Features Based Adaptive Augmentation for Graph Contrastive Learning [0.0]
Self-supervised learning aims to eliminate the need for expensive annotation in graph representation learning.
We introduce a Feature Based Adaptive Augmentation (FebAA) approach, which identifies and preserves potentially influential features.
We successfully improved the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
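An illustrative sketch of importance-guided feature augmentation in this spirit, assuming placeholder importance scores; FebAA identifies influential features differently.

```python
# Drop low-importance feature columns with higher probability so that
# influential features survive in both contrastive views.
import numpy as np

def adaptive_feature_mask(X, importance, max_drop=0.5, rng=None):
    """Zero out columns with probability that decreases with importance."""
    if rng is None:
        rng = np.random.default_rng()
    drop_prob = max_drop * (1.0 - importance / importance.max())
    keep = rng.random(X.shape[1]) >= drop_prob
    return X * keep

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
importance = np.abs(rng.normal(size=16))   # placeholder importance scores
view1 = adaptive_feature_mask(X, importance, rng=rng)  # augmented view 1
view2 = adaptive_feature_mask(X, importance, rng=rng)  # augmented view 2
```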
arXiv Detail & Related papers (2022-07-05T03:41:20Z)
- Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage [35.810534649478576]
Graph Neural Networks (GNNs) have shown great power in learning node representations on graphs.
GNNs may inherit historical prejudices from training data, leading to discriminatory bias in predictions.
We propose Fair View Graph Neural Network (FairVGNN) to generate fair views of features by automatically identifying and masking sensitive-correlated features.
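For intuition, a simple correlation-filter sketch of the masking idea; FairVGNN learns its masks adversarially, so this is only illustrative.

```python
# Mask feature channels whose |Pearson correlation| with the sensitive
# attribute exceeds a threshold.
import numpy as np

def mask_sensitive_correlated(X, sensitive, threshold=0.3):
    """Zero out columns strongly correlated with the sensitive attribute."""
    Xc = X - X.mean(axis=0)
    sc = sensitive - sensitive.mean()
    corr = (Xc * sc[:, None]).mean(axis=0) / (
        X.std(axis=0) * sensitive.std() + 1e-12)
    return X * (np.abs(corr) <= threshold)

rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=500).astype(float)
X = rng.normal(size=(500, 8))
X[:, 0] += 2.0 * sensitive               # make one channel leak the attribute
X_fair = mask_sensitive_correlated(X, sensitive)
print(np.count_nonzero(X_fair[:, 0]))    # leaked channel is masked -> 0
```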
arXiv Detail & Related papers (2022-06-07T16:25:20Z)
- Fairness-aware Configuration of Machine Learning Libraries [21.416261003364177]
This paper investigates how the parameter space of machine learning (ML) algorithms can aggravate or mitigate fairness bugs.
Three search-based software testing algorithms are proposed to uncover the precision-fairness frontier.
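A minimal sketch of mapping such a frontier with a plain sweep over one hyperparameter on synthetic data, using the demographic parity difference as the fairness measure; the paper's three search-based testing algorithms are more sophisticated.

```python
# Sweep a configuration space and record (accuracy, fairness) per point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + group[:, None] * 0.5   # group-skewed features
y = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 0.5).astype(int)
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0)

frontier = []
for C in np.logspace(-3, 2, 20):                     # sampled configurations
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    acc = (pred == y_te).mean()
    dp_gap = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())
    frontier.append((C, acc, dp_gap))

for C, acc, gap in frontier:
    print(f"C={C:.4g}  accuracy={acc:.3f}  DP gap={gap:.3f}")
```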
arXiv Detail & Related papers (2022-02-13T04:04:33Z)
- Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
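A toy sketch of the projected-gradient idea on a relaxed adjacency matrix, with a placeholder objective standing in for the actual attack loss.

```python
# Optimize a continuous adjacency matrix by gradient descent, projecting
# back into [0, 1] after each step so it stays a valid edge-probability
# matrix (symmetric, no self-loops).
import numpy as np

def project(A):
    """Clip into [0, 1], symmetrize, zero the diagonal."""
    A = np.clip((A + A.T) / 2.0, 0.0, 1.0)
    np.fill_diagonal(A, 0.0)
    return A

def pgd_adjacency(loss_grad, n, steps=100, lr=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    A = project(rng.random((n, n)))
    for _ in range(steps):
        A = project(A - lr * loss_grad(A))
    return A

# Placeholder objective: pull the relaxed adjacency toward a target graph.
target = (np.random.default_rng(1).random((6, 6)) > 0.5).astype(float)
target = np.triu(target, 1)
target = target + target.T
loss_grad = lambda A: 2.0 * (A - target)

A_hat = pgd_adjacency(loss_grad, n=6)
print(np.round(A_hat, 2))
```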
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.