Fairness and/or Privacy on Social Graphs
- URL: http://arxiv.org/abs/2503.02114v1
- Date: Mon, 03 Mar 2025 22:56:32 GMT
- Title: Fairness and/or Privacy on Social Graphs
- Authors: Bartlomiej Surma, Michael Backes, Yang Zhang
- Abstract summary: Graph Neural Networks (GNNs) have shown remarkable success in various graph-based learning tasks. Recent studies have raised concerns about fairness and privacy issues in GNNs.
- Score: 22.617029702304016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have shown remarkable success in various graph-based learning tasks. However, recent studies have raised concerns about fairness and privacy issues in GNNs, highlighting the potential for biased or discriminatory outcomes and the vulnerability of sensitive information. This paper presents a comprehensive investigation of fairness and privacy in GNNs, exploring the impact of various fairness-preserving measures on model performance. We conduct experiments across diverse datasets and evaluate the effectiveness of different fairness interventions. Our analysis considers the trade-offs between fairness, privacy, and accuracy, providing insights into the challenges and opportunities in achieving both fair and private graph learning. The results highlight the importance of carefully selecting and combining fairness-preserving measures based on the specific characteristics of the data and the desired fairness objectives. This study contributes to a deeper understanding of the complex interplay between fairness, privacy, and accuracy in GNNs, paving the way for the development of more robust and ethical graph learning models.
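To make the fairness side of this trade-off concrete, here is a minimal, illustrative Python sketch (not code from the paper) computing two group-fairness metrics commonly reported for GNN node classification: statistical parity difference and equal opportunity difference. All data in it is synthetic, and the binary sensitive attribute `s` is an assumption for illustration.

```python
import numpy as np

def statistical_parity_difference(y_pred, s):
    """|P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)| for binary predictions."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equal_opportunity_difference(y_pred, y_true, s):
    """The same gap restricted to truly positive nodes (true-positive rates)."""
    pos = y_true == 1
    return abs(y_pred[pos & (s == 0)].mean() - y_pred[pos & (s == 1)].mean())

# Hypothetical usage: y_pred are binarized GNN outputs on test nodes,
# y_true the ground-truth labels, s a binary sensitive attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
s = rng.integers(0, 2, 1000)
y_pred = ((rng.random(1000) + 0.1 * s) > 0.5).astype(int)  # mildly biased toward s = 1
print("SPD:", statistical_parity_difference(y_pred, s))
print("EOD:", equal_opportunity_difference(y_pred, y_true, s))
```

Lower values mean outcomes are more equal across groups; a study like this one would report such gaps alongside accuracy and a privacy measure.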
Related papers
- One Fits All: Learning Fair Graph Neural Networks for Various Sensitive Attributes [40.57757706386367]
We propose a graph fairness framework based on invariant learning, namely FairINV.
FairINV incorporates sensitive-attribute partitioning and trains fair GNNs by eliminating spurious correlations between the label and various sensitive attributes.
Experimental results on several real-world datasets demonstrate that FairINV significantly outperforms state-of-the-art fairness approaches.
arXiv Detail & Related papers (2024-06-19T13:30:17Z)
- Holistic Survey of Privacy and Fairness in Machine Learning [10.399352534861292]
Privacy and fairness are crucial pillars of responsible Artificial Intelligence (AI) and trustworthy Machine Learning (ML).
Despite significant interest, there remains an immediate demand for more in-depth research to unravel how these two objectives can be simultaneously integrated into ML models.
We provide a thorough review of privacy and fairness in ML, including supervised, unsupervised, semi-supervised, and reinforcement learning.
arXiv Detail & Related papers (2023-07-28T23:39:29Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks [66.0143583366533]
Graph neural networks (GNNs) have gained significant attention due to their expansive real-world applications.
To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations.
Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance.
Yet, the interplay between these two aspects remains unexplored.
arXiv Detail & Related papers (2023-01-30T14:52:23Z)
- Privacy-preserving Graph Analytics: Secure Generation and Federated Learning [72.90158604032194]
We focus on the privacy-preserving analysis of graph data, which provides the crucial capacity to represent rich attributes and relationships.
We discuss two directions, namely privacy-preserving graph generation and federated graph learning, which can jointly enable the collaboration among multiple parties each possessing private graph data.
arXiv Detail & Related papers (2022-06-30T18:26:57Z)
- A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection [136.71290968343826]
Trustworthy graph learning (TwGL) aims to address these reliability, explainability, and privacy concerns from a technical viewpoint.
In contrast to conventional graph learning research, which focuses mainly on model performance, TwGL considers various reliability and safety aspects.
arXiv Detail & Related papers (2022-05-20T08:10:35Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints (see the sketch after this list).
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
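For the last entry above, the following is a hedged, minimal sketch of the Lagrangian-dual idea in plain PyTorch: a multiplier on a demographic-parity constraint is raised by dual ascent whenever the constraint is violated, while the network weights descend on the penalized loss. It is an illustrative reconstruction on synthetic data, not the paper's implementation, and it omits the differential-privacy machinery (per-sample clipping and noise) for brevity.

```python
import torch
import torch.nn as nn

# Synthetic data (assumption): features x, binary labels y, binary sensitive attribute s.
torch.manual_seed(0)
x = torch.randn(512, 16)
s = (torch.rand(512) > 0.5).float()
y = ((x[:, 0] + 0.5 * s + 0.1 * torch.randn(512)) > 0).float()

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

lam = 0.0        # Lagrange multiplier for the fairness constraint
eps = 0.05       # tolerated demographic-parity gap
dual_lr = 0.1    # ascent step size for the multiplier

for step in range(200):
    logits = model(x).squeeze(1)
    p = torch.sigmoid(logits)
    # Soft demographic-parity gap: difference in mean predicted positive rate.
    gap = (p[s == 1].mean() - p[s == 0].mean()).abs()
    violation = gap - eps
    loss = bce(logits, y) + lam * violation  # primal step: descend on penalized loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual step: raise the multiplier while the constraint is violated.
    lam = max(0.0, lam + dual_lr * violation.item())

print(f"final gap = {gap.item():.3f}, lambda = {lam:.3f}")
```

The same pattern extends to other group-fairness constraints by swapping in a different `violation` term.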
This list is automatically generated from the titles and abstracts of the papers on this site.