FairExpand: Individual Fairness on Graphs with Partial Similarity Information
- URL: http://arxiv.org/abs/2512.18180v1
- Date: Sat, 20 Dec 2025 02:33:00 GMT
- Title: FairExpand: Individual Fairness on Graphs with Partial Similarity Information
- Authors: Rebecca Salganik, Yibin Wang, Guillaume Salha-Galvan, Jian Kang
- Abstract summary: Individual fairness has garnered traction in graph representation learning due to its practical importance in high-stakes Web areas such as user modeling, recommender systems, and search. We introduce FairExpand, a flexible framework that promotes individual fairness in this more realistic partial information scenario. Extensive experiments show that FairExpand consistently enhances individual fairness while preserving performance.
- Score: 5.592017886897037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Individual fairness, which requires that similar individuals should be treated similarly by algorithmic systems, has become a central principle in fair machine learning. Individual fairness has garnered traction in graph representation learning due to its practical importance in high-stakes Web areas such as user modeling, recommender systems, and search. However, existing methods assume the existence of predefined similarity information over all node pairs, an often unrealistic requirement that prevents their operationalization in practice. In this paper, we assume the similarity information is only available for a limited subset of node pairs and introduce FairExpand, a flexible framework that promotes individual fairness in this more realistic partial information scenario. FairExpand follows a two-step pipeline that alternates between refining node representations using a backbone model (e.g., a graph neural network) and gradually propagating similarity information, which allows fairness enforcement to effectively expand to the entire graph. Extensive experiments show that FairExpand consistently enhances individual fairness while preserving performance, making it a practical solution for enabling graph-based individual fairness in real-world applications with partial similarity information.
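The abstract describes a two-step pipeline that alternates between refining node representations with a backbone model and propagating partial similarity information across the graph. The following is a minimal, hypothetical sketch of such an alternation; the toy backbone (neighbor averaging) and the propagation rule (extending a labeled pair to pairs involving adjacent nodes) are illustrative assumptions, not the paper's actual mechanism.

```python
def refine(embeddings, adjacency):
    """Toy 'backbone' step: average each node's value with its neighbors'."""
    new = {}
    for node, vec in embeddings.items():
        vals = [vec] + [embeddings[n] for n in adjacency.get(node, [])]
        new[node] = sum(vals) / len(vals)
    return new

def propagate_similarity(known, adjacency):
    """Expand partial similarity labels: if (u, v) is labeled, also label
    (u', v) for each neighbor u' of u, and symmetrically for v.
    This copy-to-neighbors rule is an assumption for illustration only."""
    expanded = dict(known)
    for (u, v), sim in known.items():
        for u2 in adjacency.get(u, []):
            if u2 != v:
                expanded.setdefault(tuple(sorted((u2, v))), sim)
        for v2 in adjacency.get(v, []):
            if v2 != u:
                expanded.setdefault(tuple(sorted((u, v2))), sim)
    return expanded

def fair_expand(embeddings, adjacency, partial_similarity, steps=3):
    """Alternate (1) backbone refinement and (2) similarity propagation,
    gradually extending fairness information toward the whole graph."""
    known = dict(partial_similarity)
    for _ in range(steps):
        embeddings = refine(embeddings, adjacency)      # step 1: refine
        known = propagate_similarity(known, adjacency)  # step 2: expand
    return embeddings, known

# Tiny 4-node path graph; similarity is initially known for one pair only.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
emb = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
emb, sims = fair_expand(emb, adj, {(0, 1): 1.0})
print(len(sims))  # coverage has grown well beyond the single initial pair
```

After three alternations, similarity labels cover most node pairs of the path graph, mirroring how fairness enforcement can "effectively expand to the entire graph" from a limited labeled subset.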
Related papers
- Estimating Fair Graphs from Graph-Stationary Data [58.94389691379349]
We consider group and individual fairness for graphs corresponding to group- and node-level definitions. To evaluate the fairness of a given graph, we provide multiple bias metrics, including novel measurements in the spectral domain. One variant of FairSpecTemp exploits commutativity properties of graph stationarity while directly constraining bias. The other implicitly encourages fair estimates by restricting bias in the graph spectrum and is thus more flexible.
arXiv Detail & Related papers (2025-10-08T20:51:57Z)
- A Benchmark for Fairness-Aware Graph Learning [58.515305543487386]
We present an extensive benchmark on ten representative fairness-aware graph learning methods.
Our in-depth analysis reveals key insights into the strengths and limitations of existing methods.
arXiv Detail & Related papers (2024-07-16T18:43:43Z)
- GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint [28.70963753478329]
Algorithmic fairness in Graph Neural Networks (GNNs) has attracted significant attention. We propose a novel method, GFairHint, which promotes individual fairness in GNNs. GFairHint achieves the best fairness results in almost all combinations of datasets with various backbone models.
arXiv Detail & Related papers (2023-05-25T00:03:22Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Graph Learning with Localized Neighborhood Fairness [32.301270877134]
We introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings.
We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings.
arXiv Detail & Related papers (2022-12-22T21:20:43Z)
- FairMILE: Towards an Efficient Framework for Fair Graph Representation Learning [4.75624470851544]
We study the problem of efficient fair graph representation learning and propose a novel framework FairMILE.
FairMILE is a multi-level paradigm that can efficiently learn graph representations while enforcing fairness and preserving utility.
arXiv Detail & Related papers (2022-11-17T22:52:10Z)
- Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z)
- SoFaiR: Single Shot Fair Representation Learning [24.305894478899948]
SoFaiR is a single shot fair representation learning method that generates with one trained model many points on the fairness-information plane.
We find on three datasets that SoFaiR achieves similar fairness-information trade-offs as its multi-shot counterparts.
arXiv Detail & Related papers (2022-04-26T19:31:30Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which considers the biases led by the above facts.
We generate counterfactuals corresponding to perturbations on each node's and their neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Latent Space Smoothing for Individually Fair Representations [12.739528232133495]
We introduce LASSI, the first representation learning method for certifying individual fairness of high-dimensional data.
Our key insight is to leverage recent advances in generative modeling to capture the set of similar individuals in the generative latent space.
We employ randomized smoothing to provably map similar individuals close together, in turn ensuring that local robustness verification of the downstream application results in end-to-end fairness certification.
arXiv Detail & Related papers (2021-11-26T18:22:42Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Learning Certified Individually Fair Representations [15.416929083117596]
A desirable family of fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness.
We introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points.
arXiv Detail & Related papers (2020-02-24T15:41:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.