Estimating Fair Graphs from Graph-Stationary Data
- URL: http://arxiv.org/abs/2510.07536v1
- Date: Wed, 08 Oct 2025 20:51:57 GMT
- Title: Estimating Fair Graphs from Graph-Stationary Data
- Authors: Madeline Navarro, Andrei Buciulea, Samuel Rey, Antonio G. Marques, Santiago Segarra
- Abstract summary: We consider group and individual fairness for graphs, corresponding to group- and node-level definitions, respectively. To evaluate the fairness of a given graph, we provide multiple bias metrics, including novel measurements in the spectral domain. One variant of FairSpecTemp exploits commutativity properties of graph stationarity while directly constraining bias. The other implicitly encourages fair estimates by restricting bias in the graph spectrum and is thus more flexible.
- Score: 58.94389691379349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We estimate fair graphs from graph-stationary nodal observations such that connections are not biased with respect to sensitive attributes. Edges in real-world graphs often exhibit preferences for connecting certain pairs of groups. Biased connections can not only exacerbate but even induce unfair treatment in downstream graph-based tasks. We therefore consider group and individual fairness for graphs, corresponding to group- and node-level definitions, respectively. To evaluate the fairness of a given graph, we provide multiple bias metrics, including novel measurements in the spectral domain. Furthermore, we propose Fair Spectral Templates (FairSpecTemp), an optimization-based method with two variants for estimating fair graphs from stationary graph signals, a general model for graph data that subsumes many existing ones. One variant of FairSpecTemp exploits commutativity properties of graph stationarity while directly constraining bias; the other implicitly encourages fair estimates by restricting bias in the graph spectrum and is thus more flexible. Our methods enjoy high-probability performance bounds, yielding a conditional tradeoff between fairness and accuracy. In particular, our analysis reveals that accuracy need not be sacrificed to recover fair graphs. We evaluate FairSpecTemp on synthetic and real-world data sets to illustrate its effectiveness and to highlight the advantages of both variants.
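To make the two ingredients named in the abstract concrete, below is a minimal illustrative sketch in NumPy. The group-bias score and the choice of the adjacency matrix as graph shift operator are assumptions for illustration only; they are not the paper's exact FairSpecTemp metrics or formulation.

```python
# Minimal sketch (illustrative, not the paper's formulation): a simple group-level
# bias score for an adjacency matrix, and the commutator residual ||C S - S C||_F
# that graph stationarity drives toward zero (C = sample covariance, S = shift operator).
import numpy as np

def group_bias(A, z):
    """Illustrative group bias: gap between within- and between-group edge rates.

    A : (n, n) symmetric adjacency matrix
    z : (n,) binary sensitive attribute (0/1 group labels)
    """
    z = np.asarray(z)
    same = z[:, None] == z[None, :]          # True where both endpoints share a group
    off_diag = ~np.eye(len(z), dtype=bool)   # ignore self-loops
    within = A[same & off_diag].mean()       # average within-group connection weight
    between = A[~same].mean()                # average between-group connection weight
    return abs(within - between)

def stationarity_residual(X, S):
    """Commutator norm of the sample covariance with the shift operator S.

    For graph-stationary signals the covariance commutes with S, so this
    residual should be small (up to sampling noise).
    X : (n, m) matrix of m graph signals on n nodes
    S : (n, n) graph shift operator (e.g., the adjacency matrix)
    """
    C = np.cov(X)                            # (n, n) sample covariance
    return np.linalg.norm(C @ S - S @ C, ord="fro")

# Toy usage on a random graph with two groups and white-noise signals.
rng = np.random.default_rng(0)
n, m = 20, 500
z = rng.integers(0, 2, size=n)
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T               # symmetric, no self-loops
X = rng.standard_normal((n, m))
print("group bias:", group_bias(A, z))
print("stationarity residual:", stationarity_residual(X, S=A))
```

For graph-stationary signals the commutator residual shrinks as the number of observations grows, which is the commutativity property the first FairSpecTemp variant exploits.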
Related papers
- Mitigating topology biases in Graph Diffusion via Counterfactual Intervention [12.080488237241907]
Graph diffusion models often inherit and amplify topology biases from sensitive attributes, leading to unfair synthetic graphs. We propose the Fair Graph Diffusion Model (FairGDiff), a counterfactual-based one-step solution that mitigates topology biases while balancing fairness and utility. We show that FairGDiff achieves a superior trade-off between fairness and utility, outperforming existing fair graph generation methods.
arXiv Detail & Related papers (2026-03-02T15:55:07Z) - Fair GLASSO: Estimating Fair Graphical Models with Unbiased Statistical Behavior [31.92791228859847]
Many real-world models exhibit unfair discriminatory behavior due to biases in data.
We introduce fairness for graphical models in the form of two bias metrics to promote balance in statistical similarities.
We present Fair GLASSO, a regularized graphical lasso approach to obtain sparse Gaussian precision matrices.
arXiv Detail & Related papers (2024-06-13T18:07:04Z) - Endowing Pre-trained Graph Models with Provable Fairness [49.8431177748876]
We propose GraphPAR, a novel adapter-tuning framework that endows pre-trained graph models with provable fairness.
Specifically, we design a sensitive semantic augmenter on node representations, to extend the node representations with different sensitive attribute semantics for each node.
With GraphPAR, we quantify whether the fairness of each node is provable, i.e., predictions are always fair within a certain range of sensitive attribute semantics.
arXiv Detail & Related papers (2024-02-19T14:16:08Z) - Graph Fairness Learning under Distribution Shifts [33.9878682279549]
Graph neural networks (GNNs) have achieved remarkable performance on graph-structured data.
GNNs may inherit prejudice from the training data and make discriminatory predictions based on sensitive attributes, such as gender and race.
We propose a graph generator to produce numerous graphs with significant bias and at varying distributional distances.
arXiv Detail & Related papers (2024-01-30T06:51:24Z) - Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z) - Fairness-aware Optimal Graph Filter Design [25.145533328758614]
Graphs are mathematical tools that can be used to represent complex real-world interconnected systems.
Machine learning (ML) over graphs has attracted significant attention recently.
We take a fresh look at the problem of bias mitigation in graph-based learning by borrowing insights from graph signal processing.
arXiv Detail & Related papers (2023-10-22T22:40:40Z) - Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Network (GNN) has demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z) - Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, a fairness notion that accounts for biases arising from sensitive attributes.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z) - Recovering the Unbiased Scene Graphs from the Biased Ones [99.24441932582195]
We show that due to the missing labels, scene graph generation (SGG) can be viewed as a "Learning from Positive and Unlabeled data" (PU learning) problem.
We propose Dynamic Label Frequency Estimation (DLFE) to take advantage of training-time data augmentation and average over multiple training iterations to introduce more valid examples.
Extensive experiments show that DLFE is more effective in estimating label frequencies than a naive variant of the traditional estimate, and DLFE significantly alleviates the long tail.
arXiv Detail & Related papers (2021-07-05T16:10:41Z) - Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm can successfully improve the fairness of all models up to a small or negligible drop in accuracy.
arXiv Detail & Related papers (2021-04-29T08:59:36Z)
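The biased edge dropout idea in the last entry lends itself to a short sketch. The following is a minimal illustrative implementation, not the authors' exact FairDrop algorithm, and the drop probabilities p_homo and p_hetero are assumed parameter names for this sketch: edges joining nodes that share the sensitive attribute are dropped more aggressively than cross-group edges, counteracting homophily in the training graph.

```python
# Minimal sketch of sensitive-attribute-aware (FairDrop-style) edge dropout.
# Assumption: same-group (homophilic) edges are dropped with higher probability
# than cross-group (heterophilic) edges; the probabilities are illustrative.
import numpy as np

def biased_edge_dropout(edges, z, p_homo=0.6, p_hetero=0.2, seed=0):
    """Return the edges kept after sensitive-attribute-aware dropout.

    edges    : (e, 2) array of node index pairs
    z        : (n,) sensitive attribute per node
    p_homo   : drop probability for same-group (homophilic) edges
    p_hetero : drop probability for cross-group (heterophilic) edges
    """
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)
    z = np.asarray(z)
    homophilic = z[edges[:, 0]] == z[edges[:, 1]]   # True where endpoints share a group
    drop_prob = np.where(homophilic, p_homo, p_hetero)
    keep = rng.random(len(edges)) >= drop_prob      # independently keep each edge
    return edges[keep]

# Toy usage: a small edge list over nodes with a binary sensitive attribute.
z = np.array([0, 0, 1, 1, 0, 1])
edges = np.array([[0, 1], [0, 4], [2, 3], [1, 2], [3, 5], [4, 5]])
print(biased_edge_dropout(edges, z))
```

Because only the edge-sampling step changes, this style of dropout can be plugged in front of most graph representation learning pipelines, consistent with the plug-and-play claim in the entry above.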