The KL-Divergence between a Graph Model and its Fair I-Projection as a Fairness Regularizer
- URL: http://arxiv.org/abs/2103.01846v1
- Date: Tue, 2 Mar 2021 16:26:37 GMT
- Title: The KL-Divergence between a Graph Model and its Fair I-Projection as a Fairness Regularizer
- Authors: Maarten Buyl, Tijl De Bie
- Abstract summary: We propose a generic approach applicable to most probabilistic graph modeling approaches.
Specifically, we first define the class of fair graph models corresponding to a chosen set of fairness criteria.
We demonstrate that using this fairness regularizer in combination with existing graph modeling approaches efficiently trades off fairness against accuracy.
- Score: 17.660861923996016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning and reasoning over graphs is increasingly done by means of
probabilistic models, e.g. exponential random graph models, graph embedding
models, and graph neural networks. When graphs are modeling relations between
people, however, they will inevitably reflect biases, prejudices, and other
forms of inequity and inequality. An important challenge is thus to design
accurate graph modeling approaches while guaranteeing fairness according to the
specific notion of fairness that the problem requires. Yet, past work on the
topic remains scarce, is limited to debiasing specific graph modeling methods,
and often aims to ensure fairness in an indirect manner.
We propose a generic approach applicable to most probabilistic graph modeling
approaches. Specifically, we first define the class of fair graph models
corresponding to a chosen set of fairness criteria. Given this, we propose a
fairness regularizer defined as the KL-divergence between the graph model and
its I-projection onto the set of fair models. We demonstrate that using this
fairness regularizer in combination with existing graph modeling approaches
efficiently trades off fairness against accuracy, whereas state-of-the-art
models can only make this trade-off for the fairness criterion that they were
specifically designed for.
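For concreteness: writing $\mathcal{F}$ for the chosen class of fair graph models (our notation), the I-projection of a graph model $P$ is $P^I = \arg\min_{Q \in \mathcal{F}} \mathrm{KL}(Q \| P)$, and the regularizer is the divergence $\mathrm{KL}(P^I \| P)$ attained by that projection, added to the training loss with a trade-off weight $\lambda$. The sketch below is a minimal illustration, not the paper's implementation: it assumes an independent-Bernoulli edge model and a single linear fairness statistic (here, balancing expected intra-group and inter-group edge counts), a special case in which the I-projection reduces to an exponential tilt whose single multiplier can be found by bisection. The function names and the constraint itself are illustrative assumptions.

```python
import numpy as np

def i_projection_bernoulli(p, f, target=0.0, lo=-50.0, hi=50.0, iters=100):
    # I-projection of an independent-Bernoulli edge model p onto
    # {Q : sum_e f_e * E_Q[A_e] = target}: exponentially tilt each
    # edge probability by a common multiplier theta.
    def tilt(theta):
        w = p * np.exp(theta * f)
        return w / (1.0 - p + w)
    # sum_e f_e * q_e(theta) is nondecreasing in theta
    # (its derivative is sum_e f_e^2 q_e (1 - q_e) >= 0), so bisect.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.sum(f * tilt(mid)) < target:
            lo = mid
        else:
            hi = mid
    return tilt(0.5 * (lo + hi))

def kl_bernoulli(q, p):
    # KL(Q || P) between two independent-Bernoulli edge models, summed over edges.
    q = np.clip(q, 1e-12, 1 - 1e-12)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(np.sum(q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))))

# Toy model: 6 candidate edges; f_e = +1 for intra-group edges and -1 for
# inter-group edges, so target=0 balances their expected counts.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.9, size=6)   # the model's edge probabilities
f = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
q = i_projection_bernoulli(p, f)     # fair I-projection of the model
print("fairness regularizer KL(P_I || P):", kl_bernoulli(q, p))
```

In an actual training loop, this KL term would be added to the model's fit loss as $\mathcal{L}_{\text{fit}} + \lambda \cdot \mathrm{KL}(P^I \| P)$ and minimized jointly; the paper's contribution is making such a regularizer tractable for general probabilistic graph models and fairness criteria, which this closed-form toy case does not capture.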
Related papers
- From Graph Diffusion to Graph Classification [21.2763549550792]
We show how graph diffusion models can be applied to graph classification.
In experiments with a sampling-based inference method, our discriminative training objective achieves state-of-the-art graph classification accuracy.
arXiv Detail & Related papers (2024-11-26T08:57:41Z)
- Fair GLASSO: Estimating Fair Graphical Models with Unbiased Statistical Behavior [31.92791228859847]
Many real-world models exhibit unfair discriminatory behavior due to biases in data.
We introduce fairness for graphical models in the form of two bias metrics to promote balance in statistical similarities.
We present Fair GLASSO, a regularized graphical lasso approach to obtain sparse Gaussian precision matrices.
arXiv Detail & Related papers (2024-06-13T18:07:04Z)
- A Causal Disentangled Multi-Granularity Graph Classification Method [18.15154299104419]
Some graph classification methods do not take the multi-granularity characteristics of graph data into account.
This paper proposes a causal disentangled multi-granularity graph representation learning method (CDM-GNN) to solve this challenge.
The model exhibits strong classification performance and produces explanations that align with human cognitive patterns.
arXiv Detail & Related papers (2023-10-25T00:20:50Z)
- FairGen: Towards Fair Graph Generation [76.34239875010381]
We propose a fairness-aware graph generative model named FairGen.
Our model jointly trains a label-informed graph generation module and a fair representation learning module.
Experimental results on seven real-world data sets, including web-based graphs, demonstrate that FairGen obtains performance on par with state-of-the-art graph generative models.
arXiv Detail & Related papers (2023-03-30T23:30:42Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Counterfactual Fairness with Partially Known Causal Graph [85.15766086381352]
This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown.
We find that, when specific background knowledge is provided, counterfactual fairness can be achieved as if the true causal graph were fully known.
arXiv Detail & Related papers (2022-05-27T13:40:50Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which considers the biases induced by the sensitive attributes of each node and its neighbors.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Fair Community Detection and Structure Learning in Heterogeneous Graphical Models [8.643517734716607]
Inference of community structure in probabilistic graphical models may not be consistent with fairness constraints when nodes have demographic attributes.
This paper defines a novel $\ell_1$-regularized pseudo-likelihood approach for fair graphical model selection.
arXiv Detail & Related papers (2021-12-09T18:58:36Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)