Learning Fairness-aware Relational Structures
- URL: http://arxiv.org/abs/2002.09471v1
- Date: Fri, 21 Feb 2020 18:53:52 GMT
- Title: Learning Fairness-aware Relational Structures
- Authors: Yue Zhang, Arti Ramesh
- Abstract summary: We introduce Fair-A3SL, a fairness-aware structure learning algorithm for learning relational structures.
Our results show that Fair-A3SL can learn fair, yet interpretable and expressive structures capable of making accurate predictions.
- Score: 13.712395104755783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of fair machine learning models that effectively avert bias
and discrimination is an important problem that has garnered attention in
recent years. The necessity of encoding complex relational dependencies among
the features and variables for competent predictions requires the development of
fair, yet expressive relational models. In this work, we introduce Fair-A3SL, a
fairness-aware structure learning algorithm for learning relational structures,
which incorporates fairness measures while learning relational graphical model
structures. Our approach is versatile in being able to encode a wide range of
fairness metrics such as statistical parity difference, overestimation,
equalized odds, and equal opportunity, including recently proposed relational
fairness measures. While existing approaches apply fairness measures to
pre-determined model structures after prediction, Fair-A3SL directly learns the
structure while optimizing for the fairness measures and is hence able to
remove any structural bias in the model. We demonstrate the effectiveness of
our learned model structures when compared with the state-of-the-art fairness
models quantitatively and qualitatively on datasets representing three
different modeling scenarios: i) a relational dataset, ii) a recidivism
prediction dataset widely used in studying discrimination, and iii) a
recommender systems dataset. Our results show that Fair-A3SL can learn fair,
yet interpretable and expressive structures capable of making accurate
predictions.
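To ground the metrics named in the abstract, here is a minimal sketch of the standard group fairness quantities, assuming binary labels `y`, binary predictions `y_hat`, and a binary protected attribute `a`. It is a generic illustration with illustrative function names, not Fair-A3SL's implementation.

```python
# Minimal sketch of standard group fairness metrics (not Fair-A3SL itself).
# Assumes binary labels y, binary predictions y_hat, binary protected attribute a.
import numpy as np

def statistical_parity_difference(y_hat, a):
    # P(y_hat = 1 | a = 1) - P(y_hat = 1 | a = 0)
    return y_hat[a == 1].mean() - y_hat[a == 0].mean()

def equal_opportunity_difference(y, y_hat, a):
    # Gap in true positive rates between the two groups.
    return (y_hat[(a == 1) & (y == 1)].mean()
            - y_hat[(a == 0) & (y == 1)].mean())

def equalized_odds_gap(y, y_hat, a):
    # Largest gap across true positive rate and false positive rate.
    tpr_gap = abs(y_hat[(a == 1) & (y == 1)].mean()
                  - y_hat[(a == 0) & (y == 1)].mean())
    fpr_gap = abs(y_hat[(a == 1) & (y == 0)].mean()
                  - y_hat[(a == 0) & (y == 0)].mean())
    return max(tpr_gap, fpr_gap)

y = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_hat = np.array([1, 0, 1, 0, 0, 1, 1, 0])
a = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(statistical_parity_difference(y_hat, a))    # 0.0
print(equal_opportunity_difference(y, y_hat, a))  # ~ -0.33
print(equalized_odds_gap(y, y_hat, a))            # ~ 0.33
```

A structure learner in the spirit of Fair-A3SL could, for instance, add the absolute value of one of these quantities as a penalty alongside its data-fit score; this is one plausible way to optimize for such measures during learning, not necessarily the paper's exact formulation.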
Related papers
- From Efficiency to Equity: Measuring Fairness in Preference Learning [3.2132738637761027]
We evaluate fairness in preference learning models inspired by economic theories of inequality and Rawlsian justice.
We propose metrics adapted from the Gini Coefficient, Atkinson Index, and Kuznets Ratio to quantify fairness in these models.
arXiv Detail & Related papers (2024-10-24T15:25:56Z)
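As a concrete instance of the inequality-based metrics in the preceding entry, the Gini coefficient over per-individual utilities is a standard quantity; the sketch below is a generic implementation under that assumption, not the paper's own adaptation.

```python
# Hedged sketch: Gini coefficient over per-individual utilities as an
# aggregate fairness score. 0 means perfect equality; values near 1 mean
# utility is concentrated in a few individuals. Generic, not the paper's code.
import numpy as np

def gini_coefficient(utilities):
    u = np.sort(np.asarray(utilities, dtype=float))  # ascending order
    n = u.size
    index = np.arange(1, n + 1)
    # Standard closed form for sorted, non-negative values.
    return 2 * np.sum(index * u) / (n * np.sum(u)) - (n + 1) / n

print(gini_coefficient([1.0, 1.0, 1.0, 1.0]))  # 0.0  (equal utilities)
print(gini_coefficient([0.0, 0.0, 0.0, 4.0]))  # 0.75 (highly unequal)
```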
- Enhancing Fairness in Neural Networks Using FairVIC [0.0]
Mitigating bias in automated decision-making systems, specifically deep learning models, is a critical challenge in achieving fairness.
We introduce FairVIC, an innovative approach designed to enhance fairness in neural networks by addressing inherent biases at the training stage.
We observe a significant improvement in fairness across all metrics tested, without compromising the model's accuracy to a detrimental extent.
arXiv Detail & Related papers (2024-04-28T10:10:21Z)
- Flow Factorized Representation Learning [109.51947536586677]
We introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations.
We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models.
arXiv Detail & Related papers (2023-09-22T20:15:37Z)
- UniDiff: Advancing Vision-Language Models with Generative and Discriminative Learning [86.91893533388628]
This paper presents UniDiff, a unified multi-modal model that integrates image-text contrastive learning (ITC), text-conditioned image synthesis learning (IS), and reciprocal semantic consistency modeling (RSC).
UniDiff demonstrates versatility in both multi-modal understanding and generative tasks.
arXiv Detail & Related papers (2023-06-01T15:39:38Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
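For the counterfactual criterion in the DualFair entry above, a common operationalization is to flip the sensitive attribute and measure how far predictions move; the sketch below illustrates that check on a toy feature-vector model and is not DualFair's contrastive self-supervision pipeline.

```python
# Hedged sketch of a counterfactual fairness check: flip the sensitive
# attribute and measure how much predictions move. The model and column
# layout are illustrative assumptions, not DualFair's actual pipeline.
import numpy as np

def counterfactual_gap(predict_fn, X, sensitive_col):
    X_cf = X.copy()
    X_cf[:, sensitive_col] = 1 - X_cf[:, sensitive_col]  # flip binary attribute
    return np.abs(predict_fn(X) - predict_fn(X_cf)).mean()

# Toy linear scorer with illustrative weights (0.4 on the sensitive column).
w = np.array([0.8, 0.1, 0.4])

def predict(X):
    return X @ w

X = np.array([[1.0, 0.5, 0.0],
              [0.2, 0.9, 1.0]])
print(counterfactual_gap(predict, X, sensitive_col=2))  # 0.4: predictions shift
```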
- Bias-inducing geometries: an exactly solvable data model with fairness implications [13.690313475721094]
We introduce an exactly solvable high-dimensional model of data imbalance.
We analytically unpack the typical properties of learning models trained in this synthetic framework.
We obtain exact predictions for the observables that are commonly employed for fairness assessment.
arXiv Detail & Related papers (2022-05-31T16:27:57Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed using sensitive-attribute information from a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
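The reweighting idea in the FairIF entry above can be illustrated generically: assign each training sample a weight and minimize the weighted loss so that groups contribute more evenly. The sketch below uses simple inverse-frequency weights as a stand-in; FairIF's actual weights come from influence functions, which this does not implement.

```python
# Hedged sketch of fairness-motivated loss reweighting. Inverse-frequency
# weights are a simple stand-in; FairIF computes its weights via influence
# functions, which is not reproduced here.
import numpy as np

def inverse_frequency_weights(groups):
    values, counts = np.unique(groups, return_counts=True)
    weight_of = {v: groups.size / c for v, c in zip(values, counts)}
    return np.array([weight_of[g] for g in groups])

def weighted_logistic_loss(y, scores, weights):
    p = 1.0 / (1.0 + np.exp(-scores))
    per_sample = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return np.average(per_sample, weights=weights)

groups = np.array([1, 1, 1, 1, 1, 1, 0, 0])   # group 0 is under-represented
print(inverse_frequency_weights(groups))       # group 0 weighted 4.0, group 1 ~1.33
y = np.array([1, 0, 1, 1, 0, 1, 1, 0])
scores = np.array([2.0, -1.0, 0.5, 1.5, -2.0, 1.0, -0.5, 0.3])
print(weighted_logistic_loss(y, scores, inverse_frequency_weights(groups)))
```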
- FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
arXiv Detail & Related papers (2020-11-15T10:48:56Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing concern with the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
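The pre-processing step in the entry above relies on pseudo-labeling, a standard semi-supervised device: fit on the labeled data, then let the model label the unlabeled pool. The sketch below shows that generic loop with scikit-learn; the classifier, the binary 0/1 labels, and the confidence threshold are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of pseudo-labeling as a semi-supervised pre-processing step
# (generic, not the paper's exact framework). Assumes binary 0/1 labels so
# that argmax over predict_proba columns equals the predicted class.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_and_refit(X_lab, y_lab, X_unlab, threshold=0.9):
    model = LogisticRegression().fit(X_lab, y_lab)
    proba = model.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= threshold      # keep only confident guesses
    y_pseudo = proba.argmax(axis=1)[confident]
    X_aug = np.vstack([X_lab, X_unlab[confident]])
    y_aug = np.concatenate([y_lab, y_pseudo])
    return LogisticRegression().fit(X_aug, y_aug)   # retrain on the union
```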
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
- Fairness by Explicability and Adversarial SHAP Learning [0.0]
We propose a new definition of fairness that emphasises the role of an external auditor and model explicability.
We develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
We demonstrate our approaches using gradient and adaptive boosting on a synthetic dataset, the UCI Adult (Census) dataset, and a real-world credit scoring dataset.
arXiv Detail & Related papers (2020-03-11T14:36:34Z)