HyperFair: A Soft Approach to Integrating Fairness Criteria
- URL: http://arxiv.org/abs/2009.08952v1
- Date: Sat, 5 Sep 2020 05:00:06 GMT
- Title: HyperFair: A Soft Approach to Integrating Fairness Criteria
- Authors: Charles Dickens, Rishika Singh, Lise Getoor
- Abstract summary: We introduce HyperFair, a framework for enforcing soft fairness constraints in a hybrid recommender system.
We propose two ways to employ the methods we introduce: first as an extension of a probabilistic soft logic recommender system template; second as a fair retrofitting technique for black-box models.
We empirically validate our approach by implementing multiple HyperFair hybrid recommenders and comparing them to a state-of-the-art fair recommender.
- Score: 17.770533330914102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems are being employed across an increasingly diverse set of
domains that can potentially make a significant social and individual impact.
For this reason, considering fairness is a critical step in the design and
evaluation of such systems. In this paper, we introduce HyperFair, a general
framework for enforcing soft fairness constraints in a hybrid recommender
system. HyperFair models integrate variations of fairness metrics as a
regularization of a joint inference objective function. We implement our
approach using probabilistic soft logic and show that it is particularly
well-suited for this task as it is expressive and structural constraints can be
added to the system in a concise and interpretable manner. We propose two ways
to employ the methods we introduce: first as an extension of a probabilistic
soft logic recommender system template; second as a fair retrofitting technique
that can be used to improve the fairness of predictions from a black-box model.
We empirically validate our approach by implementing multiple HyperFair hybrid
recommenders and comparing them to a state-of-the-art fair recommender. We also
run experiments showing the effectiveness of our methods for the task of
retrofitting a black-box model and the trade-off between the amount of fairness
enforced and the prediction performance.
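The core idea of the abstract can be illustrated with a minimal sketch. This is not the paper's probabilistic soft logic implementation; it only mirrors the general recipe of adding a fairness metric as a soft regularizer to a joint prediction objective, with a weight controlling the fairness/accuracy trade-off. All names (`fair_adjust`, `base_scores`, `groups`, `lam`) are hypothetical.

```python
# Minimal sketch: fairness as a soft regularizer. We minimize squared
# deviation from base predictions plus a penalty on the gap between the
# mean predicted scores of two groups, via plain gradient descent.
import numpy as np

def fair_adjust(base_scores, groups, lam=5.0, steps=200, lr=0.05):
    """Return scores pulled toward base_scores while shrinking the
    difference in mean score between group 0 and group 1."""
    s = base_scores.astype(float).copy()
    g0, g1 = groups == 0, groups == 1
    for _ in range(steps):
        gap = s[g0].mean() - s[g1].mean()      # non-parity-style gap
        grad = 2 * (s - base_scores)           # fidelity term gradient
        grad[g0] += 2 * lam * gap / g0.sum()   # fairness term gradient
        grad[g1] -= 2 * lam * gap / g1.sum()
        s -= lr * grad
    return s

base = np.array([4.5, 4.0, 3.8, 2.0, 2.2, 2.5])
grp = np.array([0, 0, 0, 1, 1, 1])
adj = fair_adjust(base, grp)  # group-mean gap shrinks, scores stay close
```

A larger `lam` enforces more fairness at the cost of fidelity to the base predictions, which is exactly the trade-off the paper's experiments measure.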
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Dynamic fairness-aware recommendation through multi-agent social choice [10.556124653827647]
We argue that fairness in real-world application settings in general, and especially in the context of personalized recommendation, is much more complex and multi-faceted.
We propose a model to formalize multistakeholder fairness in recommender systems as a two-stage social choice problem.
arXiv Detail & Related papers (2023-03-02T05:06:17Z)
- Conformalized Fairness via Quantile Regression [8.180169144038345]
We propose a novel framework to learn a real-valued quantile function under the fairness requirement of Demographic Parity.
We establish theoretical guarantees of distribution-free coverage and exact fairness for the induced prediction interval constructed by fair quantiles.
Our results show the model's ability to uncover the mechanism underlying the fairness-accuracy trade-off in a wide range of societal and medical applications.
arXiv Detail & Related papers (2022-10-05T04:04:15Z)
- Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
arXiv Detail & Related papers (2021-06-25T02:02:51Z)
- "And the Winner Is...": Dynamic Lotteries for Multi-group Fairness-Aware Recommendation [37.35485045640196]
We argue that the previous literature has been based on simple, uniform, and often uni-dimensional notions of fairness.
We explicitly represent the design decisions that enter into the trade-off between accuracy and fairness across multiply-defined and intersecting protected groups.
We formulate lottery-based mechanisms for choosing between fairness concerns, and demonstrate their performance in two recommendation domains.
arXiv Detail & Related papers (2020-09-05T20:15:14Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Achieving Fairness via Post-Processing in Web-Scale Recommender Systems [6.5191290612443105]
We extend two definitions of fairness, equality of opportunity and equalized odds, to recommender systems.
We propose scalable methods for achieving equality of opportunity and equalized odds in rankings in the presence of position bias.
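One simple flavor of such post-processing can be sketched as quota-based re-ranking. This is not the paper's method; it only illustrates adjusting a ranking after a black-box scorer has run so that each group's share of the top-k matches a target, one coarse way to move toward fairness in rankings. All names (`rerank_with_quotas`, `quotas`) are hypothetical, and position bias is ignored here.

```python
# Minimal sketch: greedy post-processing re-ranking under group quotas.
def rerank_with_quotas(items, scores, groups, quotas, k):
    """Fill the top-k slot by slot, each time taking the highest-scoring
    remaining item from the group currently furthest below its quota."""
    order = sorted(range(len(items)), key=lambda i: -scores[i])
    remaining = {g: [i for i in order if groups[i] == g] for g in quotas}
    counts = {g: 0 for g in quotas}
    top = []
    while len(top) < k:
        # group with the largest quota deficit that still has candidates
        g = max((g for g in quotas if remaining[g]),
                key=lambda g: quotas[g] * k - counts[g])
        i = remaining[g].pop(0)
        counts[g] += 1
        top.append(items[i])
    return top

items = ["a", "b", "c", "d", "e", "f"]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
groups = ["M", "M", "M", "F", "F", "F"]
top4 = rerank_with_quotas(items, scores, groups, {"M": 0.5, "F": 0.5}, 4)
# → ["a", "d", "b", "e"]: two items per group instead of a score-only top-4
```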
arXiv Detail & Related papers (2020-06-19T20:12:13Z)
- Fair Bayesian Optimization [25.80374249896801]
We introduce a general constrained Bayesian optimization framework to optimize the performance of any machine learning (ML) model.
We apply BO with fairness constraints to a range of popular models, including random forests, boosting, and neural networks.
We show that our approach is competitive with specialized techniques that enforce model-specific fairness constraints.
arXiv Detail & Related papers (2020-06-09T08:31:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.