Achieving Fairness via Post-Processing in Web-Scale Recommender Systems
- URL: http://arxiv.org/abs/2006.11350v3
- Date: Thu, 11 Aug 2022 06:42:18 GMT
- Title: Achieving Fairness via Post-Processing in Web-Scale Recommender Systems
- Authors: Preetam Nandy, Cyrus Diciccio, Divya Venugopalan, Heloise Logan,
Kinjal Basu, Noureddine El Karoui
- Abstract summary: We extend the definitions of fairness to recommender systems, namely equality of opportunity and equalized odds.
We propose scalable methods for achieving equality of opportunity and equalized odds in rankings in the presence of position bias.
- Score: 6.5191290612443105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building fair recommender systems is a challenging and crucial area of study
due to its immense impact on society. We extend the definitions of two
commonly accepted notions of fairness to recommender systems, namely equality
of opportunity and equalized odds. These fairness measures ensure that equally
"qualified" (or "unqualified") candidates are treated equally regardless of
their protected attribute status (such as gender or race). We propose scalable
methods for achieving equality of opportunity and equalized odds in rankings in
the presence of position bias, which commonly plagues data generated from
recommender systems. Our algorithms are model agnostic in the sense that they
depend only on the final scores provided by a model, making them easily
applicable to virtually all web-scale recommender systems. We conduct extensive
simulations as well as real-world experiments to show the efficacy of our
approach.
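One way to realize such model-agnostic post-processing is to give each protected group its own score threshold, chosen so that qualified candidates are selected at the same rate in every group (equality of opportunity). A minimal Python sketch, using only the model's final scores; the function name, inputs, and top-k rule are illustrative assumptions, not the authors' exact procedure, and position-bias correction is omitted:

```python
def equal_opportunity_thresholds(scores, groups, qualified, target_tpr=0.5):
    """Pick, per group, the score threshold that admits a `target_tpr`
    fraction of that group's qualified candidates. Selecting each
    candidate against their group's threshold then equalizes the
    true-positive rate across groups."""
    # group -> scores of that group's qualified candidates
    by_group = {}
    for s, g, q in zip(scores, groups, qualified):
        if q:
            by_group.setdefault(g, []).append(s)
    thresholds = {}
    for g, qs in by_group.items():
        qs.sort(reverse=True)
        # admit the top target_tpr fraction of qualified candidates
        k = max(1, round(target_tpr * len(qs)))
        thresholds[g] = qs[k - 1]
    return thresholds
```

Because the adjustment touches only final scores, it can sit behind any upstream ranking model, which is what makes the approach easy to deploy at web scale.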
Related papers
- Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
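The linear-programming idea can be illustrated with a toy problem: choose a distribution over candidates that maximizes expected utility while guaranteeing every candidate a minimum selection probability. A hedged SciPy sketch; the floor constraint and names are illustrative, not the paper's axioms:

```python
from scipy.optimize import linprog

def fair_allocation_lp(utilities, min_prob):
    """Toy LP: maximize sum_i u_i * p_i over distributions p,
    subject to sum_i p_i = 1 and p_i >= min_prob for all i --
    a crude individual-fairness floor on selection probability."""
    n = len(utilities)
    res = linprog(
        c=[-u for u in utilities],        # linprog minimizes, so negate
        A_eq=[[1.0] * n], b_eq=[1.0],     # probabilities sum to 1
        bounds=[(min_prob, 1.0)] * n,     # per-candidate fairness floor
    )
    return list(res.x)
```

With the floor met, the remaining probability mass flows to the highest-utility candidate, so the solver trades utility against fairness exactly as the constraint dictates.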
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Equal Experience in Recommender Systems [21.298427869586686]
We introduce a novel fairness notion (that we call equal experience) to regulate unfairness in the presence of biased data.
We propose an optimization framework that incorporates the fairness notion as a regularization term, as well as introduce computationally efficient algorithms that solve the optimization.
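A fairness notion used as a regularization term typically yields an objective of the form accuracy loss plus a weighted penalty on group disparity. An illustrative sketch; the squared group-mean-gap penalty is an assumption for exposition, not the paper's "equal experience" measure:

```python
def regularized_loss(preds, labels, groups, lam):
    """Illustrative objective: mean squared error plus lam times the
    squared gap between the largest and smallest group-mean prediction."""
    mse = sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(preds)
    # group -> that group's predictions
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    group_means = [sum(v) / len(v) for v in by_group.values()]
    gap = max(group_means) - min(group_means)
    return mse + lam * gap ** 2
```

Sweeping `lam` from 0 upward traces out the accuracy-fairness trade-off: at 0 the objective is pure accuracy, and larger values push group predictions together.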
arXiv Detail & Related papers (2022-10-12T05:53:05Z)
- Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making [35.21763166288736]
We propose a general framework to create data-driven fairness-aware scoring systems.
We show that the proposed framework provides practitioners or policymakers great flexibility to select their desired fairness requirements.
arXiv Detail & Related papers (2021-09-21T09:46:35Z)
- "And the Winner Is...": Dynamic Lotteries for Multi-group Fairness-Aware
Recommendation [37.35485045640196]
We argue that the previous literature has been based on simple, uniform, and often one-dimensional notions of fairness.
We explicitly represent the design decisions that enter into the trade-off between accuracy and fairness across multiply-defined and intersecting protected groups.
We formulate lottery-based mechanisms for choosing between fairness concerns, and demonstrate their performance in two recommendation domains.
arXiv Detail & Related papers (2020-09-05T20:15:14Z)
- HyperFair: A Soft Approach to Integrating Fairness Criteria [17.770533330914102]
We introduce HyperFair, a framework for enforcing soft fairness constraints in a hybrid recommender system.
We propose two ways to employ the methods we introduce, the first as an extension of a probabilistic soft logic recommender system template.
We empirically validate our approach by implementing multiple HyperFair hybrid recommenders and compare them to a state-of-the-art fair recommender.
arXiv Detail & Related papers (2020-09-05T05:00:06Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Opportunistic Multi-aspect Fairness through Personalized Re-ranking [5.8562079474220665]
We present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions.
We show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches.
arXiv Detail & Related papers (2020-05-21T04:25:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.