Dynamic fairness-aware recommendation through multi-agent social choice
- URL: http://arxiv.org/abs/2303.00968v3
- Date: Tue, 27 Feb 2024 18:44:19 GMT
- Title: Dynamic fairness-aware recommendation through multi-agent social choice
- Authors: Amanda Aird, Paresha Farastu, Joshua Sun, Elena Štefancová,
Cassidy All, Amy Voida, Nicholas Mattei, Robin Burke
- Abstract summary: We argue that fairness in real-world application settings in general, and especially in the context of personalized recommendation, is much more complex and multi-faceted.
We propose a model to formalize multistakeholder fairness in recommender systems as a two-stage social choice problem.
- Score: 10.556124653827647
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Algorithmic fairness in the context of personalized recommendation presents
significantly different challenges from those commonly encountered in
classification tasks. Researchers studying classification have generally
considered fairness to be a matter of achieving equality of outcomes between a
protected and unprotected group, and built algorithmic interventions on this
basis. We argue that fairness in real-world application settings in general,
and especially in the context of personalized recommendation, is much more
complex and multi-faceted, requiring a more general approach. We propose a
model to formalize multistakeholder fairness in recommender systems as a
two-stage social choice problem. In particular, we express recommendation fairness
as a novel combination of an allocation and an aggregation problem, which
integrates both fairness concerns and personalized recommendation provisions,
and derive new recommendation techniques based on this formulation. Simulations
demonstrate the ability of the framework to integrate multiple fairness
concerns in a dynamic way.
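As an illustrative sketch only (not the authors' implementation), the two-stage formulation can be mocked up in a few lines: an allocation stage that selects which fairness agents are active for a given recommendation opportunity, weighted by how poorly each concern is currently being served, and an aggregation stage that combines the active agents' item scores with the personalized scores. All names, weights, and the linear aggregation rule below are hypothetical choices for the sketch.

```python
import random

# Hypothetical fairness agents: name -> (current unfairness in [0, 1], item scorer)
fairness_agents = {
    "provider_parity": (0.7, lambda item: 1.0 if item["protected"] else 0.0),
    "genre_coverage": (0.3, lambda item: item["novel_genre"]),
}

def allocate(agents, k=1):
    """Stage 1: pick which fairness agents participate, favoring the
    concerns that are currently most unfairly treated."""
    names = list(agents)
    weights = [unfairness for unfairness, _ in agents.values()]
    return random.choices(names, weights=weights, k=k)

def aggregate(user_scores, items, active, agents, lam=0.5):
    """Stage 2: rank items by a blend of personalized score and the
    active fairness agents' scores (lam trades the two off)."""
    def total(item):
        fair = sum(agents[name][1](item) for name in active)
        return (1 - lam) * user_scores[item["id"]] + lam * fair
    return sorted(items, key=total, reverse=True)

items = [
    {"id": "a", "protected": True, "novel_genre": 0.2},
    {"id": "b", "protected": False, "novel_genre": 0.9},
]
user_scores = {"a": 0.4, "b": 0.9}
active = allocate(fairness_agents)
ranking = aggregate(user_scores, items, active, fairness_agents)
print([item["id"] for item in ranking])
```

Because the allocation stage is stochastic, the final ranking varies across calls, which is one way a system can serve multiple fairness concerns dynamically over time rather than optimizing a single fixed objective.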
Related papers
- Social Choice for Heterogeneous Fairness in Recommendation [9.753088666705985]
Algorithmic fairness in recommender systems requires close attention to the needs of a diverse set of stakeholders.
Previous work has often been limited by fixed, single-objective definitions of fairness.
Our work approaches recommendation fairness from the standpoint of computational social choice.
arXiv Detail & Related papers (2024-10-06T17:01:18Z)
- Exploring Social Choice Mechanisms for Recommendation Fairness in SCRUF [11.43931298398417]
A social choice formulation of the fairness problem offers a flexible and multi-aspect alternative to fairness-aware recommendations.
We show that different classes of choice and allocation mechanisms yield different but consistent fairness / accuracy tradeoffs.
arXiv Detail & Related papers (2023-09-10T17:47:21Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z)
- Choosing the Best of Both Worlds: Diverse and Novel Recommendations through Multi-Objective Reinforcement Learning [68.45370492516531]
We introduce Scalarized Multi-Objective Reinforcement Learning (SMORL) for the Recommender Systems (RS) setting.
The SMORL agent augments standard recommendation models with additional RL layers that require it to simultaneously satisfy three principal objectives: accuracy, diversity, and novelty of recommendations.
Our experimental results on two real-world datasets reveal a substantial increase in aggregate diversity, a moderate increase in accuracy, reduced repetitiveness of recommendations, and demonstrate the importance of reinforcing diversity and novelty as complementary objectives.
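The core idea behind scalarized multi-objective RL is simple to state: per-objective rewards are collapsed into a single scalar via fixed weights, so an unmodified RL training loop can optimize all objectives at once. The weights and reward values below are hypothetical, purely to illustrate the scalarization step rather than the SMORL architecture itself.

```python
def scalarize(rewards, weights):
    """Collapse per-objective rewards into one scalar reward
    as a weighted sum, so a standard RL agent can train on it."""
    assert len(rewards) == len(weights)
    return sum(r * w for r, w in zip(rewards, weights))

# accuracy, diversity, novelty rewards for one recommendation step
r = scalarize([1.0, 0.3, 0.5], weights=[0.6, 0.2, 0.2])
print(r)  # approximately 0.76
```

The choice of weights encodes the trade-off among objectives; tuning them shifts the policy along the accuracy/diversity/novelty frontier.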
arXiv Detail & Related papers (2021-10-28T13:22:45Z)
- Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective [104.67295710363679]
In the classical multi-armed bandit problem, instance-dependent algorithms attain improved performance on "easy" problems with a gap between the best and second-best arm.
We introduce a family of complexity measures that are both sufficient and necessary to obtain instance-dependent regret bounds.
We then introduce new oracle-efficient algorithms which adapt to the gap whenever possible, while also attaining the minimax rate in the worst case.
arXiv Detail & Related papers (2020-10-07T01:33:06Z)
- Simultaneous Relevance and Diversity: A New Recommendation Inference Approach [81.44167398308979]
We propose a new approach, which extends the general collaborative filtering (CF) by introducing a new way of CF inference, negative-to-positive.
Our approach is applicable to a wide range of recommendation scenarios/use-cases at various sophistication levels.
Our analysis and experiments on public datasets and real-world production data show that our approach outperforms existing methods on relevance and diversity simultaneously.
arXiv Detail & Related papers (2020-09-27T22:20:12Z)
- "And the Winner Is...": Dynamic Lotteries for Multi-group Fairness-Aware Recommendation [37.35485045640196]
We argue that the previous literature has been based on simple, uniform, and often uni-dimensional notions of fairness.
We explicitly represent the design decisions that enter into the trade-off between accuracy and fairness across multiply-defined and intersecting protected groups.
We formulate lottery-based mechanisms for choosing between fairness concerns, and demonstrate their performance in two recommendation domains.
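A lottery-based mechanism of this flavor can be sketched as a weighted random draw among fairness concerns, where concerns with greater current unmet need are more likely to be selected at each recommendation opportunity. The concern names and need values below are hypothetical, and the weighting rule is an illustrative assumption rather than the paper's exact mechanism.

```python
import random

def fairness_lottery(concerns, rng=random):
    """Weighted lottery over fairness concerns: the probability of
    selecting a concern is proportional to its current unmet need."""
    names = list(concerns)
    weights = [concerns[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Hypothetical unmet-need levels for two intersecting protected-group concerns
concerns = {"group_A_exposure": 0.8, "group_B_exposure": 0.2}
random.seed(42)
picks = [fairness_lottery(concerns) for _ in range(1000)]
share_A = picks.count("group_A_exposure") / len(picks)
```

Over many recommendation opportunities the empirical selection shares track the need weights, so no concern is permanently starved, which is the dynamic property a fixed single-objective formulation lacks.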
arXiv Detail & Related papers (2020-09-05T20:15:14Z)
- HyperFair: A Soft Approach to Integrating Fairness Criteria [17.770533330914102]
We introduce HyperFair, a framework for enforcing soft fairness constraints in a hybrid recommender system.
We propose two ways to employ the methods we introduce; the first is as an extension of a probabilistic soft logic recommender system template.
We empirically validate our approach by implementing multiple HyperFair hybrid recommenders and compare them to a state-of-the-art fair recommender.
arXiv Detail & Related papers (2020-09-05T05:00:06Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.