Two-Sided Fairness in Non-Personalised Recommendations
- URL: http://arxiv.org/abs/2011.05287v1
- Date: Tue, 10 Nov 2020 18:11:37 GMT
- Title: Two-Sided Fairness in Non-Personalised Recommendations
- Authors: Aadi Swadipto Mondal, Rakesh Bal, Sayan Sinha, and Gourab K Patro
- Abstract summary: We discuss two specific fairness concerns together (traditionally studied separately): user fairness and organisational fairness.
For user fairness, we test with methods from social choice theory, i.e., various voting rules known to better represent user choices in their results.
Analysing the results obtained from voting rule-based recommendation, we find that while the well-known voting rules perform well on the user side, they show high bias values.
- Score: 6.403167095324894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems are one of the most widely used services on several
online platforms to suggest potential items to the end-users. These services
often use different machine learning techniques for which fairness is a
concerning factor, especially when the downstream services have the ability to
cause social ramifications. Thus, focusing on the non-personalised (global)
recommendations in news media platforms (e.g., top-k trending topics on
Twitter, top-k news on a news platform, etc.), we discuss two specific
fairness concerns together (traditionally studied separately): user fairness
and organisational fairness. While user fairness captures the idea of
representing the choices of all the individual users in the case of global
recommendations, organisational fairness tries to ensure
politically/ideologically balanced recommendation sets. This makes user
fairness a user-side requirement and organisational fairness a platform-side
requirement. For user fairness, we test with methods from social choice theory,
i.e., various voting rules known to better represent user choices in their
results. Even in our application of voting rules to the recommendation setup,
we observe high user satisfaction scores. Now for organisational fairness, we
propose a bias metric which measures the aggregate ideological bias of a
recommended set of items (articles). Analysing the results obtained from voting
rule-based recommendation, we find that while the well-known voting rules are
better from the user side, they show high bias values and are clearly not
suitable for the organisational requirements of the platforms. Thus, there is a
need to
build an encompassing mechanism by cohesively bridging ideas of user fairness
and organisational fairness. In this extended abstract, we intend to frame the
elementary ideas along with the motivation behind the requirement of such
a mechanism.
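To make the two notions concrete, the following is a minimal, hypothetical Python sketch (not the paper's published code or exact definitions): it builds a non-personalised top-k set with the Borda count, one voting rule of the social-choice kind tested for user fairness, and then scores the set with a simple mean-leaning measure in the spirit of the proposed organisational bias metric. The article identifiers, user rankings, and leaning scores are all assumed for illustration.
```python
# Hypothetical sketch of voting-rule-based top-k selection plus an
# aggregate ideological-bias score; the data and the bias definition
# are assumptions for illustration, not taken from the paper.
from collections import defaultdict

def borda_top_k(rankings, k):
    """Pick k items by Borda count from users' ranked preference lists."""
    scores = defaultdict(float)
    for ranking in rankings:
        m = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += m - 1 - position  # first place earns m-1 points
    return sorted(scores, key=scores.get, reverse=True)[:k]

def aggregate_bias(recommended, leaning):
    """Mean ideological leaning of the recommended set.

    `leaning` maps each article to a score in [-1, +1] (left to right);
    values near 0 indicate an ideologically balanced set.
    """
    return sum(leaning[item] for item in recommended) / len(recommended)

# Toy data: three users each rank four articles; leanings are assumed.
rankings = [["a1", "a2", "a3", "a4"],
            ["a2", "a1", "a4", "a3"],
            ["a1", "a4", "a2", "a3"]]
leaning = {"a1": -0.8, "a2": -0.6, "a3": 0.7, "a4": 0.5}

top2 = borda_top_k(rankings, k=2)
print(top2, aggregate_bias(top2, leaning))  # ['a1', 'a2'] -0.7
```
On this toy data, the Borda winners are exactly the two articles the users rank highest, yet the set's mean leaning is -0.7, far from balanced; this mirrors the abstract's finding that voting rules which do well on the user side can still fail the platform-side organisational requirement.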
Related papers
- Why Multi-Interest Fairness Matters: Hypergraph Contrastive Multi-Interest Learning for Fair Conversational Recommender System [55.39026603611269]
We propose a novel framework, Hypergraph Contrastive Multi-Interest Learning for Fair Conversational Recommender System (HyFairCRS). HyFairCRS aims to promote multi-interest diversity fairness in dynamic and interactive Conversational Recommender Systems (CRSs). Experiments on two CRS-based datasets show that HyFairCRS achieves a new state-of-the-art performance while effectively alleviating unfairness.
arXiv Detail & Related papers (2025-07-01T11:39:42Z) - User-item fairness tradeoffs in recommendations [0.7490658564954134]
We develop a model of recommendations with user and item fairness objectives.
We identify two phenomena: (a) when user preferences are diverse, there is "free" item and user fairness; and (b) users whose preferences are misestimated can be especially disadvantaged by item fairness constraints.
arXiv Detail & Related papers (2024-12-05T18:59:51Z) - Bypassing the Popularity Bias: Repurposing Models for Better Long-Tail Recommendation [0.0]
We aim to achieve a more equitable distribution of exposure among publishers on an online content platform.
We propose a novel approach of repurposing existing components of an industrial recommender system to deliver valuable exposure to underrepresented publishers.
arXiv Detail & Related papers (2024-09-17T15:40:55Z) - User Welfare Optimization in Recommender Systems with Competing Content Creators [65.25721571688369]
In this study, we perform system-side user welfare optimization under a competitive game setting among content creators.
We propose an algorithmic solution for the platform, which dynamically computes a sequence of weights for each user based on their satisfaction of the recommended content.
These weights are then utilized to design mechanisms that adjust the recommendation policy or the post-recommendation rewards, thereby influencing creators' content production strategies.
arXiv Detail & Related papers (2024-04-28T21:09:52Z) - A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z) - Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering-based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z) - Experiments on Generalizability of User-Oriented Fairness in Recommender
Systems [2.0932879442844476]
A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations provided by the re-ranking framework from both user- (e.g., NDCG) and item-side (e.g., novelty, item-fairness) metrics.
arXiv Detail & Related papers (2022-05-17T12:36:30Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - Balancing Accuracy and Fairness for Interactive Recommendation with
Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
arXiv Detail & Related papers (2021-06-25T02:02:51Z) - Towards Personalized Fairness based on Causal Notion [18.5897206797918]
We introduce a framework for achieving counterfactually fair recommendations through adversarial learning.
Our method can generate fairer recommendations for users with a desirable recommendation performance.
arXiv Detail & Related papers (2021-05-20T15:24:34Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z) - Exploring User Opinions of Fairness in Recommender Systems [13.749884072907163]
We ask users what their ideas of fair treatment in recommendation might be.
We analyze what might cause discrepancies or changes in users' opinions towards fairness.
arXiv Detail & Related papers (2020-03-13T19:44:26Z)