Fairness Dynamics in Digital Economy Platforms with Biased Ratings
- URL: http://arxiv.org/abs/2602.16695v1
- Date: Wed, 18 Feb 2026 18:41:16 GMT
- Title: Fairness Dynamics in Digital Economy Platforms with Biased Ratings
- Authors: J. Martin Smit, Fernando P. Santos
- Abstract summary: We study how digital platforms can perpetuate or counteract rating-based discrimination. Our results demonstrate a fundamental trade-off between user experience and fairness. Our results also provide evidence that intervening by tuning the demographics of the search results is a highly effective way of reducing unfairness.
- Score: 50.29721091981893
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The digital services economy consists of online platforms that facilitate interactions between service providers and consumers. This ecosystem is characterized by short-term, often one-off, transactions between parties that have no prior familiarity. To establish trust among users, platforms employ rating systems which allow users to report on the quality of their previous interactions. However, while arguably crucial for these platforms to function, rating systems can perpetuate negative biases against marginalised groups. This paper investigates how to design platforms around biased reputation systems, reducing discrimination while maintaining incentives for all service providers to offer high quality service for users. We introduce an evolutionary game theoretical model to study how digital platforms can perpetuate or counteract rating-based discrimination. We focus on the platforms' decisions to promote service providers who have high reputations or who belong to a specific protected group. Our results demonstrate a fundamental trade-off between user experience and fairness: promoting highly-rated providers benefits users, but lowers the demand for marginalised providers against which the ratings are biased. Our results also provide evidence that intervening by tuning the demographics of the search results is a highly effective way of reducing unfairness while minimally impacting users. Furthermore, we show that even when precise measurements on the level of rating bias affecting marginalised service providers is unavailable, there is still potential to improve upon a recommender system which ignores protected characteristics. Altogether, our model highlights the benefits of proactive anti-discrimination design in systems where ratings are used to promote cooperative behaviour.
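The promotion trade-off described in the abstract can be illustrated with a toy simulation (this is a simplified sketch, not the paper's evolutionary game-theoretic model; all parameters and the bias mechanism here are illustrative assumptions):

```python
import random

def simulate_exposure(n=1000, bias=0.2, minority_frac=0.3, slots=100,
                      quota=None, seed=0):
    """Toy sketch, not the paper's model: providers have the same quality
    distribution, but ratings of marginalised providers are depressed by
    `bias`. The platform promotes the `slots` top-rated providers; if
    `quota` is set, that fraction of slots is reserved for the marginalised
    group. Returns the marginalised share of promoted slots."""
    rng = random.Random(seed)
    providers = []
    for _ in range(n):
        marginalised = rng.random() < minority_frac
        quality = rng.random()                              # true quality
        rating = quality - (bias if marginalised else 0.0)  # biased report
        providers.append((rating, marginalised))
    ranked = sorted(providers, key=lambda p: p[0], reverse=True)
    if quota is None:
        top = ranked[:slots]
    else:
        reserved = [p for p in ranked if p[1]][:int(quota * slots)]
        others = [p for p in ranked if not p[1]][:slots - len(reserved)]
        top = reserved + others
    return sum(p[1] for p in top) / slots

merit_only = simulate_exposure()          # reputation-only promotion
tuned = simulate_exposure(quota=0.3)      # demographic tuning of results
```

Under purely reputation-based promotion, biased ratings push marginalised providers out of the promoted slots; reserving a fraction of slots restores their exposure while still filling the remaining slots by rating.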
Related papers
- Reducing Popularity Influence by Addressing Position Bias [0.0]
We show that position debiasing can effectively reduce a skew in the popularity of items induced by the position bias through a feedback loop. We show that position debiasing can significantly improve assortment utilization, without any degradation in user engagement or financial metrics.
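A standard way to correct for position bias is inverse-propensity scoring; the sketch below shows the textbook form (the paper's exact estimator may differ, and the examination probabilities are assumptions):

```python
def ips_relevance(click_logs, examine_prob):
    """Hedged sketch of inverse-propensity scoring for position debiasing.
    click_logs: (item, rank, clicked) tuples; examine_prob: rank -> chance
    the position is examined. Clicks at low-visibility ranks are upweighted
    so an item's estimate no longer depends on where it was shown."""
    clicks, imps = {}, {}
    for item, rank, clicked in click_logs:
        imps[item] = imps.get(item, 0) + 1
        clicks[item] = clicks.get(item, 0.0) + clicked / examine_prob[rank]
    return {item: clicks[item] / imps[item] for item in imps}
```

For example, an item clicked half the time at a rank that is only examined half the time receives the same debiased relevance as an item always clicked at the top rank.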
arXiv Detail & Related papers (2024-12-11T21:16:37Z) - Interpolating Item and User Fairness in Multi-Sided Recommendations [13.635310806431198]
We introduce FAIR, a novel fair recommendation framework.
We propose a low-regret algorithm FORM that concurrently performs real-time learning and fair recommendations, two tasks that are often at odds.
We demonstrate the efficacy of our framework and method in maintaining platform revenue while ensuring desired levels of fairness for both items and users.
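The revenue-versus-fairness balancing act can be sketched with a minimal allocation rule (illustrative only, not the FORM algorithm; the floor parameter and greedy remainder are assumptions for the sketch):

```python
def fair_allocation(revenue, group, floor):
    """Illustrative sketch: split recommendation probability so that every
    item group receives at least `floor` in total, then give the remainder
    to the single highest-revenue item."""
    probs = {item: 0.0 for item in revenue}
    used = 0.0
    for g in sorted(set(group.values())):
        best = max((i for i in revenue if group[i] == g),
                   key=lambda i: revenue[i])   # group's top earner
        probs[best] += floor
        used += floor
    top = max(revenue, key=lambda i: revenue[i])
    probs[top] += 1.0 - used
    return probs
```

Raising `floor` trades platform revenue for item-side fairness, which is the tension the framework formalizes.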
arXiv Detail & Related papers (2023-06-12T15:00:58Z) - A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z) - Consumer-side Fairness in Recommender Systems: A Systematic Survey of Methods and Evaluation [1.4123323039043334]
Growing awareness of discrimination in machine learning methods motivated both academia and industry to research how fairness can be ensured in recommender systems.
For recommender systems, such issues are well exemplified by occupation recommendation, where biases in historical data may lead to recommender systems relating one gender to lower wages or to the propagation of stereotypes.
This survey serves as a systematic overview and discussion of the current research on consumer-side fairness in recommender systems.
arXiv Detail & Related papers (2023-05-16T10:07:41Z) - FedGRec: Federated Graph Recommender System with Lazy Update of Latent Embeddings [108.77460689459247]
We propose a Federated Graph Recommender System (FedGRec) to mitigate privacy concerns.
In our system, users and the server explicitly store latent embeddings for users and items, where the latent embeddings summarize different orders of indirect user-item interactions.
We perform extensive empirical evaluations to verify the efficacy of using latent embeddings as a proxy of missing interaction graph.
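The role of latent embeddings as a stand-in for the interaction graph can be hinted at with client-side scoring (hypothetical data shapes; FedGRec's actual lazy-update protocol between clients and server is more involved):

```python
def recommend_local(user_emb, item_embs, k):
    """Minimal sketch of on-device scoring over cached latent embeddings.
    Items are scored by dot product with the user embedding, so no raw
    interaction data needs to leave the device."""
    score = {item: sum(u * v for u, v in zip(user_emb, emb))
             for item, emb in item_embs.items()}
    return sorted(score, key=lambda item: -score[item])[:k]
```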
arXiv Detail & Related papers (2022-10-25T01:08:20Z) - Who Pays? Personalization, Bossiness and the Cost of Fairness [24.75616876832476]
Fairness-aware recommender systems that have a provider-side fairness concern seek to ensure that protected group(s) of providers have a fair opportunity to promote their items or products.
There is a "cost of fairness" borne by the consumer side of the interaction when such a solution is implemented.
This position paper introduces the concept of bossiness, shows its application in fairness-aware recommendation and discusses strategies for reducing this strategic incentive.
arXiv Detail & Related papers (2022-09-08T21:47:10Z) - Competition, Alignment, and Equilibria in Digital Marketplaces [97.03797129675951]
We study a duopoly market where platform actions are bandit algorithms and the two platforms compete for user participation.
Our main finding is that competition in this market does not perfectly align market outcomes with user utility.
arXiv Detail & Related papers (2022-08-30T17:43:58Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
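Group-level exposure accounting can be illustrated with the common position-weighted examination model (a hedged illustration using the standard 1/log2(rank+1) weighting; the paper's joint metrics are more general):

```python
import math

def group_exposure(ranking, group_of):
    """Average position-weighted exposure per group. ranking: items ordered
    best-first; group_of: item -> group label. Exposure at each rank decays
    as 1/log2(rank + 1), a common examination assumption."""
    exposure, counts = {}, {}
    for rank, item in enumerate(ranking, start=1):
        g = group_of[item]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(rank + 1)
        counts[g] = counts.get(g, 0) + 1
    return {g: exposure[g] / counts[g] for g in exposure}
```

Comparing the per-group averages (for producer groups, or analogously for consumer groups over the utility they receive) surfaces the systemic disparities the paper targets.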
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Two-Sided Fairness in Non-Personalised Recommendations [6.403167095324894]
We discuss two specific fairness concerns together (traditionally studied separately): user fairness and organisational fairness.
For user fairness, we test with methods from social choice theory, i.e., various voting rules known to better represent user choices in their results.
Analysing the results obtained from voting rule-based recommendation, we find that while the well-known voting rules are better from the user side, they show high bias values.
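Borda count is one of the voting rules such comparisons typically include; a textbook sketch of aggregating user rankings into a non-personalised recommendation list (the scoring details here are illustrative, not necessarily the paper's exact setup):

```python
def borda_top_k(preferences, k):
    """Borda-count aggregation: each user ranking awards an item
    (m - 1 - position) points, where m is the ranking length. Returns the
    k highest-scoring items, ties broken alphabetically."""
    score = {}
    for ranking in preferences:
        m = len(ranking)
        for pos, item in enumerate(ranking):
            score[item] = score.get(item, 0) + (m - 1 - pos)
    return sorted(score, key=lambda item: (-score[item], item))[:k]
```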
arXiv Detail & Related papers (2020-11-10T18:11:37Z) - On the Identification of Fair Auditors to Evaluate Recommender Systems based on a Novel Non-Comparative Fairness Notion [1.116812194101501]
Decision-support systems have been found to be discriminatory in the context of many practical deployments.
We propose a new fairness notion based on the principle of non-comparative justice.
We show that the proposed fairness notion also provides guarantees in terms of comparative fairness notions.
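The non-comparative idea can be rendered as a toy metric (the paper's formal notion is richer; the dictionaries and the notion of an "ideal" outcome per individual are assumptions of this sketch): each individual's outcome is judged against their own ideal treatment, not against how other groups are treated.

```python
def noncomparative_unfairness(decisions, ideal):
    """Fraction of individuals whose decision deviates from their own
    ideal outcome, independent of any cross-group comparison."""
    return sum(decisions[i] != ideal[i] for i in decisions) / len(decisions)
```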
arXiv Detail & Related papers (2020-09-09T16:04:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.