Can Probabilistic Feedback Drive User Impacts in Online Platforms?
- URL: http://arxiv.org/abs/2401.05304v2
- Date: Thu, 25 Jan 2024 05:14:26 GMT
- Title: Can Probabilistic Feedback Drive User Impacts in Online Platforms?
- Authors: Jessica Dai, Bailey Flanigan, Nika Haghtalab, Meena Jagadeesan, Chara
Podimata
- Abstract summary: A common explanation for negative user impacts of content recommender systems is misalignment between the platform's objective and user welfare.
In this work, we show that misalignment in the platform's objective is not the only potential cause of unintended impacts on users.
The source of these user impacts is that different pieces of content may generate observable user reactions (feedback information) at different rates.
- Score: 26.052963782865294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common explanation for negative user impacts of content recommender systems
is misalignment between the platform's objective and user welfare. In this
work, we show that misalignment in the platform's objective is not the only
potential cause of unintended impacts on users: even when the platform's
objective is fully aligned with user welfare, the platform's learning algorithm
can induce negative downstream impacts on users. The source of these user
impacts is that different pieces of content may generate observable user
reactions (feedback information) at different rates; these feedback rates may
correlate with content properties, such as controversiality or demographic
similarity of the creator, that affect the user experience. Since differences
in feedback rates can impact how often the learning algorithm engages with
different content, the learning algorithm may inadvertently promote content
with certain such properties. Using the multi-armed bandit framework with
probabilistic feedback, we examine the relationship between feedback rates and
a learning algorithm's engagement with individual arms for different no-regret
algorithms. We prove that no-regret algorithms can exhibit a wide range of
dependencies: if the feedback rate of an arm increases, some no-regret
algorithms engage with the arm more, some no-regret algorithms engage with the
arm less, and other no-regret algorithms engage with the arm approximately the
same number of times. From a platform design perspective, our results highlight
the importance of looking beyond regret when measuring an algorithm's
performance, and assessing the nature of a learning algorithm's engagement with
different types of content as well as their resulting downstream impacts.
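To make the setting concrete, below is a minimal simulation sketch, not taken from the paper: two arms have identical reward means but different feedback rates, and the pull counts of two illustrative bandit algorithms (a UCB1-style index and epsilon-greedy, both estimating means from observed feedback only) are compared. The function names, parameters, and algorithm choices are assumptions for illustration, not the constructions analyzed in the paper.

```python
import math
import random

def simulate(choose_arm, means, feedback_rates, horizon=20000, seed=0):
    """Play a bandit where pulling arm i yields a Bernoulli(means[i]) reward,
    but the reward is observed only with probability feedback_rates[i]
    (probabilistic feedback). Returns the pull count of each arm."""
    rng = random.Random(seed)
    k = len(means)
    pulls = [0] * k      # times each arm was played (the "engagement" measure)
    obs = [0] * k        # times feedback was actually observed
    wins = [0.0] * k     # sum of observed rewards
    for t in range(horizon):
        arm = choose_arm(t, k, obs, wins, rng)
        pulls[arm] += 1
        if rng.random() < feedback_rates[arm]:   # does the user reaction surface?
            obs[arm] += 1
            wins[arm] += 1.0 if rng.random() < means[arm] else 0.0
    return pulls

def ucb(t, k, obs, wins, rng):
    # UCB1-style index computed on observed feedback only
    for i in range(k):
        if obs[i] == 0:
            return i
    return max(range(k),
               key=lambda i: wins[i] / obs[i] + math.sqrt(2 * math.log(t + 1) / obs[i]))

def eps_greedy(t, k, obs, wins, rng, eps=0.05):
    # epsilon-greedy on observed feedback only
    for i in range(k):
        if obs[i] == 0:
            return i
    if rng.random() < eps:
        return rng.randrange(k)
    return max(range(k), key=lambda i: wins[i] / obs[i])

# Two arms with identical reward means but very different feedback rates.
means = [0.5, 0.5]
feedback_rates = [0.9, 0.1]
print("UCB pulls:        ", simulate(ucb, means, feedback_rates))
print("eps-greedy pulls: ", simulate(eps_greedy, means, feedback_rates))
```

Whether the low-feedback arm ends up pulled more often, less often, or roughly as often as the high-feedback arm depends on the algorithm, which is the qualitative phenomenon the abstract describes.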
Related papers
- Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems [3.990406494980651]
This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems.
By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration affect their recommendations.
arXiv Detail & Related papers (2024-09-10T23:58:27Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Modeling Content Creator Incentives on Algorithm-Curated Platforms [76.53541575455978]
We study how algorithmic choices affect the existence and character of (Nash) equilibria in exposure games.
We propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.
arXiv Detail & Related papers (2022-06-27T08:16:59Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Interpretable Assessment of Fairness During Model Evaluation [1.2183405753834562]
We introduce a novel hierarchical clustering algorithm to detect heterogeneity among users in given sets of sub-populations.
We demonstrate the performance of the algorithm on real data from LinkedIn.
arXiv Detail & Related papers (2020-10-26T02:31:17Z)
- Examining the Impact of Algorithm Awareness on Wikidata's Recommender System Recoin [12.167153941840958]
We conduct online experiments with 105 MTurk participants on the recommender system Recoin, a gadget for Wikidata.
Our findings include a positive correlation between comprehension of and trust in an algorithmic system in our interactive redesign.
Our results are not conclusive yet, and suggest that the measures of comprehension, fairness, accuracy and trust are not yet exhaustive for the empirical study of algorithm awareness.
arXiv Detail & Related papers (2020-09-18T20:06:53Z)
- Partial Bandit and Semi-Bandit: Making the Most Out of Scarce Users' Feedback [62.997667081978825]
We present a novel approach for considering user feedback and evaluate it using three distinct strategies.
Despite a limited amount of feedback returned by users (as low as 20% of the total), our approach obtains results similar to those of state-of-the-art approaches.
arXiv Detail & Related papers (2020-09-16T07:32:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.