Middle-Aged Video Consumers' Beliefs About Algorithmic Recommendations
on YouTube
- URL: http://arxiv.org/abs/2008.03202v1
- Date: Fri, 7 Aug 2020 14:35:50 GMT
- Title: Middle-Aged Video Consumers' Beliefs About Algorithmic Recommendations
on YouTube
- Authors: Oscar Alvarado, Hendrik Heuer, Vero Vanden Abeele, Andreas Breiter,
Katrien Verbert
- Abstract summary: We conduct semi-structured interviews with middle-aged YouTube video consumers to analyze user beliefs about the video recommendation system.
We identify four groups of user beliefs: Previous Actions, Social Media, Recommender System, and Company Policy.
We propose a framework to distinguish the four main actors that users believe influence their video recommendations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User beliefs about algorithmic systems are constantly co-produced through
user interaction and the complex socio-technical systems that generate
recommendations. Identifying these beliefs is crucial because they influence
how users interact with recommendation algorithms. Because there is no prior
work on user beliefs about algorithmic video recommendations, practitioners
lack the knowledge needed to improve the user experience of such systems. To address this
problem, we conducted semi-structured interviews with middle-aged YouTube video
consumers to analyze their user beliefs about the video recommendation system.
Our analysis revealed different factors that users believe influence their
recommendations. Based on these factors, we identified four groups of user
beliefs: Previous Actions, Social Media, Recommender System, and Company
Policy. Additionally, we propose a framework to distinguish the four main
actors that users believe influence their video recommendations: the current
user, other users, the algorithm, and the organization. This framework provides
a new lens to explore design suggestions based on the agency of these four
actors. It also exposes a novel aspect previously unexplored: the effect of
corporate decisions on the interaction with algorithmic recommendations. While
we found that users are aware of the existence of the recommendation system on
YouTube, we show that their understanding of this system is limited.
Related papers
- Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems
This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems.
By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration affect their recommendations.
arXiv Detail & Related papers (2024-09-10T23:58:27Z)
- The MovieLens Beliefs Dataset: Collecting Pre-Choice Data for Online Recommender Systems
This paper introduces a method for collecting user beliefs about unexperienced items - a critical predictor of choice behavior.
We implement this method on the MovieLens platform, resulting in a rich dataset that combines user ratings, beliefs, and observed recommendations.
arXiv Detail & Related papers (2024-05-17T19:06:06Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- PIE: Personalized Interest Exploration for Large-Scale Recommender Systems
We present a framework for exploration in large-scale recommender systems to address these challenges.
Our methodology can be easily integrated into an existing large-scale recommender system with minimal modifications.
Our work has been deployed in production on Facebook Watch, a popular video discovery and sharing platform serving billions of users.
arXiv Detail & Related papers (2023-04-13T22:25:09Z)
- Editable User Profiles for Controllable Text Recommendation
We propose LACE, a novel concept value bottleneck model for controllable text recommendations.
LACE represents each user with a succinct set of human-readable concepts.
It learns personalized representations of the concepts based on user documents.
arXiv Detail & Related papers (2023-04-09T14:52:18Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Causal Disentanglement with Network Information for Debiased Recommendations
Recent research proposes to debias by modeling a recommender system from a causal perspective.
The critical challenge in this setting is accounting for the hidden confounders.
We propose to leverage network information (i.e., user-social and user-item networks) to better approximate hidden confounders.
arXiv Detail & Related papers (2022-04-14T20:55:11Z)
- YouTube, The Great Radicalizer? Auditing and Mitigating Ideological Biases in YouTube Recommendations
We conduct a systematic audit of YouTube's recommendation system using a hundred thousand sock puppets.
We find that YouTube's recommendations do direct users -- especially right-leaning users -- to ideologically biased and increasingly radical content.
Our intervention effectively mitigates the observed bias, leading to more recommendations to ideologically neutral, diverse, and dissimilar content.
arXiv Detail & Related papers (2022-03-20T22:45:56Z)
- FEBR: Expert-Based Recommendation Framework for beneficial and personalized content
We propose FEBR (Expert-Based Recommendation Framework), an apprenticeship learning framework to assess the quality of the recommended content.
The framework exploits the demonstrated trajectories of an expert (assumed to be reliable) in a recommendation evaluation environment, to recover an unknown utility function.
We evaluate the performance of our solution through a user interest simulation environment (using RecSim).
arXiv Detail & Related papers (2021-07-17T18:21:31Z)
- Survey for Trust-aware Recommender Systems: A Deep Learning Perspective
It becomes critical to build trustworthy recommender systems.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.