How to Train Your YouTube Recommender to Avoid Unwanted Videos
- URL: http://arxiv.org/abs/2307.14551v3
- Date: Tue, 2 Apr 2024 06:18:23 GMT
- Title: How to Train Your YouTube Recommender to Avoid Unwanted Videos
- Authors: Alexander Liu, Siqi Wu, Paul Resnick
- Abstract summary: "Not interested" and "Don't recommend channel" buttons allow users to indicate disinterest when presented with unwanted recommendations.
We simulated YouTube users with sock puppet agents.
We found that the "Not interested" button worked best, significantly reducing such recommendations in all topics tested.
- Score: 51.6864681332515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: YouTube provides features for users to indicate disinterest when presented with unwanted recommendations, such as the "Not interested" and "Don't recommend channel" buttons. These buttons purportedly allow the user to correct "mistakes" made by the recommendation system. Yet, relatively little is known about the empirical efficacy of these buttons. Neither is much known about users' awareness of and confidence in them. To address these gaps, we simulated YouTube users with sock puppet agents. Each agent first executed a "stain phase", where it watched many videos of an assigned topic; it then executed a "scrub phase", where it tried to remove recommendations from the assigned topic. Each agent repeatedly applied a single scrubbing strategy, either indicating disinterest in one of the videos visited in the stain phase (disliking it or deleting it from the watch history), or indicating disinterest in a video recommended on the homepage (clicking the "not interested" or "don't recommend channel" button or opening the video and clicking the dislike button). We found that the stain phase significantly increased the fraction of the recommended videos dedicated to the assigned topic on the user's homepage. For the scrub phase, using the "Not interested" button worked best, significantly reducing such recommendations in all topics tested, on average removing 88% of them. Neither the stain phase nor the scrub phase, however, had much effect on videopage recommendations. We also ran a survey (N = 300) asking adult YouTube users in the US whether they were aware of and used these buttons before, as well as how effective they found these buttons to be. We found that 44% of participants were not aware that the "Not interested" button existed. Those who were aware of it often used it to remove unwanted recommendations (82.8%) and found it to be modestly effective (3.42 out of 5).
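The abstract describes the audit protocol (stain phase, then repeated application of one scrubbing strategy) in enough procedural detail to sketch as code. Below is a minimal, hypothetical sketch of one such sock-puppet agent; the helpers `watch_video`, `homepage_recommendations`, and `click_not_interested` are stand-ins for real browser automation and are not the authors' implementation, and the topic labels are fabricated so the example runs end to end.

```python
"""Minimal sketch of a stain/scrub sock-puppet agent, following the protocol
described in the abstract. All helpers below are hypothetical placeholders,
NOT the authors' released code."""

import random
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    topic: str
    channel: str


def watch_video(video: Video) -> None:
    """Placeholder: open the video in a controlled browser session and play it."""
    print(f"watching {video.video_id} ({video.topic})")


def homepage_recommendations() -> list[Video]:
    """Placeholder: scrape the puppet's homepage recommendation slate.

    Here we fabricate a random slate so the sketch runs without a browser."""
    topics = ["assigned", "other"]
    return [Video(f"v{i}", random.choice(topics), f"c{i % 5}") for i in range(30)]


def click_not_interested(video: Video) -> None:
    """Placeholder: click 'Not interested' on a homepage recommendation tile."""
    print(f"not interested: {video.video_id}")


def topic_fraction(slate: list[Video], topic: str) -> float:
    """Fraction of the recommendation slate dedicated to the assigned topic."""
    return sum(v.topic == topic for v in slate) / len(slate) if slate else 0.0


def run_agent(stain_videos: list[Video], topic: str, scrub_rounds: int = 40) -> None:
    # Stain phase: watch many videos of the assigned topic.
    for video in stain_videos:
        watch_video(video)
    print("post-stain topic fraction:",
          topic_fraction(homepage_recommendations(), topic))

    # Scrub phase: repeatedly apply a single scrubbing strategy
    # (here, the 'Not interested' button on homepage recommendations).
    for _ in range(scrub_rounds):
        slate = homepage_recommendations()
        unwanted = [v for v in slate if v.topic == topic]
        if not unwanted:
            break
        click_not_interested(unwanted[0])
    print("post-scrub topic fraction:",
          topic_fraction(homepage_recommendations(), topic))


if __name__ == "__main__":
    seed = [Video(f"stain{i}", "assigned", "seed-channel") for i in range(20)]
    run_agent(seed, topic="assigned")
```

In a real audit, these helpers would drive a logged-in browser session, and a video's topic would come from a classifier or curated seed list rather than a field on the video object; the other scrubbing strategies (dislike, watch-history deletion, "Don't recommend channel") would slot in where `click_not_interested` is called.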
Related papers
- Stereotype or Personalization? User Identity Biases Chatbot Recommendations [54.38329151781466]
We show that large language models (LLMs) produce recommendations that reflect both what the user wants and who the user is.
We find that models generate racially stereotypical recommendations regardless of whether the user revealed their identity intentionally.
Our experiments show that even though a user's revealed identity significantly influences model recommendations, model responses obfuscate this fact in response to user queries.
arXiv Detail & Related papers (2024-10-08T01:51:55Z) - Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z) - Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
arXiv Detail & Related papers (2023-08-28T14:12:25Z) - Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles [0.5898451150401338]
We present results of an auditing study performed over YouTube aimed at investigating how fast a user can get into a misinformation filter bubble.
We employ a sock puppet audit methodology, in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation promoting content.
Then they try to burst the bubbles and reach more balanced recommendations by watching misinformation debunking content.
arXiv Detail & Related papers (2022-10-18T18:27:47Z) - YouTubers Not madeForKids: Detecting Channels Sharing Inappropriate Videos Targeting Children [3.936965297430477]
We study YouTube channels previously found to post suitable or disturbing videos targeting kids.
We identify a clear discrepancy between what YouTube assumes and flags as inappropriate content and channels, versus what is found to be disturbing content that is still available on the platform.
arXiv Detail & Related papers (2022-05-27T10:34:15Z) - Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos [7.945705756085774]
We show that exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment.
Our findings suggest YouTube's algorithms were not sending people down "rabbit holes" during our observation window in 2020.
However, the platform continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.
arXiv Detail & Related papers (2022-04-22T20:22:06Z) - Characterizing Abhorrent, Misinformative, and Mistargeted Content on YouTube [1.9138099871648453]
We study the degree of problematic content on YouTube and the role of the recommendation algorithm in the dissemination of such content.
Our analysis reveals that young children are likely to encounter disturbing content when they randomly browse the platform.
We find that Incel activity is increasing over time and that platforms may play an active role in steering users towards extreme content.
arXiv Detail & Related papers (2021-05-20T15:10:48Z) - "It is just a flu": Assessing the Effect of Watch History on YouTube's Pseudoscientific Video Recommendations [13.936247103754905]
We collect 6.6K videos related to COVID-19, the Flat Earth theory, as well as the anti-vaccination and anti-mask movements.
Using crowdsourcing, we annotate them as pseudoscience, legitimate science, or irrelevant.
We quantify user exposure to this content on various parts of the platform and how this exposure changes based on the user's watch history.
arXiv Detail & Related papers (2020-10-22T12:20:01Z) - Learning Person Re-identification Models from Videos with Weak Supervision [53.53606308822736]
We introduce the problem of learning person re-identification models from videos with weak supervision.
We propose a multiple instance attention learning framework for person re-identification using such video-level labels.
The attention weights are obtained based on all person images instead of person tracklets in a video, making our learned model less affected by noisy annotations.
arXiv Detail & Related papers (2020-07-21T07:23:32Z) - Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos [98.87558262467257]
This study explores cultural preferences among countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube.
arXiv Detail & Related papers (2020-01-27T20:15:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.