Uncovering the Deep Filter Bubble: Narrow Exposure in Short-Video Recommendation
- URL: http://arxiv.org/abs/2403.04511v1
- Date: Thu, 7 Mar 2024 14:14:40 GMT
- Title: Uncovering the Deep Filter Bubble: Narrow Exposure in Short-Video Recommendation
- Authors: Nicholas Sukiennik, Chen Gao, Nian Li
- Abstract summary: Filter bubbles have been studied extensively within the context of online content platforms.
With the rise of short-video platforms, the filter bubble has been given extra attention.
- Score: 30.395376392259497
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Filter bubbles have been studied extensively within the context of online
content platforms due to their potential to cause undesirable outcomes such as
user dissatisfaction or polarization. With the rise of short-video platforms,
the filter bubble has been given extra attention because these platforms rely
on an unprecedented use of the recommender system to provide relevant content.
In our work, we investigate the deep filter bubble, which refers to the user
being exposed to narrow content within their broad interests. We accomplish
this using one-year interaction data from a top short-video platform in China,
which includes hierarchical data with three levels of categories for each
video. We formalize our definition of a "deep" filter bubble within this
context, and then explore various correlations within the data: first
understanding the evolution of the deep filter bubble over time, and later
revealing some of the factors that give rise to this phenomenon, such as
specific categories, user demographics, and feedback type. We observe that
while the overall proportion of users in a filter bubble remains largely
constant over time, the depth composition of their filter bubble changes. In
addition, we find that some demographic groups have a higher likelihood of seeing
narrower content, and that implicit feedback signals can lead to less bubble
formation. Finally, we propose some ways in which recommender systems can be
designed to reduce the risk of a user getting caught in a bubble.
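As a rough illustration of what a depth-aware measure over the three-level category hierarchy could look like (the paper's formal definition is not given in this abstract, so the data layout, the entropy-based score, and all names below are assumptions made purely for illustration), consider scoring how concentrated a user's exposure is at the leaf level within each broad interest:

```python
from collections import Counter
from math import log2

# Hypothetical exposure log: one (level-1, level-2, level-3) category path per watched video.
# This is NOT the paper's formal definition, only a sketch of "narrow exposure within a
# broad interest" using a three-level hierarchy like the one the abstract describes.
watch_history = [
    ("Sports", "Ball games", "Basketball"),
    ("Sports", "Ball games", "Basketball"),
    ("Sports", "Ball games", "Basketball"),
    ("Sports", "Ball games", "Soccer"),
    ("Music",  "Pop",        "K-pop"),
    ("Music",  "Pop",        "K-pop"),
]

def depth_score(history, top_level):
    """Normalized entropy of the level-3 categories consumed under one level-1 interest:
    0.0 = all exposure in a single leaf category (deepest bubble),
    1.0 = exposure spread evenly over the leaves the user has seen."""
    leaves = [l3 for l1, _, l3 in history if l1 == top_level]
    counts = Counter(leaves)
    if len(counts) <= 1:
        return 0.0
    n = sum(counts.values())
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return entropy / log2(len(counts))

for interest in sorted({l1 for l1, _, _ in watch_history}):
    print(interest, round(depth_score(watch_history, interest), 3))
```

Thresholding such a per-interest score within successive time windows would be one way to track whether a user's exposure inside a broad interest stays narrow over time, which is the kind of longitudinal pattern the abstract studies.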
Related papers
- Filter Bubble or Homogenization? Disentangling the Long-Term Effects of Recommendations on User Consumption Patterns [4.197682068104959]
We develop a more refined definition of homogenization and the filter bubble effect by decomposing them into two key metrics.
We then use a novel agent-based simulation framework that enables a holistic view of the impact of recommendation systems on homogenization and filter bubble effects.
We introduce two new recommendation algorithms that take a more nuanced approach by accounting for both types of diversity.
arXiv Detail & Related papers (2024-02-22T23:12:20Z)
- Filter Bubbles in Recommender Systems: Fact or Fallacy -- A Systematic Review [7.121051191777698]
A filter bubble refers to the phenomenon where Internet customization effectively isolates individuals from diverse opinions or materials.
We conduct a systematic literature review on the topic of filter bubbles in recommender systems.
We propose mechanisms to mitigate the impact of filter bubbles and demonstrate that incorporating diversity into recommendations can potentially help alleviate this issue.
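One generic way to "incorporate diversity into recommendations", in the spirit of the mitigation the review points to, is greedy re-ranking that trades relevance off against similarity to items already selected. The sketch below is only an illustration of that idea and is not taken from the reviewed papers; the item ids, scores, and category-overlap similarity are made up.

```python
# Illustrative maximal-marginal-relevance style re-ranking: one generic way to add
# diversity to a recommendation slate, not a method from the review itself.
def diversify(candidates, relevance, similarity, k, lam=0.7):
    """Greedily pick k items, trading relevance against similarity to already-picked items.
    candidates: list of item ids; relevance: dict id -> score;
    similarity: f(id_a, id_b) -> [0, 1]; lam: weight on relevance (1.0 = no diversification)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr(item):
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy usage with a category-overlap similarity (all data hypothetical).
categories = {"v1": "basketball", "v2": "basketball", "v3": "cooking", "v4": "travel"}
scores = {"v1": 0.9, "v2": 0.85, "v3": 0.6, "v4": 0.5}
sim = lambda a, b: 1.0 if categories[a] == categories[b] else 0.0
print(diversify(["v1", "v2", "v3", "v4"], scores, sim, k=3))
```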
arXiv Detail & Related papers (2023-07-02T13:41:42Z)
- Mitigating Filter Bubbles within Deep Recommender Systems [2.3590112541068575]
Recommender systems have been known to intellectually isolate users from a variety of perspectives, i.e., to cause filter bubbles.
We characterize and mitigate this filter bubble effect by classifying various datapoints based on their user-item interaction history.
We mitigate this filter bubble effect without compromising accuracy by carefully retraining our recommender system.
arXiv Detail & Related papers (2022-09-16T22:00:10Z)
- Modeling Content Creator Incentives on Algorithm-Curated Platforms [76.53541575455978]
We study how algorithmic choices affect the existence and character of (Nash) equilibria in exposure games.
We propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.
arXiv Detail & Related papers (2022-06-27T08:16:59Z)
- An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes [0.6094711396431726]
We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles.
Our key finding is that bursting of a filter bubble is possible, although it manifests differently from topic to topic.
Sadly, we did not find much improvement in misinformation occurrence, despite recent pledges by YouTube.
arXiv Detail & Related papers (2022-03-25T16:49:57Z)
- Echo Chambers in Collaborative Filtering Based Recommendation Systems [1.5140493624413542]
We simulate the recommendations given by collaborative filtering algorithms on users in the MovieLens data set.
We find that prolonged exposure to system-generated recommendations substantially decreases content diversity.
Our work suggests that once these echo-chambers have been established, it is difficult for an individual user to break out by manipulating solely their own rating vector.
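A minimal skeleton of this kind of feedback-loop simulation is sketched below; everything is synthetic, and a crude co-consumption scorer stands in for the collaborative filtering algorithms the paper actually studies, so it illustrates the loop and the diversity measurement rather than the paper's setup or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic world: 50 users, 200 items, 10 genres; each user starts with a skewed genre taste.
n_users, n_items, n_genres = 50, 200, 10
item_genre = rng.integers(0, n_genres, n_items)
taste = rng.dirichlet(np.ones(n_genres) * 0.5, n_users)      # per-user genre preferences
consumed = [set(rng.choice(n_items, 10, replace=False)) for _ in range(n_users)]

def recommend(u, k=5):
    """Crude collaborative-filtering stand-in: score unseen items by how often they
    co-occur with u's items in other users' histories."""
    scores = np.zeros(n_items)
    for v in range(n_users):
        overlap = len(consumed[u] & consumed[v]) if v != u else 0
        if overlap:
            for i in consumed[v]:
                scores[i] += overlap
    scores[list(consumed[u])] = -np.inf                       # never re-recommend seen items
    return np.argsort(scores)[-k:]

def slate_diversity(slate):
    """Distinct genres within one recommendation slate."""
    return len({item_genre[i] for i in slate})

# Feedback loop: users accept recommended items with probability tied to their genre taste,
# and the accepted items feed back into the next round's recommendations.
for step in range(10):
    slates = [recommend(u) for u in range(n_users)]
    for u, slate in enumerate(slates):
        for i in slate:
            if rng.random() < min(1.0, taste[u, item_genre[i]] * n_genres):
                consumed[u].add(i)
    print(step, round(float(np.mean([slate_diversity(s) for s in slates])), 2))
```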
arXiv Detail & Related papers (2020-11-08T02:35:47Z)
- Tied Block Convolution: Leaner and Better CNNs with Shared Thinner Filters [50.10906063068743]
Convolution is the main building block of convolutional neural networks (CNNs).
We propose Tied Block Convolution (TBC) that shares the same thinner filters over equal blocks of channels and produces multiple responses with a single filter.
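Based only on the one-sentence summary above, a common way to realize "the same thinner filters over equal blocks of channels" in PyTorch is to fold the channel blocks into the batch dimension and run a single thin convolution; the specific shape choices below (e.g. out_channels // blocks per block) are assumptions, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class TiedBlockConv2d(nn.Module):
    """Sketch of a tied block convolution: the channels are split into `blocks` equal
    groups and the *same* thin filter bank is applied to every group, so one set of
    filters produces multiple responses. Shapes follow my reading of the one-line
    summary above, not necessarily the paper's exact design."""
    def __init__(self, in_channels, out_channels, kernel_size, blocks=2, **kwargs):
        super().__init__()
        assert in_channels % blocks == 0 and out_channels % blocks == 0
        self.blocks = blocks
        self.out_channels = out_channels
        # One shared "thinner" convolution reused across all channel blocks.
        self.conv = nn.Conv2d(in_channels // blocks, out_channels // blocks,
                              kernel_size, **kwargs)

    def forward(self, x):
        n, c, h, w = x.shape
        # Fold the channel blocks into the batch dimension so the shared conv sees each block.
        x = x.reshape(n * self.blocks, c // self.blocks, h, w)
        x = self.conv(x)
        # The per-block responses are stacked back along the channel axis.
        return x.reshape(n, self.out_channels, x.shape[-2], x.shape[-1])

# Same output shape as a standard 3x3 conv, with roughly 1/blocks**2 of its weights.
layer = TiedBlockConv2d(64, 128, kernel_size=3, blocks=4, padding=1, bias=False)
print(layer(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 128, 32, 32])
```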
arXiv Detail & Related papers (2020-09-25T03:58:40Z)
- Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters [64.46270549587004]
Convolutional neural networks (CNNs) have been successfully used in a range of tasks.
CNNs are often viewed as "black boxes" and lack interpretability.
We propose a novel strategy to train interpretable CNNs by encouraging class-specific filters.
arXiv Detail & Related papers (2020-07-16T09:12:26Z)
- Deep Learning feature selection to unhide demographic recommender systems factors [63.732639864601914]
The matrix factorization model generates factors which do not incorporate semantic knowledge.
DeepUnHide is able to extract demographic information from the users and items factors in collaborative filtering recommender systems.
arXiv Detail & Related papers (2020-06-17T17:36:48Z)
- Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation [86.91324735966766]
Filters are the key components of modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting to achieve this goal.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks.
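The summary above does not spell out the information criterion or the weighting strategy; purely as an illustration of the idea, the sketch below assumes an entropy-style score over each filter's weights and a sigmoid blend between the corresponding layers of two networks.

```python
import torch

def filter_information(weight, bins=10):
    """Entropy of each filter's weight-value histogram, used here as a stand-in
    "information" score (an assumption, not necessarily the paper's criterion).
    weight: conv weight tensor of shape (out_channels, in_channels, k, k)."""
    scores = []
    for f in weight:                                  # one filter at a time
        hist = torch.histc(f.flatten(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        scores.append(-(p * p.log()).sum())
    return torch.stack(scores)

def graft(weight_a, weight_b):
    """Blend the same layer's weights from two networks, leaning toward whichever
    network's layer scores higher; a hedged sketch of the "adaptive weighting" idea."""
    info_a = filter_information(weight_a).mean()
    info_b = filter_information(weight_b).mean()
    alpha = torch.sigmoid(info_a - info_b)            # more information in A -> keep more of A
    return alpha * weight_a + (1 - alpha) * weight_b

# Toy usage on two randomly initialized layers of the same shape.
wa, wb = torch.randn(16, 8, 3, 3), torch.randn(16, 8, 3, 3) * 0.1
print(graft(wa, wb).shape)   # torch.Size([16, 8, 3, 3])
```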
arXiv Detail & Related papers (2020-04-26T08:36:26Z)