Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos
- URL: http://arxiv.org/abs/2002.00842v1
- Date: Mon, 27 Jan 2020 20:15:57 GMT
- Title: Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos
- Authors: Songyang Zhang, Tolga Aktas, Jiebo Luo
- Abstract summary: This study explores cultural preferences among countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube.
- Score: 98.87558262467257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: YouTube, a world-famous video sharing website, maintains a list of the top
trending videos on the platform. Because of its huge user base, YouTube enables
researchers to understand people's preferences by analyzing the trending videos.
Trending videos vary from country to country. By analyzing such differences and
changes, we can tell how users' preferences differ across locations. Previous
work focuses on analyzing such cultural preferences from videos' metadata, while
the cultural information hidden within the visual content has remained
unexplored. In this study, we explore cultural preferences among countries using
the thumbnails of YouTube trending videos. We first process the thumbnail
images of the videos with object detectors. The collected object information
is then used for various statistical analyses. In particular, we examine the
data from three perspectives: geographical locations, video genres, and users'
reactions. Experimental results indicate that users from similar cultures
share interests in watching similar videos on YouTube. Our study demonstrates
that discovering cultural preferences through thumbnails can be an
effective mechanism for video social media analysis.
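The pipeline described in the abstract (run an object detector over thumbnails, aggregate the detected objects per country, then compare countries statistically) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the detection labels and country codes are made-up placeholder data, and cosine similarity over object-frequency vectors stands in for the paper's statistical analyses.

```python
from collections import Counter
from math import sqrt

# Hypothetical detector output: one list of object labels per trending-video
# thumbnail, grouped by country. In the paper these labels would come from an
# object detector run on the thumbnail images; here they are placeholder data.
detections = {
    "US": [["person", "car"], ["person", "dog"], ["person"]],
    "CA": [["person", "dog"], ["person", "car"]],
    "JP": [["cat", "food"], ["food"]],
}

def object_profile(thumbnails):
    """Aggregate per-thumbnail detections into a country-level
    object-frequency vector (a Counter keyed by object label)."""
    counts = Counter()
    for labels in thumbnails:
        counts.update(labels)
    return counts

def cosine_similarity(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

profiles = {country: object_profile(t) for country, t in detections.items()}
print(cosine_similarity(profiles["US"], profiles["CA"]))  # high: shared objects
print(cosine_similarity(profiles["US"], profiles["JP"]))  # no shared objects
```

Countries whose trending thumbnails contain similar object mixes end up with similar profiles, which is the kind of cross-country comparison the abstract's "geographical locations" perspective performs.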
Related papers
- HOTVCOM: Generating Buzzworthy Comments for Videos [49.39846630199698]
This study introduces HotVCom, the largest Chinese video hot-comment dataset, comprising 94k diverse videos and 137 million comments.
We also present the ComHeat framework, which synergistically integrates visual, auditory, and textual data to generate influential hot-comments on the Chinese video dataset.
arXiv Detail & Related papers (2024-09-23T16:45:13Z) - Analyzing Political Figures in Real-Time: Leveraging YouTube Metadata for Sentiment Analysis [0.0]
Sentiment analysis using big data from YouTube videos metadata can be conducted to analyze public opinions on various political figures.
This study aimed to build a sentiment analysis system leveraging YouTube videos metadata.
The sentiment analysis model was built using the LSTM algorithm and classifies sentiment into two types: positive and negative.
arXiv Detail & Related papers (2023-09-28T08:15:55Z) - Tube2Vec: Social and Semantic Embeddings of YouTube Channels [11.321096553990824]
We create embeddings that capture social sharing behavior, video metadata, and YouTube's video recommendations.
We evaluate these embeddings using crowdsourcing and existing datasets.
We share embeddings capturing the social and semantic dimensions of 44,000 YouTube channels for the benefit of future research.
arXiv Detail & Related papers (2023-06-29T20:43:57Z) - How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z) - YouTube and Science: Models for Research Impact [1.237556184089774]
We created new datasets using YouTube videos and mentions of research articles on various online platforms.
We analyzed these datasets through statistical techniques and visualization, and built machine learning models to predict whether a research article is cited in videos.
According to our results, research articles mentioned in more tweets and news coverage have a higher chance of receiving video citations.
arXiv Detail & Related papers (2022-09-01T19:25:38Z) - 3MASSIV: Multilingual, Multimodal and Multi-Aspect dataset of Social Media Short Videos [72.69052180249598]
We present 3MASSIV, a multilingual, multimodal and multi-aspect, expertly-annotated dataset of diverse short videos extracted from the short-video social media platform Moj.
3MASSIV comprises 50k short videos (20 seconds average duration) and 100K unlabeled videos in 11 different languages.
We show how the social media content in 3MASSIV is dynamic and temporal in nature, which can be used for semantic understanding tasks and cross-lingual analysis.
arXiv Detail & Related papers (2022-03-28T02:47:01Z) - Characterizing Abhorrent, Misinformative, and Mistargeted Content on YouTube [1.9138099871648453]
We study the degree of problematic content on YouTube and the role of the recommendation algorithm in the dissemination of such content.
Our analysis reveals that young children are likely to encounter disturbing content when they randomly browse the platform.
We find that Incel activity is increasing over time and that platforms may play an active role in steering users towards extreme content.
arXiv Detail & Related papers (2021-05-20T15:10:48Z) - Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling [98.41300980759577]
A canonical approach to video-and-language learning requires a neural model to learn from offline-extracted dense video features.
We propose a generic framework ClipBERT that enables affordable end-to-end learning for video-and-language tasks.
Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that ClipBERT outperforms existing methods.
arXiv Detail & Related papers (2021-02-11T18:50:16Z) - Content-based Analysis of the Cultural Differences between TikTok and Douyin [95.32409577885645]
Short-form video social media shifts away from the traditional media paradigm by telling the audience a dynamic story to attract their attention.
In particular, different combinations of everyday objects can be employed to represent a unique scene that is both interesting and understandable.
Offered by the same company, TikTok and Douyin are popular examples of such new media that have become popular in recent years.
Our research primarily targets the hypothesis that they express cultural differences, together with media fashion and social idiosyncrasy.
arXiv Detail & Related papers (2020-11-03T01:47:49Z) - Understanding YouTube Communities via Subscription-based Channel Embeddings [0.0]
This paper presents new methods to discover and classify YouTube channels.
The methods use a self-supervised learning approach that leverages the public subscription pages of commenters.
We create a new dataset to analyze the amount of traffic going to different political content.
arXiv Detail & Related papers (2020-10-19T22:00:04Z)
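The self-supervised signal behind subscription-based channel embeddings, namely channels being co-subscribed by the same commenters, can be illustrated with a minimal sketch. The channel names and subscription lists below are invented for illustration and are not data from the paper; the paper learns embeddings from this kind of signal, whereas here we only count raw co-subscriptions.

```python
from collections import Counter
from itertools import combinations

# Hypothetical public subscription lists, one set of channels per commenter
# (placeholder data; the paper scrapes commenters' public subscription pages).
subscriptions = [
    {"news_a", "news_b", "gaming_x"},
    {"news_a", "news_b"},
    {"gaming_x", "gaming_y"},
    {"gaming_x", "gaming_y", "news_a"},
]

def cosub_counts(sub_lists):
    """Count how often each pair of channels is co-subscribed.
    Channels frequently co-subscribed by the same users are the ones a
    subscription-based embedding would place close together."""
    pairs = Counter()
    for subs in sub_lists:
        for a, b in combinations(sorted(subs), 2):
            pairs[(a, b)] += 1
    return pairs

pairs = cosub_counts(subscriptions)
print(pairs[("news_a", "news_b")])      # the two news channels co-occur often
print(pairs[("gaming_x", "gaming_y")])  # as do the two gaming channels
```

Clustering or embedding channels by such co-subscription counts is what lets the method group channels into communities without any labeled data.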
This list is automatically generated from the titles and abstracts of the papers in this site.