Multilingual, Temporal and Sentimental Distant-Reading of City Events
- URL: http://arxiv.org/abs/2102.09350v1
- Date: Mon, 4 Jan 2021 10:57:11 GMT
- Title: Multilingual, Temporal and Sentimental Distant-Reading of City Events
- Authors: Mehmet Can Yavuz
- Abstract summary: This analysis aims to apply distant reading to Berlinale tweets collected during the festival.
We trained a deep sentiment network with multilingual embeddings.
The trained algorithm achieves a 0.78 test score and was applied to tweets with the Berlinale hashtag during the festival.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leibniz's Monadology mentions perceptional and sentimental variations of the
individual in the city: the interaction of people with people and events.
Film festivals are highly sentimental events in multicultural cities. Each
movie has a different sentimental effect, and interactions with the movies
leave traces that can be observed on social media. This analysis aims to
apply distant reading to Berlinale tweets collected during the festival. In
contrast to close reading, distant reading lets authors observe patterns in
large collections of data. The analysis is temporal and sentimental in a
multilingual domain, and strongly positive and negative time intervals are
analysed. For this purpose, we trained a deep sentiment network with
multilingual embeddings. These multilingual embeddings are aligned in a shared latent
space. We trained the network on a multilingual dataset in three languages:
English, German and Spanish. The trained algorithm achieves a 0.78 test score and
was applied to tweets with the Berlinale hashtag during the festival. Although the
sentiment analysis does not reflect the award-winning films, we observe a
weekly routine in sentimentality, which could mislead a
close reading analysis. We also make remarks on the popularity of directors and
actors.
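The cross-lingual transfer described in the abstract can be illustrated with a minimal sketch. The toy word vectors, the logistic-regression classifier, and the example tweets below are all illustrative stand-ins (the paper uses a deep network over real aligned embeddings such as fastText/MUSE vectors); the idea shown is only that, because translations lie close together in one aligned latent space, a classifier trained on English and German labels can score Spanish text.

```python
import numpy as np

# Toy "aligned" multilingual embeddings: English, German, and Spanish words
# share one latent space, so translations get similar vectors. These 2-d
# vectors are illustrative stand-ins, not real pretrained embeddings.
EMB = {
    "good":  np.array([0.90, 0.10]), "gut":      np.array([0.88, 0.12]),
    "bueno": np.array([0.91, 0.09]),
    "bad":   np.array([0.10, 0.90]), "schlecht": np.array([0.12, 0.88]),
    "malo":  np.array([0.09, 0.91]),
    "film":  np.array([0.50, 0.50]), "pelicula": np.array([0.50, 0.50]),
}

def embed(text):
    """Mean-pool the aligned embeddings of the known tokens in a tweet."""
    vecs = [EMB[t] for t in text.lower().split() if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain-numpy logistic regression, standing in for the deep network."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad = p - y                            # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# Train on English and German examples only; no Spanish labels are used.
train = [("good film", 1), ("gut film", 1), ("bad film", 0), ("schlecht film", 0)]
X = np.array([embed(t) for t, _ in train])
y = np.array([label for _, label in train], dtype=float)
w, b = train_logreg(X, y)

def predict(text):
    p = 1.0 / (1.0 + np.exp(-(embed(text) @ w + b)))
    return "positive" if p > 0.5 else "negative"

# The classifier transfers to Spanish via the shared embedding space.
print(predict("bueno pelicula"))  # positive
print(predict("malo pelicula"))   # negative
```

The same mechanism scales to real aligned embeddings: as long as the three languages live in one latent space, a single classifier covers the multilingual tweet stream.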
Related papers
- Large Language Models Meet Text-Centric Multimodal Sentiment Analysis: A Survey [66.166184609616]
ChatGPT has opened up immense potential for applying large language models (LLMs) to text-centric multimodal tasks.
It is still unclear how existing LLMs can adapt better to text-centric multimodal sentiment analysis tasks.
arXiv Detail & Related papers (2024-06-12T10:36:27Z) - Comparing Biases and the Impact of Multilingual Training across Multiple Languages [70.84047257764405]
We present a bias analysis across Italian, Chinese, English, Hebrew, and Spanish on the downstream sentiment analysis task.
We adapt existing sentiment bias templates in English to Italian, Chinese, Hebrew, and Spanish for four attributes: race, religion, nationality, and gender.
Our results reveal similarities in bias expression such as favoritism of groups that are dominant in each language's culture.
arXiv Detail & Related papers (2023-05-18T18:15:07Z) - How you feelin'? Learning Emotions and Mental States in Movie Scenes [9.368590075496149]
We formulate emotion understanding as predicting a diverse and multi-label set of emotions at the level of a movie scene.
EmoTx is a multimodal Transformer-based architecture that ingests videos, multiple characters, and dialog utterances to make joint predictions.
arXiv Detail & Related papers (2023-04-12T06:31:14Z) - yosm: A new yoruba sentiment corpus for movie reviews [2.3513645401551337]
We explore sentiment analysis on reviews of Nigerian movies.
The data comprised 1500 movie reviews that were sourced from IMDB, Rotten Tomatoes, Letterboxd, Cinemapointer and Nollyrated.
We develop sentiment classification models using the state-of-the-art pre-trained language models like mBERT and AfriBERTa.
arXiv Detail & Related papers (2022-04-20T18:00:37Z) - 3MASSIV: Multilingual, Multimodal and Multi-Aspect dataset of Social Media Short Videos [72.69052180249598]
We present 3MASSIV, a multilingual, multimodal and multi-aspect, expertly-annotated dataset of diverse short videos extracted from short-video social media platform - Moj.
3MASSIV comprises 50k short videos (20 seconds average duration) and 100k unlabeled videos in 11 different languages.
We show how the social media content in 3MASSIV is dynamic and temporal in nature, which can be used for semantic understanding tasks and cross-lingual analysis.
arXiv Detail & Related papers (2022-03-28T02:47:01Z) - Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality [84.69595956853908]
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content.
Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors.
arXiv Detail & Related papers (2021-03-11T09:07:25Z) - Content-based Analysis of the Cultural Differences between TikTok and Douyin [95.32409577885645]
Short-form video social media shifts away from the traditional media paradigm by telling the audience a dynamic story to attract their attention.
In particular, different combinations of everyday objects can be employed to represent a unique scene that is both interesting and understandable.
Offered by the same company, TikTok and Douyin are popular examples of such new media that have emerged in recent years.
Our research primarily targets the hypothesis that they express cultural differences, together with media fashion and social idiosyncrasies.
arXiv Detail & Related papers (2020-11-03T01:47:49Z) - Multilingual Contextual Affective Analysis of LGBT People Portrayals in Wikipedia [34.183132688084534]
Specific lexical choices in narrative text reflect both the writer's attitudes towards people in the narrative and influence the audience's reactions.
We show how word connotations differ across languages and cultures, highlighting the difficulty of generalizing existing English datasets.
We then demonstrate the usefulness of our method by analyzing Wikipedia biography pages of members of the LGBT community across three languages.
arXiv Detail & Related papers (2020-10-21T08:27:36Z) - Visual Sentiment Analysis from Disaster Images in Social Media [11.075683976162766]
This article focuses on visual sentiment analysis in a societally important domain, namely disaster analysis in social media.
We propose a deep visual sentiment analyzer for disaster related images, covering different aspects of visual sentiment analysis.
We believe the proposed system can contribute toward more livable communities by helping different stakeholders.
arXiv Detail & Related papers (2020-09-04T11:29:52Z) - Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset for Personality Assessment [50.15466026089435]
We present a novel peer-to-peer Hindi conversation dataset- Vyaktitv.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features for all participants, such as income and cultural orientation, among several others.
arXiv Detail & Related papers (2020-08-31T17:44:28Z) - Characterising User Content on a Multi-lingual Social Network [9.13241181020543]
We present our characterisation of a multilingual social network in India called ShareChat.
We collect an exhaustive dataset across 72 weeks before and during the Indian general elections of 2019 across 14 languages.
We find that Telugu, Malayalam, Tamil and Kannada languages tend to be dominant in soliciting political images.
arXiv Detail & Related papers (2020-04-23T22:25:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.