Detecting Personality and Emotion Traits in Crowds from Video Sequences
- URL: http://arxiv.org/abs/2104.12927v1
- Date: Tue, 27 Apr 2021 01:00:16 GMT
- Title: Detecting Personality and Emotion Traits in Crowds from Video Sequences
- Authors: Rodolfo Migon Favaretto, Paulo Knob, Soraia Raupp Musse, Felipe
Vilanova, Ângelo Brandelli Costa
- Abstract summary: This paper presents a methodology to detect personality and basic emotion characteristics of crowds in video sequences.
First, individuals are detected and tracked, then groups are recognized and characterized.
This information is then mapped to OCEAN dimensions, which are used to infer personality and emotion in the videos.
- Score: 0.7829352305480283
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a methodology to detect personality and basic emotion
characteristics of crowds in video sequences. First, individuals are detected
and tracked, then groups are recognized and characterized. This information is
mapped to OCEAN dimensions, which are used together with the OCC emotion model
to infer personality and emotion in the videos. Although validating our results
with real-life experiments is a clear challenge, we evaluate our method against
values reported in the literature for the OCEAN dimensions of different
countries and for emergent personal distance among people; the analysis
therefore also reflects cultural differences between countries. Our results
indicate that the model generates coherent information when compared with data
available in the literature, as shown in the qualitative and quantitative
results.
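A minimal sketch of the pipeline described in the abstract (track individuals, extract crowd-level features, map them to OCEAN dimensions, then read OCC-style emotions off the OCEAN scores) is given below. It is an illustration only: the feature set, the weight matrix, and the OCEAN-to-emotion rules are placeholder assumptions, not the mapping calibrated in the paper.

```python
# Hedged sketch of the crowd -> OCEAN -> emotion pipeline; all weights and
# rules below are illustrative placeholders, not the authors' calibrated model.
import numpy as np

# Per-individual features assumed to come from an upstream detector/tracker.
FEATURES = ["speed", "angular_variation", "distance", "collectivity", "socialization"]
OCEAN_DIMS = ["O", "C", "E", "A", "N"]

# Placeholder linear weights from crowd features to the five OCEAN dimensions.
OCEAN_WEIGHTS = np.array([
    #  speed  ang_var  distance  collect.  social.
    [  0.2,   0.3,     0.1,      0.2,      0.2],   # Openness (placeholder)
    [  0.3,  -0.2,     0.2,      0.3,      0.0],   # Conscientiousness (placeholder)
    [  0.4,   0.1,    -0.3,      0.1,      0.5],   # Extraversion (placeholder)
    [ -0.1,  -0.2,     0.2,      0.4,      0.3],   # Agreeableness (placeholder)
    [  0.1,   0.4,    -0.2,     -0.3,     -0.1],   # Neuroticism (placeholder)
])

def ocean_from_features(feature_matrix: np.ndarray) -> dict:
    """Map normalized per-individual features (n_people x n_features) to
    crowd-level OCEAN scores by averaging a linear projection."""
    scores = feature_matrix @ OCEAN_WEIGHTS.T      # shape (n_people, 5)
    return dict(zip(OCEAN_DIMS, scores.mean(axis=0)))

def emotions_from_ocean(ocean: dict) -> dict:
    """Very rough OCC-style reading of OCEAN scores (assumption): neuroticism
    drives fear/anger, extraversion and agreeableness drive happiness."""
    return {
        "fear": max(ocean["N"], 0.0),
        "anger": max(ocean["N"] - ocean["A"], 0.0),
        "happiness": max((ocean["E"] + ocean["A"]) / 2.0, 0.0),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tracked = rng.random((30, len(FEATURES)))      # 30 tracked people, synthetic features
    ocean = ocean_from_features(tracked)
    print(ocean)
    print(emotions_from_ocean(ocean))
```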
Related papers
- Personality Analysis for Social Media Users using Arabic language and its Effect on Sentiment Analysis [1.2903829793534267]
This study explores the correlation between the use of the Arabic language on Twitter, personality traits, and their impact on sentiment analysis.
We inferred users' personality traits from information extracted from their profile activity and the content of their tweets.
Our findings demonstrate that personality affects sentiment in social media.
arXiv Detail & Related papers (2024-07-08T18:27:54Z) - Vision-Language Models under Cultural and Inclusive Considerations [53.614528867159706]
Large vision-language models (VLMs) can assist visually impaired people by describing images from their daily lives.
Current evaluation datasets may not reflect diverse cultural user backgrounds or the situational context of this use case.
We create a survey to determine caption preferences and propose a culture-centric evaluation benchmark by filtering VizWiz, an existing dataset with images taken by people who are blind.
We then evaluate several VLMs, investigating their reliability as visual assistants in a culturally diverse setting.
arXiv Detail & Related papers (2024-07-08T17:50:00Z) - Towards Geographic Inclusion in the Evaluation of Text-to-Image Models [25.780536950323683]
We study how much annotators in Africa, Europe, and Southeast Asia vary in their perception of geographic representation, visual appeal, and consistency in real and generated images.
For example, annotators in different locations often disagree on whether exaggerated, stereotypical depictions of a region are considered geographically representative.
We recommend steps for improved automatic and human evaluations.
arXiv Detail & Related papers (2024-05-07T16:23:06Z) - Construction and Evaluation of Mandarin Multimodal Emotional Speech
Database [0.0]
The validity of the dimensional annotations is verified by statistical analysis of the annotation data.
The recognition rate of seven emotions is about 82% when using acoustic data alone.
The database is of high quality and can be used as an important source for speech analysis research.
arXiv Detail & Related papers (2024-01-14T17:56:36Z) - Seeking Subjectivity in Visual Emotion Distribution Learning [93.96205258496697]
Visual Emotion Analysis (VEA) aims to predict people's emotions towards different visual stimuli.
Existing methods often predict visual emotion distribution in a unified network, neglecting the inherent subjectivity in its crowd voting process.
We propose a novel Subjectivity Appraise-and-Match Network (SAMNet) to investigate the subjectivity in visual emotion distribution.
arXiv Detail & Related papers (2022-07-25T02:20:03Z) - BERTHA: Video Captioning Evaluation Via Transfer-Learned Human
Assessment [16.57721566105298]
This paper presents a new method based on a deep learning model to evaluate video captioning systems.
The model is based on BERT, which is a language model that has been shown to work well in multiple NLP tasks.
The aim is for the model to learn to perform an evaluation similar to that of a human.
arXiv Detail & Related papers (2022-01-25T11:29:58Z) - Affective Image Content Analysis: Two Decades Review and New
Perspectives [132.889649256384]
We will comprehensively review the development of affective image content analysis (AICA) in the recent two decades.
We will focus on the state-of-the-art methods with respect to three main challenges -- the affective gap, perception subjectivity, and label noise and absence.
We discuss some challenges and promising research directions in the future, such as image content and context understanding, group emotion clustering, and viewer-image interaction.
arXiv Detail & Related papers (2021-06-30T15:20:56Z) - Affect2MM: Affective Analysis of Multimedia Content Using Emotion
Causality [84.69595956853908]
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content.
Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors.
arXiv Detail & Related papers (2021-03-11T09:07:25Z) - Investigating Cultural Aspects in the Fundamental Diagram using
Convolutional Neural Networks and Simulation [0.0]
This paper focuses on an important attribute that varies across cultures, personal space, comparing Brazil and Germany.
We use CNNs to detect and track people in video sequences and Voronoi diagrams to determine the neighbor relation among people.
Based on personal-space analyses, we found that people's behavior is more similar in high-density populations and varies more at low and medium densities.
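As a rough illustration of the Voronoi-based neighbor relation mentioned above, the sketch below computes each person's Voronoi neighbors from synthetic 2D tracked positions using scipy's Delaunay triangulation (two points are Voronoi neighbors exactly when they share a Delaunay edge). The function name and the synthetic data are assumptions, not the paper's code.

```python
# Hedged sketch: Voronoi neighbors of tracked people via Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

def voronoi_neighbors(positions: np.ndarray) -> dict:
    """Return, for each tracked person index, the set of Voronoi-neighbor indices."""
    tri = Delaunay(positions)
    neighbors = {i: set() for i in range(len(positions))}
    for simplex in tri.simplices:                 # each simplex is a triangle of indices
        for i in simplex:
            neighbors[i].update(int(j) for j in simplex if j != i)
    return neighbors

if __name__ == "__main__":
    pts = np.random.default_rng(0).random((20, 2))   # synthetic 2D tracked positions
    nbrs = voronoi_neighbors(pts)
    # Personal space can then be estimated from distances to these neighbors.
    print(nbrs[0])
```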
arXiv Detail & Related papers (2020-09-30T14:44:04Z) - Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset
for Personality Assessment [50.15466026089435]
We present a novel peer-to-peer Hindi conversation dataset- Vyaktitv.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features, like income, cultural orientation, amongst several others, for all the participants.
arXiv Detail & Related papers (2020-08-31T17:44:28Z) - Investigating Bias in Deep Face Analysis: The KANFace Dataset and
Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences of its use.