Domain-specific Learning of Multi-scale Facial Dynamics for Apparent
Personality Traits Prediction
- URL: http://arxiv.org/abs/2209.04148v1
- Date: Fri, 9 Sep 2022 07:08:55 GMT
- Title: Domain-specific Learning of Multi-scale Facial Dynamics for Apparent
Personality Traits Prediction
- Authors: Fang Li
- Abstract summary: We propose a novel video-based automatic personality traits recognition approach.
It consists of: (1) a domain-specific facial behavior modelling module that extracts personality-related multi-scale short-term human facial behavior features; (2) a long-term behavior modelling module that summarizes all short-term features of a video as a long-term/video-level personality representation; and (3) a multi-task personality traits prediction module that models the underlying relationships among all traits and jointly predicts them based on the video-level personality representation.
- Score: 3.19935268158731
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Human personality shapes many aspects of a person's daily life and working
behaviors. Although personality traits are relatively stable over time and unique
to each subject, previous approaches frequently infer personality from a
single frame or from short-term behaviors. Moreover, most of them fail to
explicitly extract person-specific and unique cues for personality
recognition. In this paper, we propose a novel video-based automatic
personality traits recognition approach which consists of: (1) a
\textbf{domain-specific facial behavior modelling} module that extracts
personality-related multi-scale short-term human facial behavior features; (2)
a \textbf{long-term behavior modelling} module that summarizes all short-term
features of a video as a long-term/video-level personality representation; and
(3) a \textbf{multi-task personality traits prediction} module that models the
underlying relationships among all traits and jointly predicts them based on the
video-level personality representation. We conducted experiments on the
ChaLearn First Impression dataset, and our approach achieved results comparable
to the state-of-the-art. Importantly, we show that all three proposed modules
bring clear benefits for personality recognition.
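The three-module pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the multi-scale pooling, mean aggregation, and shared linear head below are all assumptions standing in for the learned modules, and every shape and function name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def short_term_features(frames, n_scales=3):
    """Module 1 (sketch): multi-scale short-term facial behavior features.
    'Multi-scale' is faked here by average-pooling per-frame features over
    windows of different lengths and concatenating the results."""
    feats = []
    for scale in range(1, n_scales + 1):
        window = 2 ** scale
        pooled = [frames[i:i + window].mean(axis=0)
                  for i in range(0, len(frames), window)]
        feats.append(np.mean(pooled, axis=0))
    return np.concatenate(feats)

def long_term_representation(segment_features):
    """Module 2 (sketch): summarize all short-term features of a video
    into one video-level personality representation (mean pooling)."""
    return np.mean(segment_features, axis=0)

def multi_task_head(video_repr, W, b):
    """Module 3 (sketch): a single shared linear layer predicting all five
    traits jointly, so its weights can reflect inter-trait correlations."""
    return W @ video_repr + b

# Toy video: 32 frames of 16-dim facial features, split into 4 segments.
frames = rng.normal(size=(32, 16))
segments = [short_term_features(frames[i:i + 8]) for i in range(0, 32, 8)]
video_repr = long_term_representation(np.stack(segments))

W = rng.normal(size=(5, video_repr.shape[0]))  # 5 outputs = OCEAN traits
b = np.zeros(5)
traits = multi_task_head(video_repr, W, b)
print(traits.shape)  # (5,)
```

Predicting all five traits through one shared head, rather than five independent regressors, is what lets a multi-task module exploit correlations among the traits.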
Related papers
- Enhancing Textual Personality Detection toward Social Media: Integrating Long-term and Short-term Perspectives [21.548313630700033]
Textual personality detection aims to identify personality characteristics by analyzing user-generated content on social media platforms.
Recent literature highlighted that personality encompasses both long-term stable traits and short-term dynamic states.
arXiv Detail & Related papers (2024-04-23T14:13:53Z) - LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced Personality Detection Model [58.887561071010985]
Personality detection aims to detect one's personality traits underlying social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z) - PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for
Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z) - Editing Personality for Large Language Models [73.59001811199823]
This paper introduces an innovative task focused on editing the personality traits of Large Language Models (LLMs)
We construct a new benchmark dataset PersonalityEdit to address this task.
arXiv Detail & Related papers (2023-10-03T16:02:36Z) - Personality-aware Human-centric Multimodal Reasoning: A New Task,
Dataset and Baselines [32.82738983843281]
We introduce a new task called Personality-aware Human-centric Multimodal Reasoning (PHMR) (T1)
The goal of the task is to forecast the future behavior of a particular individual using multimodal information from past instances, while integrating personality factors.
The experimental results demonstrate that incorporating personality traits enhances human-centric multimodal reasoning performance.
arXiv Detail & Related papers (2023-04-05T09:09:10Z) - Learning Person-specific Network Representation for Apparent Personality
Traits Recognition [3.19935268158731]
We propose an apparent personality recognition approach that first trains a person-specific network for each subject.
We then encode the weights of the person-specific network to a graph representation, as the personality representation for the subject.
The experimental results show that our novel network weights-based approach outperformed most traditional latent feature-based approaches.
arXiv Detail & Related papers (2023-03-01T06:10:39Z) - Learning signatures of decision making from many individuals playing the
same game [54.33783158658077]
We design a predictive framework that learns representations to encode an individual's 'behavioral style'
We apply our method to a large-scale behavioral dataset from 1,000 humans playing a 3-armed bandit task.
arXiv Detail & Related papers (2023-02-21T21:41:53Z) - An Open-source Benchmark of Deep Learning Models for Audio-visual
Apparent and Self-reported Personality Recognition [10.59440995582639]
Personality determines a wide variety of human daily and working behaviours, and is crucial for understanding human internal and external states.
In recent years, a large number of automatic personality computing approaches have been developed to predict either the apparent personality or self-reported personality of the subject based on non-verbal audio-visual behaviours.
In the absence of a standardized benchmark with consistent experimental settings, it is impossible to fairly compare the real performance of these personality computing models, and they are also difficult to reproduce.
We present the first reproducible audio-visual benchmarking framework to provide a fair and consistent evaluation of eight existing personality computing models.
arXiv Detail & Related papers (2022-10-17T14:40:04Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance over six different datasets with highly distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset
for Personality Assessment [50.15466026089435]
We present a novel peer-to-peer Hindi conversation dataset- Vyaktitv.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features, like income, cultural orientation, amongst several others, for all the participants.
arXiv Detail & Related papers (2020-08-31T17:44:28Z) - Characterizing Hirability via Personality and Behavior [4.187572199323744]
We examine relationships among personality and hirability measures on the First Impressions Candidate Screening dataset.
Modeling hirability as a discrete/continuous variable with the big-five personality traits as predictors, we utilize (a) apparent personality annotations, and (b) personality estimates obtained via audio, visual, and textual cues for hirability prediction (HP).
We also examine the efficacy of a two-step HP process involving (1) personality estimation from multimodal behavioral cues, followed by (2) HP from personality estimates.
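The two-step process above can be sketched on synthetic data. This is a hedged illustration, not the paper's method: step 1's trained multimodal regressor is replaced by a random linear map, and hirability is fit from the trait estimates by ordinary least squares; all data, dimensions, and weights below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1 (sketch): estimate big-five trait scores from multimodal
# behavioral features. A fixed random linear map stands in for a
# trained personality-estimation model.
n, d = 200, 12                      # samples, behavioral feature dim
X = rng.normal(size=(n, d))         # audio/visual/textual cues (synthetic)
W_traits = rng.normal(size=(d, 5))  # stand-in for the step-1 model
traits = X @ W_traits               # estimated big-five scores

# Synthetic ground-truth hirability: a linear function of the traits
# plus a small amount of noise.
w_true = np.array([0.5, -0.2, 0.3, 0.1, 0.4])
y = traits @ w_true + 0.1 * rng.normal(size=n)

# Step 2: least-squares fit of hirability on the trait estimates.
A = np.hstack([traits, np.ones((n, 1))])   # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))
```

Because the synthetic hirability is (nearly) linear in the traits, the second-stage fit recovers it with high R-squared; with real annotations the intermediate trait estimates would carry estimation error into step 2, which is exactly the trade-off the two-step design probes.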
arXiv Detail & Related papers (2020-06-22T07:24:22Z)