Domain-specific Learning of Multi-scale Facial Dynamics for Apparent
Personality Traits Prediction
- URL: http://arxiv.org/abs/2209.04148v1
- Date: Fri, 9 Sep 2022 07:08:55 GMT
- Title: Domain-specific Learning of Multi-scale Facial Dynamics for Apparent
Personality Traits Prediction
- Authors: Fang Li
- Abstract summary: We propose a novel video-based automatic personality traits recognition approach.
It consists of: (1) a domain-specific facial behavior modelling module that extracts personality-related multi-scale short-term human facial behavior features; (2) a long-term behavior modelling module that summarizes all short-term features of a video as a long-term/video-level personality representation; and (3) a multi-task personality traits prediction module that models the underlying relationships among all traits and jointly predicts them based on the video-level personality representation.
- Score: 3.19935268158731
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Human personality decides various aspects of a person's daily life and
working behaviors. Since personality traits are relatively stable over time and
unique for each subject, previous approaches frequently infer personality from a
single frame or short-term behaviors. Moreover, most of them failed to
specifically extract person-specific and unique cues for personality
recognition. In this paper, we propose a novel video-based automatic
personality traits recognition approach which consists of: (1) a
domain-specific facial behavior modelling module that extracts
personality-related multi-scale short-term human facial behavior features; (2)
a long-term behavior modelling module that summarizes all short-term features
of a video as a long-term/video-level personality representation; and (3) a
multi-task personality traits prediction module that models the underlying
relationships among all traits and jointly predicts them based on the
video-level personality representation. We conducted experiments on the
ChaLearn First Impression dataset, and our approach achieved results comparable
to the state of the art. Importantly, we show that all three proposed modules
brought important benefits for personality recognition.
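The abstract describes the pipeline only at a high level. As a rough illustration, the three modules can be wired together as in the following minimal PyTorch-style sketch; the layer choices, feature dimensions, GRU aggregator, and five-trait output head are assumptions made here for clarity, not the authors' actual architecture.

```python
# Hypothetical sketch of the three-module pipeline described in the abstract.
# Shapes, layer choices, and the five-trait output are assumptions; the
# paper's actual architecture may differ.
import torch
import torch.nn as nn

class ShortTermFacialEncoder(nn.Module):
    """(1) Short-term facial behavior modelling: per-frame features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(        # stand-in for a face CNN
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, frames):                # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = self.backbone(frames.flatten(0, 1))
        return x.view(b, t, -1)               # (B, T, feat_dim)

class LongTermAggregator(nn.Module):
    """(2) Long-term behavior modelling: summarize short-term features per video."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)

    def forward(self, short_term):            # (B, T, feat_dim)
        _, h = self.rnn(short_term)
        return h[-1]                          # (B, feat_dim) video-level representation

class MultiTaskTraitHead(nn.Module):
    """(3) Multi-task prediction: shared layer plus one output per trait."""
    def __init__(self, feat_dim=256, n_traits=5):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(128, 1) for _ in range(n_traits)])

    def forward(self, video_repr):
        z = self.shared(video_repr)
        return torch.cat([head(z) for head in self.heads], dim=-1)  # (B, n_traits)

class PersonalityPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.short_term = ShortTermFacialEncoder()
        self.long_term = LongTermAggregator()
        self.traits = MultiTaskTraitHead()

    def forward(self, frames):
        return self.traits(self.long_term(self.short_term(frames)))

# usage: preds = PersonalityPipeline()(torch.randn(2, 16, 3, 112, 112))  # (2, 5)
```

The point of the sketch is the data flow the abstract outlines: per-frame short-term features, a single video-level vector, and a shared layer feeding one output per trait so that the traits are predicted jointly.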
Related papers
- MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes [74.82911268630463]
Talking face generation (TFG) aims to animate a target identity's face to create realistic talking videos.
MimicTalk exploits the rich knowledge from a NeRF-based person-agnostic generic model for improving the efficiency and robustness of personalized TFG.
Experiments show that our MimicTalk surpasses previous baselines regarding video quality, efficiency, and expressiveness.
arXiv Detail & Related papers (2024-10-09T10:12:37Z)
- Revealing Personality Traits: A New Benchmark Dataset for Explainable Personality Recognition on Dialogues [63.936654900356004]
Personality recognition aims to identify the personality traits implied in user data such as dialogues and social media posts.
We propose a novel task named Explainable Personality Recognition, aiming to reveal the reasoning process as supporting evidence of the personality trait.
arXiv Detail & Related papers (2024-09-29T14:41:43Z)
- Enhancing Textual Personality Detection toward Social Media: Integrating Long-term and Short-term Perspectives [21.548313630700033]
Textual personality detection aims to identify personality characteristics by analyzing user-generated content on social media platforms.
Recent literature has highlighted that personality encompasses both long-term stable traits and short-term dynamic states.
arXiv Detail & Related papers (2024-04-23T14:13:53Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z)
- Editing Personality for Large Language Models [73.59001811199823]
This paper introduces an innovative task focused on editing the personality traits of Large Language Models (LLMs).
We construct PersonalityEdit, a new benchmark dataset to address this task.
arXiv Detail & Related papers (2023-10-03T16:02:36Z)
- Personality-aware Human-centric Multimodal Reasoning: A New Task, Dataset and Baselines [32.82738983843281]
We introduce a new task called Personality-aware Human-centric Multimodal Reasoning (PHMR) (T1).
The goal of the task is to forecast the future behavior of a particular individual using multimodal information from past instances, while integrating personality factors.
The experimental results demonstrate that incorporating personality traits enhances human-centric multimodal reasoning performance.
arXiv Detail & Related papers (2023-04-05T09:09:10Z)
- Learning Person-specific Network Representation for Apparent Personality Traits Recognition [3.19935268158731]
We propose an apparent personality recognition approach that first trains a person-specific network for each subject.
We then encode the weights of the person-specific network to a graph representation, as the personality representation for the subject.
The experimental results show that our novel network weights-based approach outperformed most traditional latent feature-based approaches.
arXiv Detail & Related papers (2023-03-01T06:10:39Z)
- Learning signatures of decision making from many individuals playing the same game [54.33783158658077]
We design a predictive framework that learns representations to encode an individual's 'behavioral style'.
We apply our method to a large-scale behavioral dataset from 1,000 humans playing a 3-armed bandit task.
arXiv Detail & Related papers (2023-02-21T21:41:53Z)
- An Open-source Benchmark of Deep Learning Models for Audio-visual Apparent and Self-reported Personality Recognition [10.59440995582639]
Personality determines a wide variety of human daily and working behaviours, and is crucial for understanding human internal and external states.
In recent years, a large number of automatic personality computing approaches have been developed to predict either the apparent personality or self-reported personality of the subject based on non-verbal audio-visual behaviours.
In the absence of a standardized benchmark with consistent experimental settings, it is not only impossible to fairly compare the real performance of these personality computing models, but it is also difficult to reproduce them.
We present the first reproducible audio-visual benchmarking framework to provide a fair and consistent evaluation of eight existing personality computing models.
arXiv Detail & Related papers (2022-10-17T14:40:04Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO yields an improvement in facial expression recognition performance across six different datasets with very distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Characterizing Hirability via Personality and Behavior [4.187572199323744]
We examine relationships between personality and hirability measures on the First Impressions Candidate Screening dataset.
Modeling hirability as a discrete/continuous variable with the big-five personality traits as predictors, we utilize (a) apparent personality annotations, and (b) personality estimates obtained via audio, visual and textual cues for hirability prediction (HP).
We also examine the efficacy of a two-step HP process involving (1) personality estimation from multimodal behavioral cues, followed by (2) HP from personality estimates (a minimal sketch of this two-step setup follows this entry).
arXiv Detail & Related papers (2020-06-22T07:24:22Z)
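The two-step hirability prediction (HP) setup mentioned in the last entry can be illustrated with a minimal sketch: first regress personality traits from behavioral features, then predict hirability from the trait estimates alone. The scikit-learn models, feature dimensions, and random placeholder data below are assumptions for illustration only, not the pipeline used in the paper.

```python
# Hypothetical illustration of a two-step hirability prediction (HP) pipeline:
# step 1 regresses big-five trait scores from behavioral features,
# step 2 predicts hirability from the estimated traits only.
# Models, feature shapes, and data are placeholders, not the paper's setup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X_behavior = rng.normal(size=(200, 64))    # audio/visual/text features per candidate
y_traits = rng.uniform(size=(200, 5))      # big-five annotations (O, C, E, A, N)
y_hirability = rng.uniform(size=200)       # hirability labels

# Step 1: estimate personality traits from multimodal behavioral cues.
trait_model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X_behavior, y_traits)
traits_hat = trait_model.predict(X_behavior)

# Step 2: predict hirability from the estimated traits only.
hp_model = Ridge(alpha=1.0).fit(traits_hat, y_hirability)
print("hirability predictions:", hp_model.predict(traits_hat)[:3])
```

In this arrangement step 2 sees only the trait estimates, which is what makes a comparison against predicting hirability directly from raw behavioral cues meaningful.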