Vision based body gesture meta features for Affective Computing
- URL: http://arxiv.org/abs/2003.00809v1
- Date: Mon, 10 Feb 2020 14:38:16 GMT
- Title: Vision based body gesture meta features for Affective Computing
- Authors: Indigo J. D. Orton
- Abstract summary: I present a new type of feature, within the body modality, that represents meta information of gestures.
This differs from existing work by representing overall behaviour as a small set of aggregated meta features.
I introduce a new dataset of 65 video recordings of interviews with self-evaluated distress, personality, and demographic labels.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Early detection of psychological distress is key to effective treatment.
Automatic detection of distress, such as depression, is an active area of
research. Current approaches utilise vocal, facial, and bodily modalities. Of
these, the bodily modality is the least investigated, partially due to the
difficulty in extracting bodily representations from videos, and partially due
to the lack of viable datasets. Existing body modality approaches use automatic
categorization of expressions to represent body language as a series of
specific expressions, much like words within natural language. In this
dissertation I present a new type of feature, within the body modality, that
represents meta information of gestures, such as speed, and use it to predict a
non-clinical depression label. This differs from existing work by representing
overall behaviour as a small set of aggregated meta features derived from a
person's movement. In my method I extract pose estimation from videos, detect
gestures within body parts, extract meta information from individual gestures,
and finally aggregate these features to generate a small feature vector for use
in prediction tasks. I introduce a new dataset of 65 video recordings of
interviews with self-evaluated distress, personality, and demographic labels.
This dataset enables the development of features utilising the whole body in
distress detection tasks. I evaluate my newly introduced meta-features for
predicting depression, anxiety, perceived stress, somatic stress, five standard
personality measures, and gender. A linear regression based classifier using
these features achieves an 82.70% F1 score for predicting depression within my
novel dataset.
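As a rough illustration of this pipeline, the sketch below traces the flow from per-frame pose keypoints to a small aggregated feature vector. Everything here is an assumption of mine for illustration, not the dissertation's code: the function names, the speed-threshold gesture detector, the four aggregate features, and the use of scikit-learn's LogisticRegression as a stand-in for the linear classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def detect_gestures(keypoints, speed_threshold=0.02):
    """Segment frames into gestures: contiguous runs in which a body part
    (e.g. a wrist keypoint from a pose estimator) moves faster than a
    threshold. Returns a list of (start_frame, end_frame) pairs."""
    speeds = np.linalg.norm(np.diff(keypoints, axis=0), axis=1)
    moving = speeds > speed_threshold
    gestures, start = [], None
    for i, is_moving in enumerate(moving):
        if is_moving and start is None:
            start = i
        elif not is_moving and start is not None:
            gestures.append((start, i))
            start = None
    if start is not None:
        gestures.append((start, len(moving)))
    return gestures

def aggregate_meta_features(keypoints, gestures, fps=30.0):
    """Collapse per-gesture meta information (speed, duration, count)
    into one small fixed-length vector for the whole recording."""
    if not gestures:
        return np.zeros(4)
    speeds = np.linalg.norm(np.diff(keypoints, axis=0), axis=1)
    gesture_speeds = [speeds[s:e].mean() for s, e in gestures]
    durations = [(e - s) / fps for s, e in gestures]
    return np.array([np.mean(gesture_speeds), np.max(gesture_speeds),
                     np.mean(durations), len(gestures)])

# Hypothetical usage: X stacks one aggregated vector per interview video,
# y holds the binary non-clinical depression labels.
# clf = LogisticRegression().fit(X, y)
```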
Related papers
- A BERT-Based Summarization approach for depression detection [1.7363112470483526]
Depression is a globally prevalent mental disorder with potentially severe repercussions if not addressed.
Machine learning and artificial intelligence can autonomously detect depression indicators from diverse data sources.
Our study proposes text summarization as a preprocessing technique to reduce the length and complexity of input texts.
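A minimal sketch of the summarize-then-classify idea, using the Hugging Face transformers pipeline; the model name and length limits are my assumptions, not the paper's configuration.

```python
from transformers import pipeline

# Summarization as preprocessing: shrink long, intricate inputs before
# they reach the depression classifier.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def shorten(text: str) -> str:
    """Return a short summary to use in place of the raw input text."""
    return summarizer(text, max_length=60, min_length=10)[0]["summary_text"]
```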
arXiv Detail & Related papers (2024-09-13T02:14:34Z)
- LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced Personality Detection Model [58.887561071010985]
Personality detection aims to identify the personality traits underlying a person's social media posts.
Most existing methods learn post features directly by fine-tuning pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
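A hedged sketch of the augmentation idea; llm_generate is a hypothetical stand-in for whatever LLM completion API is used, and the prompt wording is my invention, not the paper's.

```python
from typing import List

def llm_generate(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call."""
    raise NotImplementedError

def augment_posts(posts: List[str]) -> List[str]:
    """Use an LLM to produce meaning-preserving rewrites of posts,
    enlarging the training data for a smaller detection model."""
    rewrites = [
        llm_generate(f"Paraphrase this post, keeping its tone and content:\n{p}")
        for p in posts
    ]
    return posts + rewrites
```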
arXiv Detail & Related papers (2024-03-12T12:10:18Z)
- Naturalistic Causal Probing for Morpho-Syntax [76.83735391276547]
We suggest a naturalistic strategy for input-level intervention on real world data in Spanish.
Using our approach, we isolate morpho-syntactic features from confounders in sentences.
We apply this methodology to analyze causal effects of gender and number on contextualized representations extracted from pre-trained models.
arXiv Detail & Related papers (2022-05-14T11:47:58Z)
- Affect-DML: Context-Aware One-Shot Recognition of Human Affect using Deep Metric Learning [29.262204241732565]
Existing methods assume that all emotions-of-interest are given a priori as annotated training examples.
We conceptualize one-shot recognition of emotions in context -- a new problem aimed at recognizing human affect states at a finer level of granularity from a single support sample.
All variants of our model clearly outperform the random baseline, while leveraging the semantic scene context consistently improves the learnt representations.
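A minimal sketch of the one-shot matching step, assuming an encoder already trained with deep metric learning; the cosine-similarity nearest-support rule is an illustrative choice, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

def one_shot_classify(query_emb: torch.Tensor,
                      support_embs: torch.Tensor) -> int:
    """query_emb: (d,) embedding of the query image.
    support_embs: (num_emotions, d), one embedding per support sample.
    Predicts the emotion whose single support embedding is nearest."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), support_embs, dim=1)
    return int(sims.argmax())
```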
arXiv Detail & Related papers (2021-11-30T10:35:20Z)
- Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data [74.60507696087966]
Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.
arXiv Detail & Related papers (2021-06-24T17:46:03Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of facial muscle movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test.
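As an illustrative simplification (the paper applies a proper functional F-test; this sketch substitutes a pointwise one-way ANOVA at each time step):

```python
import numpy as np
from scipy.stats import f_oneway

def pointwise_f_test(groups):
    """groups: list of (num_subjects, num_timesteps) arrays, one per
    emotional group, each row a face-muscle movement curve over time.
    Returns one ANOVA p-value per time step."""
    num_t = groups[0].shape[1]
    return np.array([f_oneway(*[g[:, t] for g in groups]).pvalue
                     for t in range(num_t)])
```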
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Regional Attention Network (RAN) for Head Pose and Fine-grained Gesture Recognition [9.131161856493486]
We propose a novel end-to-end Regional Attention Network (RAN), which is a fully convolutional neural network (CNN).
Our regions consist of one or more consecutive cells and are adapted from the strategies used in computing the HOG (Histogram of Oriented Gradients) descriptor, as sketched below.
The proposed approach outperforms the state-of-the-art by a considerable margin in different metrics.
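For reference, the HOG descriptor whose cell/block layout the regions adapt can be computed with scikit-image (0.19+ for the channel_axis argument); this shows only the underlying descriptor, not the RAN architecture.

```python
from skimage import data
from skimage.feature import hog

# Gradients are histogrammed per 8x8-pixel cell, then cells are grouped
# into 2x2 blocks for normalisation; RAN builds its regions from runs of
# such consecutive cells.
image = data.astronaut()
descriptor = hog(image, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), channel_axis=-1)
print(descriptor.shape)  # flattened, block-normalised histogram vector
```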
arXiv Detail & Related papers (2021-01-17T10:14:28Z)
- Pose-based Body Language Recognition for Emotion and Psychiatric Symptom Interpretation [75.3147962600095]
We propose an automated framework for body language based emotion recognition starting from regular RGB videos.
In collaboration with psychologists, we extend the framework for psychiatric symptom prediction.
Because a specific application domain of the proposed framework may only supply a limited amount of data, the framework is designed to work on a small training set.
arXiv Detail & Related papers (2020-10-30T18:45:16Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn representations that are better than visual-only ones.
Our experiments show that our "muscly-supervised" representation outperforms MoCo, a visual-only state-of-the-art method.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
- Multimodal Depression Severity Prediction from medical bio-markers using Machine Learning Tools and Technologies [0.0]
Depression is a leading cause of mental-health illness across the world.
The use of behavioural cues to automate depression diagnosis and stage prediction has increased in recent years.
The absence of labelled behavioural datasets and the vast number of possible variations prove to be major challenges in accomplishing the task.
arXiv Detail & Related papers (2020-09-11T20:44:28Z)
- Looking At The Body: Automatic Analysis of Body Gestures and Self-Adaptors in Psychological Distress [0.9624643581968987]
Psychological distress is a significant and growing issue in society.
Recent advances in pose estimation and deep learning have enabled new approaches to this modality and domain.
We propose a novel method to automatically detect self-adaptors and fidgeting, a subset of self-adaptors that has been shown to be correlated with psychological distress.
arXiv Detail & Related papers (2020-07-31T02:45:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.