Looking At The Body: Automatic Analysis of Body Gestures and
Self-Adaptors in Psychological Distress
- URL: http://arxiv.org/abs/2007.15815v1
- Date: Fri, 31 Jul 2020 02:45:00 GMT
- Title: Looking At The Body: Automatic Analysis of Body Gestures and
Self-Adaptors in Psychological Distress
- Authors: Weizhe Lin, Indigo Orton, Qingbiao Li, Gabriela Pavarini, Marwa
Mahmoud
- Abstract summary: Psychological distress is a significant and growing issue in society.
Recent advances in pose estimation and deep learning have enabled new approaches to this modality and domain.
We propose a novel method to automatically detect self-adaptors and fidgeting, a subset of self-adaptors that has been shown to be correlated with psychological distress.
- Score: 0.9624643581968987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Psychological distress is a significant and growing issue in society.
Automatic detection, assessment, and analysis of such distress is an active
area of research. Compared to modalities such as face, head, and vocal,
research investigating the use of the body modality for these tasks is
relatively sparse. This is, in part, due to the limited available datasets and
difficulty in automatically extracting useful body features. Recent advances in
pose estimation and deep learning have enabled new approaches to this modality
and domain. To enable this research, we have collected and analyzed a new
dataset containing full body videos for short interviews and self-reported
distress labels. We propose a novel method to automatically detect
self-adaptors and fidgeting, a subset of self-adaptors that has been shown to
be correlated with psychological distress. We perform analysis on statistical
body gestures and fidgeting features to explore how distress levels affect
participants' behaviors. We then propose a multi-modal approach that combines
different feature representations using Multi-modal Deep Denoising
Auto-Encoders and Improved Fisher Vector Encoding. We demonstrate that our
proposed model, combining audio-visual features with automatically detected
fidgeting behavioral cues, can successfully predict distress levels in a
dataset labeled with self-reported anxiety and depression levels.
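The fidgeting cues described above are extracted from pose-estimation output. As a rough illustration of how such a detector could be built, the sketch below scores hand-to-body fidgeting from a sequence of 2D keypoints; the keypoint indices, thresholds, and the dwell-plus-motion heuristic are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def fidgeting_score(keypoints, hand_idx, target_idx, fps=30,
                    contact_thresh=0.1, motion_thresh=0.02, win_sec=2.0):
    """Score hand-to-body fidgeting from a (T, K, 2) array of normalised
    2D pose keypoints.  Returns the fraction of windows in which the hand
    stays near the target region (e.g. face or thigh) while still moving,
    a crude proxy for a self-adaptor.  All thresholds are illustrative,
    not values from the paper."""
    hand = keypoints[:, hand_idx, :]           # (T, 2) hand trajectory
    target = keypoints[:, target_idx, :]       # (T, 2) body-region trajectory

    dist = np.linalg.norm(hand - target, axis=1)            # hand-to-region distance
    speed = np.linalg.norm(np.diff(hand, axis=0), axis=1)   # per-frame hand motion
    speed = np.append(speed, 0.0)

    win = int(win_sec * fps)
    flags = []
    for start in range(0, len(dist) - win, win):
        d = dist[start:start + win]
        s = speed[start:start + win]
        # "fidgeting" window: hand dwells near the region but keeps moving
        flags.append(d.mean() < contact_thresh and s.mean() > motion_thresh)
    return float(np.mean(flags)) if flags else 0.0
```

For example, `fidgeting_score(pose_seq, hand_idx=4, target_idx=0)` would score hand-to-head fidgeting under a COCO-style keypoint layout (the index choices are again hypothetical).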
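The fusion model named in the abstract combines modality-specific feature representations with Multi-modal Deep Denoising Auto-Encoders. A minimal PyTorch sketch of that general architecture follows; the two-modality setup, layer sizes, and noise level are placeholders rather than the published configuration, and the fused code `z` would feed a downstream distress predictor.

```python
import torch
import torch.nn as nn

class MultiModalDenoisingAE(nn.Module):
    """Toy multi-modal deep denoising auto-encoder: each modality is
    corrupted with noise, encoded separately, fused into a shared latent
    code, then decoded back to the clean inputs.  Dimensions are
    illustrative only."""
    def __init__(self, dim_audio=128, dim_body=64, dim_latent=32, noise=0.1):
        super().__init__()
        self.noise = noise
        self.enc_audio = nn.Sequential(nn.Linear(dim_audio, 64), nn.ReLU())
        self.enc_body = nn.Sequential(nn.Linear(dim_body, 64), nn.ReLU())
        self.fuse = nn.Linear(64 + 64, dim_latent)     # shared latent code
        self.dec_audio = nn.Linear(dim_latent, dim_audio)
        self.dec_body = nn.Linear(dim_latent, dim_body)

    def forward(self, audio, body):
        # denoising: corrupt the inputs, reconstruct the clean versions
        a = audio + self.noise * torch.randn_like(audio)
        b = body + self.noise * torch.randn_like(body)
        z = self.fuse(torch.cat([self.enc_audio(a), self.enc_body(b)], dim=-1))
        return self.dec_audio(z), self.dec_body(z), z

# Training-step sketch: reconstruct clean features, then reuse the fused
# code z as input to a distress regressor/classifier.
model = MultiModalDenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
audio_feats = torch.randn(8, 128)   # placeholder batches
body_feats = torch.randn(8, 64)
rec_a, rec_b, z = model(audio_feats, body_feats)
loss = nn.functional.mse_loss(rec_a, audio_feats) + nn.functional.mse_loss(rec_b, body_feats)
opt.zero_grad()
loss.backward()
opt.step()
```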
Related papers
- EmoScan: Automatic Screening of Depression Symptoms in Romanized Sinhala Tweets [0.0]
This work explores the utilization of Romanized Sinhala social media data to identify individuals at risk of depression.
A machine learning-based framework is presented for the automatic screening of depression symptoms by analyzing language patterns, sentiment, and behavioural cues.
arXiv Detail & Related papers (2024-03-28T10:31:09Z)
- Reliability Analysis of Psychological Concept Extraction and Classification in User-penned Text [9.26840677406494]
We use the LoST dataset to capture nuanced textual cues that suggest the presence of low self-esteem in the posts of Reddit users.
Our findings suggest the need to shift the focus of PLMs from Trigger and Consequences to a more comprehensive explanation.
arXiv Detail & Related papers (2024-01-12T17:19:14Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- DEPAC: a Corpus for Depression and Anxiety Detection from Speech [3.2154432166999465]
We introduce a novel mental distress analysis audio dataset DEPAC, labeled based on established thresholds on depression and anxiety screening tools.
This large dataset comprises multiple speech tasks per individual, as well as relevant demographic information.
We present a feature set consisting of hand-curated acoustic and linguistic features, which were found effective in identifying signs of mental illnesses in human speech.
arXiv Detail & Related papers (2023-06-20T12:21:06Z)
- What's on your mind? A Mental and Perceptual Load Estimation Framework towards Adaptive In-vehicle Interaction while Driving [55.41644538483948]
We analyze the effects of mental workload and perceptual load on psychophysiological dimensions.
We classify the mental and perceptual load levels through the fusion of these measurements.
We report up to 89% mental workload classification accuracy and provide a real-time minimally-intrusive solution.
arXiv Detail & Related papers (2022-08-10T21:19:49Z)
- Bodily Behaviors in Social Interaction: Novel Annotations and State-of-the-Art Evaluation [0.0]
We present BBSI, the first set of annotations of complex Bodily Behaviors embedded in continuous Social Interactions.
Based on previous work in psychology, we manually annotated 26 hours of spontaneous human behavior.
We adapt the Pyramid Dilated Attention Network (PDAN), a state-of-the-art approach for human action detection.
arXiv Detail & Related papers (2022-07-26T11:24:00Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most frequently used nonverbal cue is speaking activity, the most common computational method is the support vector machine, the most typical interaction environment is a meeting of 3-4 persons, and the most common sensing approach is microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity, and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z)
- Vision based body gesture meta features for Affective Computing [0.0]
I present a new type of feature, within the body modality, that represents meta information of gestures.
This differs from existing work by representing overall behaviour as a small set of aggregated meta features.
I introduce a new dataset of 65 video recordings of interviews with self-evaluated distress, personality, and demographic labels.
arXiv Detail & Related papers (2020-02-10T14:38:16Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
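The last entry above pairs a deep convolutional autoencoder with a support vector regressor. A minimal sketch of that encoder-features-plus-SVR pattern is shown below; the network shape, 48x48 face crops, and valence labels are placeholders, and a real pipeline would first train the full autoencoder with a reconstruction loss before fitting the regressor on its latent codes.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR

# Tiny convolutional encoder standing in for the autoencoder's encoder half;
# a real model would be trained jointly with a decoder on reconstruction.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 12 * 12, 64),
)

faces = torch.randn(100, 1, 48, 48)       # placeholder face crops
valence = np.random.uniform(-1, 1, 100)   # placeholder continuous labels

with torch.no_grad():
    feats = encoder(faces).numpy()        # latent features per frame

# Support vector regressor maps latent features to a continuous emotion value.
svr = SVR(kernel="rbf", C=1.0)
svr.fit(feats, valence)
pred = svr.predict(feats[:5])
```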
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.