ForDigitStress: A multi-modal stress dataset employing a digital job
interview scenario
- URL: http://arxiv.org/abs/2303.07742v1
- Date: Tue, 14 Mar 2023 09:40:37 GMT
- Title: ForDigitStress: A multi-modal stress dataset employing a digital job
interview scenario
- Authors: Alexander Heimerl, Pooja Prajod, Silvan Mertes, Tobias Baur, Matthias
Kraus, Ailin Liu, Helen Risack, Nicolas Rohleder, Elisabeth Andr\'e, Linda
Becker
- Abstract summary: We present a multi-modal stress dataset that uses digital job interviews to induce stress.
The dataset provides multi-modal data from 40 participants, including audio, video, and physiological information.
In order to establish a baseline, five different machine learning classifiers have been trained and evaluated.
- Score: 48.781127275906435
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a multi-modal stress dataset that uses digital job interviews to
induce stress. The dataset provides multi-modal data from 40 participants,
including audio, video (motion capture, facial recognition, eye tracking), and
physiological information (photoplethysmography, electrodermal activity). In
addition, the dataset contains time-continuous annotations for stress and for
the emotions that occurred (e.g., shame, anger, anxiety, surprise). To
establish a baseline, five different machine learning classifiers (including
Support Vector Machine, K-Nearest Neighbors, Random Forest, and Long
Short-Term Memory network) have been trained and evaluated on the proposed
dataset for a binary stress classification task. The best-performing
classifier achieved an accuracy of 88.3% and an F1-score of 87.5%.
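The baseline described above can be illustrated with a minimal sketch. This is not the paper's pipeline: the features below are synthetic stand-ins generated with scikit-learn, not the actual ForDigitStress modalities (audio, video, PPG, EDA), and only one of the named classifiers (Random Forest) is shown.

```python
# Hedged sketch of a binary stress-classification baseline.
# All data here is synthetic; the real dataset's features are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for multimodal feature windows with binary stress labels.
X, y = make_classification(n_samples=800, n_features=24, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

acc = accuracy_score(y_test, pred)   # fraction of windows classified correctly
f1 = f1_score(y_test, pred)          # harmonic mean of precision and recall
print(f"accuracy={acc:.3f}  f1={f1:.3f}")
```

The same train/evaluate loop would be repeated for each of the other classifiers to produce a comparable baseline table.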
Related papers
- MMSci: A Dataset for Graduate-Level Multi-Discipline Multimodal Scientific Understanding [59.41495657570397]
This dataset includes figures such as schematic diagrams, simulated images, macroscopic/microscopic photos, and experimental visualizations.
We developed benchmarks for scientific figure captioning and multiple-choice questions, evaluating six proprietary and over ten open-source models.
The dataset and benchmarks will be released to support further research.
arXiv Detail & Related papers (2024-07-06T00:40:53Z)
- Personalization of Stress Mobile Sensing using Self-Supervised Learning [1.7598252755538808]
Stress is widely recognized as a major contributor to a variety of health issues.
Real-time stress prediction can enable digital interventions to immediately react at the onset of stress, helping to avoid many psychological and physiological symptoms such as heart rhythm irregularities.
However, major challenges with the prediction of stress using machine learning include the subjectivity and sparseness of the labels, a large feature space, relatively few labels, and a complex nonlinear and subjective relationship between the features and outcomes.
arXiv Detail & Related papers (2023-08-04T22:26:33Z)
- The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked Emotions, Cross-Cultural Humour, and Personalisation [69.13075715686622]
MuSe 2023 is a set of shared tasks addressing three different contemporary multimodal affect and sentiment analysis problems.
MuSe 2023 seeks to bring together a broad audience from different research communities.
arXiv Detail & Related papers (2023-05-05T08:53:57Z)
- Transfer Learning Based Diagnosis and Analysis of Lung Sound Aberrations [0.35232085374661276]
This work attempts to develop a non-invasive technique for identifying respiratory sounds acquired by a stethoscope and voice recording software.
A visual representation of each audio sample is constructed, allowing classification with techniques like those used effectively for images.
On the Respiratory Sound Database, the approach obtained cutting-edge results: 95% accuracy, 88% precision, 86% recall, and an F1 score of 81%.
arXiv Detail & Related papers (2023-03-15T04:46:57Z)
- Extracting Digital Biomarkers for Unobtrusive Stress State Screening from Multimodal Wearable Data [0.0]
We explore digital biomarkers related to stress modality by examining data collected from mobile phones and smartwatches.
We utilize machine learning techniques on the Tesserae dataset, specifically Random Forest, to extract stress biomarkers.
We can achieve 85% overall class accuracy by adjusting for class imbalance and adding extra features related to personality characteristics.
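The class-imbalance adjustment mentioned above can be sketched with scikit-learn's `class_weight="balanced"` option. This is a generic illustration on synthetic data, not the Tesserae dataset or the paper's exact adjustment method.

```python
# Hedged sketch: class-weighted Random Forest on imbalanced stress labels.
# The data and the 90/10 imbalance ratio are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Imbalanced labels: roughly 90% "not stressed", 10% "stressed".
X, y = make_classification(n_samples=1000, n_features=16, weights=[0.9, 0.1],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# class_weight="balanced" reweights samples inversely to class frequency,
# so the minority "stressed" class is not ignored by the forest.
clf = RandomForestClassifier(class_weight="balanced", random_state=1)
clf.fit(X_tr, y_tr)
bal_acc = balanced_accuracy_score(y_te, clf.predict(X_te))
print(f"balanced accuracy={bal_acc:.3f}")
```

Balanced accuracy (the mean of per-class recalls) is the more informative metric here, since plain accuracy can reach 90% by always predicting the majority class.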
arXiv Detail & Related papers (2023-03-08T10:14:58Z)
- Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z)
- Classification of Stress via Ambulatory ECG and GSR Data [0.0]
This work empirically assesses several approaches to detect stress using physiological data recorded in an ambulatory setting with self-reported stress annotations.
The optimal stress detection approach achieves 90.77% classification accuracy, 91.24% F1-score, 90.42% sensitivity, and 91.08% specificity.
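The four reported metrics all derive from the binary confusion matrix. As a small worked example (the counts below are illustrative, not taken from the paper):

```python
# How accuracy, F1, sensitivity, and specificity relate to confusion counts.
# tp/fp/tn/fn values here are made up for illustration.
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, F1, sensitivity (recall), specificity from raw counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)     # true-positive rate: stress detected
    specificity = tn / (tn + fp)     # true-negative rate: calm detected
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, f1, sensitivity, specificity

acc, f1, sens, spec = binary_metrics(tp=90, fp=10, tn=95, fn=5)
print(f"acc={acc:.3f} f1={f1:.3f} sens={sens:.3f} spec={spec:.3f}")
```

Sensitivity measures how many true stress episodes are caught, while specificity measures how many calm periods are correctly left alone; reporting both guards against a detector that over- or under-triggers.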
arXiv Detail & Related papers (2022-07-19T15:57:14Z)
- ReLearn: A Robust Machine Learning Framework in Presence of Missing Data for Multimodal Stress Detection from Physiological Signals [5.042598205771715]
We propose ReLearn, a robust machine learning framework for stress detection from biomarkers extracted from multimodal physiological signals.
ReLearn effectively copes with missing data and outliers both at training and inference phases.
Our experiments show that the proposed framework obtains a cross-validation accuracy of 86.8% even if more than 50% of samples within the features are missing.
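A generic version of the idea (classification that tolerates heavy missingness) can be sketched with an impute-then-classify pipeline. This is not ReLearn's actual method, and the data below is synthetic with values dropped at random to mimic the >50% missingness the summary mentions.

```python
# Hedged sketch: cross-validated classification with ~50% missing feature
# values, handled by median imputation. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=12, random_state=0)

# Knock out roughly half of all feature values at random.
mask = rng.random(X.shape) < 0.5
X[mask] = np.nan

# Pipeline: fill missing values with per-feature medians, then classify.
pipe = make_pipeline(SimpleImputer(strategy="median"),
                     RandomForestClassifier(random_state=0))
scores = cross_val_score(pipe, X, y, cv=5)
mean_acc = scores.mean()
print(f"mean CV accuracy={mean_acc:.3f}")
```

Putting the imputer inside the pipeline matters: it is refit on each training fold, so no statistics leak from the validation fold into the imputation.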
arXiv Detail & Related papers (2021-04-29T11:53:01Z)
- Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset for Personality Assessment [50.15466026089435]
We present a novel peer-to-peer Hindi conversation dataset- Vyaktitv.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features, such as income and cultural orientation, amongst several others, for all the participants.
arXiv Detail & Related papers (2020-08-31T17:44:28Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.