Interpretable Video based Stress Detection with Self-Refine Chain-of-thought Reasoning
- URL: http://arxiv.org/abs/2410.09449v1
- Date: Sat, 12 Oct 2024 09:06:09 GMT
- Title: Interpretable Video based Stress Detection with Self-Refine Chain-of-thought Reasoning
- Authors: Yi Dai
- Abstract summary: We propose a novel interpretable approach for video-based stress detection.
Our method focuses on extracting subtle behavioral and physiological cues from video sequences that indicate stress levels.
We evaluate our approach on several public and private datasets, demonstrating its superior performance in comparison to traditional video-based stress detection methods.
- Score: 4.541582055558865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stress detection is a critical area of research with significant implications for health monitoring and intervention systems. In this paper, we propose a novel interpretable approach for video-based stress detection, leveraging self-refine chain-of-thought reasoning to enhance both accuracy and transparency in decision-making processes. Our method focuses on extracting subtle behavioral and physiological cues from video sequences that indicate stress levels. By incorporating a chain-of-thought reasoning mechanism, the system refines its predictions iteratively, ensuring that the decision-making process can be traced and explained. The model also learns to self-refine through feedback loops, improving its reasoning capabilities over time. We evaluate our approach on several public and private datasets, demonstrating its superior performance in comparison to traditional video-based stress detection methods. Additionally, we provide comprehensive insights into the interpretability of the model's predictions, making the system highly valuable for applications in both healthcare and human-computer interaction domains.
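The iterative reason-critique-refine loop described in the abstract can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's actual implementation: the `Cue` type, the aggregation rule in `reason_over_cues`, and the outlier-based `critique` feedback are hypothetical stand-ins for the learned components, shown only to make the control flow of self-refine chain-of-thought concrete.

```python
# Minimal sketch of a self-refine chain-of-thought loop for stress scoring.
# All names (Cue, reason_over_cues, critique) and rules are illustrative
# assumptions, not the paper's actual method.
from dataclasses import dataclass


@dataclass
class Cue:
    name: str      # e.g. "blink_rate", "jaw_tension" (hypothetical cues)
    score: float   # normalized evidence of stress in [0, 1]


def reason_over_cues(cues, adjustment=0.0):
    """One chain-of-thought pass: explain each cue, then aggregate."""
    chain = [f"{c.name} suggests stress level {c.score:.2f}" for c in cues]
    prediction = sum(c.score for c in cues) / len(cues) + adjustment
    return chain, min(max(prediction, 0.0), 1.0)


def critique(prediction, cues):
    """Feedback step: flag cues that strongly disagree with the prediction."""
    outliers = [c for c in cues if abs(c.score - prediction) > 0.4]
    if not outliers:
        return None  # chain is self-consistent; stop refining
    # Nudge the prediction toward the disagreeing cues (toy refinement rule).
    return 0.5 * sum(c.score - prediction for c in outliers) / len(outliers)


def self_refine(cues, max_rounds=3):
    """Iterate reason -> critique -> refine; keep the full trace."""
    adjustment, history = 0.0, []
    for _ in range(max_rounds):
        chain, pred = reason_over_cues(cues, adjustment)
        history.append((chain, pred))
        feedback = critique(pred, cues)
        if feedback is None:
            break
        adjustment += feedback
    return history  # the trace keeps the decision process inspectable


cues = [Cue("blink_rate", 0.9), Cue("voice_pitch", 0.8), Cue("posture", 0.1)]
trace = self_refine(cues)
```

The point of returning the full `history` rather than a single score is the interpretability claim: each round's reasoning chain can be traced and explained, which is what distinguishes this style of detector from an opaque end-to-end classifier.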
Related papers
- Unified Causality Analysis Based on the Degrees of Freedom [1.2289361708127877]
This paper presents a unified method capable of identifying fundamental causal relationships between pairs of systems.
By analyzing the degrees of freedom in the system, our approach provides a more comprehensive understanding of both causal influence and hidden confounders.
This unified framework is validated through theoretical models and simulations, demonstrating its robustness and potential for broader application.
arXiv Detail & Related papers (2024-10-25T10:57:35Z)
- Early stopping by correlating online indicators in neural networks [0.24578723416255746]
We propose a novel technique to identify overfitting phenomena when training the learner.
Our proposal exploits the correlation over time in a collection of online indicators.
As opposed to previous approaches focused on a single criterion, we take advantage of subsidiarities between independent assessments.
arXiv Detail & Related papers (2024-02-04T14:57:20Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- Causal Analysis for Robust Interpretability of Neural Networks [0.2519906683279152]
We develop a robust interventional-based method to capture cause-effect mechanisms in pre-trained neural networks.
We apply our method to vision models trained on classification tasks.
arXiv Detail & Related papers (2023-05-15T18:37:24Z)
- A Self-supervised Framework for Improved Data-Driven Monitoring of Stress via Multi-modal Passive Sensing [7.084068935028644]
We propose a multi-modal semi-supervised framework for tracking physiological precursors of the stress response.
Our methodology enables utilizing multi-modal data of differing domains and resolutions from wearable devices.
We perform training experiments using a corpus of real-world data on perceived stress.
arXiv Detail & Related papers (2023-03-24T20:34:46Z)
- An Inter-observer consistent deep adversarial training for visual scanpath prediction [66.46953851227454]
We propose an inter-observer consistent adversarial training approach for scanpath prediction through a lightweight deep neural network.
We show the competitiveness of our approach in regard to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-14T13:22:29Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
- Untangling tradeoffs between recurrence and self-attention in neural networks [81.30894993852813]
We present a formal analysis of how self-attention affects gradient propagation in recurrent networks.
We prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies.
We propose a relevancy screening mechanism that allows for a scalable use of sparse self-attention with recurrence.
arXiv Detail & Related papers (2020-06-16T19:24:25Z)
- Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models [41.58945927669956]
We argue that reliability and interpretability are not necessarily disparate objectives, and propose to utilize prediction calibration to meet both.
Our approach is comprised of a calibration-driven learning method, which is also used to design an interpretability technique based on counterfactual reasoning.
arXiv Detail & Related papers (2020-04-27T22:15:17Z)
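For context on the calibration idea in the last entry, here is a minimal sketch of temperature scaling, a standard post-hoc calibration technique. This is only an assumed stand-in for illustration; the linked paper proposes its own calibration-driven learning method, which need not resemble this.

```python
# Illustrative sketch of post-hoc calibration via temperature scaling.
# A standard technique shown for context; not the linked paper's method.
import math


def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature > 1 softens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def negative_log_likelihood(logits_batch, labels, temperature):
    """Average NLL of held-out examples under a given temperature."""
    nll = 0.0
    for logits, y in zip(logits_batch, labels):
        nll -= math.log(softmax(logits, temperature)[y])
    return nll / len(labels)


def fit_temperature(logits_batch, labels, grid=None):
    """Pick the temperature minimizing held-out NLL (simple grid search)."""
    grid = grid or [0.5 + 0.1 * i for i in range(30)]
    return min(grid, key=lambda t: negative_log_likelihood(logits_batch, labels, t))


# Overconfident logits on a small held-out set: the model assigns near-certain
# probabilities, but the last example's label disagrees with its argmax.
logits = [[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.5, 0.0, 3.5]]
labels = [0, 1, 1]
t_star = fit_temperature(logits, labels)
```

Because the logits are overconfident relative to the labels, the fitted temperature comes out above 1, flattening the predicted distributions without changing which class each prediction ranks first.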
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.