Joint modeling for learning decision-making dynamics in behavioral experiments
- URL: http://arxiv.org/abs/2506.02394v2
- Date: Mon, 28 Jul 2025 14:47:34 GMT
- Title: Joint modeling for learning decision-making dynamics in behavioral experiments
- Authors: Yuan Bian, Xingche Guo, Yuanjia Wang
- Abstract summary: Major depressive disorder (MDD) is a leading cause of disability and mortality. We propose a novel framework that integrates the reinforcement learning model and the drift-diffusion model. Our framework reveals that MDD patients exhibit lower overall engagement than healthy controls.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Major depressive disorder (MDD), a leading cause of disability and mortality, is associated with reward-processing abnormalities and concentration issues. Motivated by the probabilistic reward task from the Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care (EMBARC) study, we propose a novel framework that integrates the reinforcement learning (RL) model and drift-diffusion model (DDM) to jointly analyze reward-based decision-making with response times. To account for emerging evidence suggesting that decision-making may alternate between multiple interleaved strategies, we model latent state switching using a hidden Markov model (HMM). In the "engaged" state, decisions follow an RL-DDM, simultaneously capturing reward processing, decision dynamics, and temporal structure. In contrast, in the "lapsed" state, decision-making is modeled using a simplified DDM, where specific parameters are fixed to approximate random guessing with equal probability. The proposed method is implemented using a computationally efficient generalized expectation-maximization (EM) algorithm with forward-backward procedures. Through extensive numerical studies, we demonstrate that our proposed method outperforms competing approaches across various reward-generating distributions, under both strategy-switching and non-switching scenarios, as well as in the presence of input perturbations. When applied to the EMBARC study, our framework reveals that MDD patients exhibit lower overall engagement than healthy controls and experience longer decision times when they do engage. Additionally, we show that neuroimaging measures of brain activities are associated with decision-making characteristics in the "engaged" state but not in the "lapsed" state, providing evidence of brain-behavior association specific to the "engaged" state.
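To make the latent-state machinery concrete, here is a minimal sketch of the forward (filtering) pass that a forward-backward EM procedure builds on, for a two-state HMM over "engaged"/"lapsed" states. This is an illustration, not the authors' implementation: the transition matrix, initial distribution, and per-trial emission likelihoods below are made-up toy numbers. In the paper's model the emission likelihoods would instead come from an RL-DDM (engaged state) or a simplified DDM (lapsed state) evaluated on each trial's choice and response time.

```python
import numpy as np

def forward_filter(emission, trans, init):
    """Normalized forward pass for an HMM.

    emission : (T, K) array, likelihood of trial t's data under state k
    trans    : (K, K) state-transition matrix, rows sum to 1
    init     : (K,) initial state distribution

    Returns the filtered state probabilities P(state_t | data_1..t)
    and the total log-likelihood of the sequence.
    """
    T, K = emission.shape
    alpha = np.zeros((T, K))
    loglik = 0.0
    pred = init                      # one-step-ahead state prediction
    for t in range(T):
        a = pred * emission[t]       # combine prediction with evidence
        c = a.sum()                  # normalizer; its log accumulates the likelihood
        alpha[t] = a / c
        loglik += np.log(c)
        pred = alpha[t] @ trans      # propagate through the transition matrix
    return alpha, loglik

# Toy example: "sticky" states, with the last trial's evidence favoring
# state 1 ("lapsed").
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
init = np.array([0.5, 0.5])
emission = np.array([[0.8, 0.2],
                     [0.7, 0.3],
                     [0.1, 0.9]])
alpha, ll = forward_filter(emission, trans, init)
```

A full EM iteration would pair this with a matching backward pass to obtain state posteriors, then re-estimate the RL-DDM and transition parameters against those posteriors; the sticky transition matrix above reflects the common modeling choice that subjects persist in a strategy across trials.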
Related papers
- Spiking Neural Models for Decision-Making Tasks with Learning [0.29998889086656577]
We propose a biologically plausible Spiking Neural Network (SNN) model for decision-making that incorporates a learning mechanism. This work provides a significant step toward integrating biologically relevant neural mechanisms into cognitive models.
arXiv Detail & Related papers (2025-06-10T09:19:40Z)
- Dynamic Programming Techniques for Enhancing Cognitive Representation in Knowledge Tracing [125.75923987618977]
We propose the Cognitive Representation Dynamic Programming based Knowledge Tracing (CRDP-KT) model. It uses a dynamic programming algorithm to optimize cognitive representations based on question difficulty and the performance intervals between questions. This provides more accurate and systematic input features for subsequent model training, thereby minimizing distortion in the simulation of cognitive states.
arXiv Detail & Related papers (2025-06-03T14:44:48Z)
- A Foundational Brain Dynamics Model via Stochastic Optimal Control [15.8358479596609]
We introduce a foundational model for brain dynamics that utilizes stochastic optimal control (SOC) and amortized inference. Our method features a continuous-discrete state space model (SSM) that can robustly handle the intricate and noisy nature of fMRI signals. Our model attains state-of-the-art results across a variety of downstream tasks, including demographic prediction, trait analysis, disease diagnosis, and prognosis.
arXiv Detail & Related papers (2025-02-07T12:57:26Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- HMM for Discovering Decision-Making Dynamics Using Reinforcement Learning Experiments [5.857093069873734]
Evidence indicates that reward processing abnormalities may serve as a behavioral marker for MDD.
Recent findings suggest the inadequacy of characterizing reward learning solely based on a single RL model.
We propose a novel RL-HMM framework for analyzing reward-based decision-making.
arXiv Detail & Related papers (2024-01-25T04:03:32Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Learning non-Markovian Decision-Making from State-only Sequences [57.20193609153983]
We develop model-based imitation learning from state-only sequences using a non-Markov Decision Process (nMDP).
We demonstrate the efficacy of the proposed method in a path planning task with non-Markovian constraints.
arXiv Detail & Related papers (2023-06-27T02:26:01Z)
- Decision-Dependent Distributionally Robust Markov Decision Process Method in Dynamic Epidemic Control [4.644416582073023]
The Susceptible-Exposed-Infectious-Recovered (SEIR) model is widely used to represent the spread of infectious diseases.
We present a Distributionally Robust Markov Decision Process (DRMDP) approach for addressing the dynamic epidemic control problem.
arXiv Detail & Related papers (2023-06-24T20:19:04Z)
- On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
arXiv Detail & Related papers (2022-06-27T06:20:37Z)
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- Identification of brain states, transitions, and communities using functional MRI [0.5872014229110214]
We propose a Bayesian model-based characterization of latent brain states and showcase a novel method based on posterior predictive discrepancy.
Our results obtained through an analysis of task-fMRI data show appropriate lags between external task demands and change-points between brain states.
arXiv Detail & Related papers (2021-01-26T08:10:00Z)
- MM-KTD: Multiple Model Kalman Temporal Differences for Reinforcement Learning [36.14516028564416]
This paper proposes an innovative Multiple Model Kalman Temporal Difference (MM-KTD) framework to learn optimal control policies.
An active learning method is proposed to enhance the sampling efficiency of the system.
Experimental results show the superiority of the MM-KTD framework in comparison to its state-of-the-art counterparts.
arXiv Detail & Related papers (2020-05-30T06:39:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.