Understanding Mental States in Active and Autonomous Driving with EEG
- URL: http://arxiv.org/abs/2512.09190v1
- Date: Tue, 09 Dec 2025 23:30:52 GMT
- Title: Understanding Mental States in Active and Autonomous Driving with EEG
- Authors: Prithila Angkan, Paul Hungler, Ali Etemad
- Abstract summary: This paper presents the first EEG-based comparison of cognitive load, fatigue, valence, and arousal across the two driving modes. Although both modes evoke similar trends across complexity levels, the intensity of mental states and the underlying neural activation differ substantially.
- Score: 33.344042380346245
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding how driver mental states differ between active and autonomous driving is critical for designing safe human-vehicle interfaces. This paper presents the first EEG-based comparison of cognitive load, fatigue, valence, and arousal across the two driving modes. Using data from 31 participants performing identical tasks in both scenarios of three different complexity levels, we analyze temporal patterns, task-complexity effects, and channel-wise activation differences. Our findings show that although both modes evoke similar trends across complexity levels, the intensity of mental states and the underlying neural activation differ substantially, indicating a clear distribution shift between active and autonomous driving. Transfer-learning experiments confirm that models trained on active driving data generalize poorly to autonomous driving and vice versa. We attribute this distribution shift primarily to differences in motor engagement and attentional demands between the two driving modes, which lead to distinct spatial and temporal EEG activation patterns. Although autonomous driving results in lower overall cortical activation, participants continue to exhibit measurable fluctuations in cognitive load, fatigue, valence, and arousal associated with readiness to intervene, task-evoked emotional responses, and monotony-related passive fatigue. These results emphasize the need for scenario-specific data and models when developing next-generation driver monitoring systems for autonomous vehicles.
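The abstract's transfer-learning finding — models trained on one driving mode generalize poorly to the other — can be illustrated with a minimal synthetic sketch. Everything here is an illustrative assumption (the feature dimensionality, the mean-shift model of the domain gap, and the logistic-regression classifier), not the authors' actual EEG pipeline:

```python
# Minimal sketch of a cross-mode transfer-learning evaluation on synthetic
# EEG-like features. The "distribution shift" between active and autonomous
# driving is modeled as a simple mean shift (an assumption for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_mode_data(n, shift):
    # Two classes (e.g. low vs. high cognitive load) in a 4-D feature space;
    # `shift` moves the whole feature distribution to mimic a mode change.
    X0 = rng.normal(0.0 + shift, 1.0, (n, 4))
    X1 = rng.normal(1.5 + shift, 1.0, (n, 4))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_active, y_active = make_mode_data(200, shift=0.0)
X_auto, y_auto = make_mode_data(200, shift=2.0)  # shifted target domain

clf = LogisticRegression().fit(X_active, y_active)
acc_in = accuracy_score(y_active, clf.predict(X_active))
acc_cross = accuracy_score(y_auto, clf.predict(X_auto))
print(f"within-mode accuracy: {acc_in:.2f}, cross-mode accuracy: {acc_cross:.2f}")
```

Even this toy setup reproduces the qualitative effect reported in the abstract: a classifier that performs well within its training mode degrades sharply when the feature distribution shifts, motivating scenario-specific data and models.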
Related papers
- DriveAgent-R1: Advancing VLM-based Autonomous Driving with Active Perception and Hybrid Thinking [33.98300989562812]
We introduce DriveAgent-R1, the first autonomous driving agent capable of active perception for planning. In complex scenarios, DriveAgent-R1 proactively invokes tools to perform visual reasoning, firmly grounding its decisions in visual evidence. We propose a hybrid thinking framework, inspired by human driver cognitive patterns, allowing the agent to adaptively switch between efficient text-only reasoning and robust tool-augmented visual reasoning.
arXiv Detail & Related papers (2025-07-28T14:33:15Z) - Markov Regime-Switching Intelligent Driver Model for Interpretable Car-Following Behavior [19.229274803939983]
We introduce a regime-switching framework that allows driving behavior to be governed by different IDM parameter sets. We instantiate the framework using a Factorial Hidden Markov Model with IDM dynamics.
arXiv Detail & Related papers (2025-06-17T17:55:42Z) - A Driving Regime-Embedded Deep Learning Framework for Modeling Intra-Driver Heterogeneity in Multi-Scale Car-Following Dynamics [5.579243411257874]
We propose a novel data-driven car-following framework that embeds discrete driving regimes into vehicular motion predictions. The proposed hybrid deep learning architecture combines Gated Recurrent Units for discrete driving regime classification with Long Short-Term Memory networks for continuous kinematic prediction. The framework significantly reduces prediction errors for acceleration (maximum MSE improvement of 58.47%), speed, and spacing metrics while reproducing critical traffic phenomena.
arXiv Detail & Related papers (2025-06-06T09:19:33Z) - Predicting Multitasking in Manual and Automated Driving with Optimal Supervisory Control [2.0794380287086214]
This paper presents a computational cognitive model that simulates human multitasking while driving. Based on optimal supervisory control theory, the model predicts how multitasking adapts to variations in driving demands, interactive tasks, and automation levels.
arXiv Detail & Related papers (2025-03-23T08:56:53Z) - DriveTransformer: Unified Transformer for Scalable End-to-End Autonomous Driving [62.62464518137153]
DriveTransformer is a simplified E2E-AD framework designed for ease of scaling. It is composed of three unified operations: task self-attention, sensor cross-attention, and temporal cross-attention. It achieves state-of-the-art performance in both the simulated closed-loop benchmark Bench2Drive and the real-world open-loop benchmark nuScenes at high FPS.
arXiv Detail & Related papers (2025-03-07T11:41:18Z) - Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In human-object interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale consistent plan for the whole activity and (2) the small-scale child interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z) - Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z) - What's on your mind? A Mental and Perceptual Load Estimation Framework towards Adaptive In-vehicle Interaction while Driving [55.41644538483948]
We analyze the effects of mental workload and perceptual load on psychophysiological dimensions.
We classify the mental and perceptual load levels through the fusion of these measurements.
We report up to 89% mental workload classification accuracy and provide a real-time minimally-intrusive solution.
arXiv Detail & Related papers (2022-08-10T21:19:49Z)
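Two of the car-following entries above build on the Intelligent Driver Model (IDM). For reference, here is a minimal sketch of the standard IDM acceleration law evaluated under two illustrative parameter sets, echoing the regime-switching idea from the Markov Regime-Switching entry. The parameter values are assumptions for demonstration only, not taken from any of the papers:

```python
# Standard IDM acceleration law: a = a_max * (1 - (v/v0)^delta - (s*/s)^2),
# with desired gap s* = s0 + v*T + v*dv / (2*sqrt(a_max*b)).
import math

def idm_accel(v, dv, s, p):
    """IDM acceleration for speed v (m/s), approach rate dv, gap s, params p."""
    s_star = p["s0"] + v * p["T"] + v * dv / (2 * math.sqrt(p["a"] * p["b"]))
    return p["a"] * (1 - (v / p["v0"]) ** p["delta"] - (s_star / s) ** 2)

# Two hypothetical regimes with different parameter sets (illustrative values).
free_flow = {"v0": 33.3, "T": 1.0, "a": 1.5, "b": 2.0, "s0": 2.0, "delta": 4}
congested = {"v0": 33.3, "T": 2.0, "a": 0.8, "b": 1.5, "s0": 3.0, "delta": 4}

# Same driving situation, different regime parameters -> different accelerations.
v, dv, s = 20.0, 0.5, 30.0
for name, p in [("free-flow", free_flow), ("congested", congested)]:
    print(f"{name}: a = {idm_accel(v, dv, s, p):+.3f} m/s^2")
```

Switching the parameter set flips the sign of the commanded acceleration in this situation, which is the core mechanism the regime-switching framework exploits to capture distinct driving behaviors.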
This list is automatically generated from the titles and abstracts of the papers in this site.