VTD: Visual and Tactile Database for Driver State and Behavior Perception
- URL: http://arxiv.org/abs/2412.04888v1
- Date: Fri, 06 Dec 2024 09:31:40 GMT
- Title: VTD: Visual and Tactile Database for Driver State and Behavior Perception
- Authors: Jie Wang, Mobing Cai, Zhongpan Zhu, Hongjun Ding, Jiwei Yi, Aimin Du
- Abstract summary: We introduce a novel visual-tactile perception method to address subjective uncertainties in driver state and interaction behaviors.
A comprehensive dataset has been developed that encompasses multi-modal data under fatigue and distraction conditions.
- Abstract: In the domain of autonomous vehicles, the human-vehicle co-pilot system has garnered significant research attention. To address the subjective uncertainties in driver state and interaction behaviors, which are pivotal to the safety of Human-in-the-loop co-driving systems, we introduce a novel visual-tactile perception method. Utilizing a driving simulation platform, a comprehensive dataset has been developed that encompasses multi-modal data under fatigue and distraction conditions. The experimental setup integrates driving simulation with signal acquisition, yielding 600 minutes of fatigue detection data from 15 subjects and 102 takeover experiments with 17 drivers. The dataset, synchronized across modalities, serves as a robust resource for advancing cross-modal driver behavior perception algorithms.
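The abstract emphasizes that the dataset is synchronized across modalities. A minimal sketch of what such alignment typically involves, pairing each visual frame with the nearest tactile sample by timestamp (the function name, sampling rates, and signal names here are illustrative assumptions, not from the paper):

```python
from bisect import bisect_left

def align_to_frames(frame_ts, signal_ts, signal_values):
    """Nearest-neighbor alignment of a signal stream to frame timestamps.

    frame_ts, signal_ts: sorted timestamps in seconds.
    Returns one signal value per visual frame.
    """
    aligned = []
    for t in frame_ts:
        i = bisect_left(signal_ts, t)
        # pick whichever neighboring sample is closer in time
        if i == 0:
            j = 0
        elif i == len(signal_ts) or t - signal_ts[i - 1] <= signal_ts[i] - t:
            j = i - 1
        else:
            j = i
        aligned.append(signal_values[j])
    return aligned

# hypothetical: 30 Hz camera frames vs. 100 Hz tactile grip-force samples
frames = [0.0, 1 / 30, 2 / 30]
tactile_t = [k / 100 for k in range(10)]
tactile_v = [0.1 * k for k in range(10)]
print(align_to_frames(frames, tactile_t, tactile_v))
```

Nearest-neighbor pairing is only one choice; interpolation or windowed averaging would also be reasonable depending on the downstream model.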
Related papers
- Driver Assistance System Based on Multimodal Data Hazard Detection
This paper proposes a multimodal driver assistance detection system.
It integrates road condition video, driver facial video, and audio data to enhance incident recognition accuracy.
arXiv Detail & Related papers (2025-02-05T09:02:39Z)
- DISC: Dataset for Analyzing Driving Styles In Simulated Crashes for Mixed Autonomy
DISC (Driving Styles In Simulated Crashes) is one of the first datasets to capture driving styles in pre-crash scenarios for mixed autonomy analysis.
DISC includes over 8 classes of driving styles/behaviors from hundreds of drivers navigating a simulated vehicle.
Data was collected through a driver-centric study involving human drivers encountering twelve simulated accident scenarios.
arXiv Detail & Related papers (2025-01-28T15:45:25Z)
- Traffic and Safety Rule Compliance of Humans in Diverse Driving Situations
Analyzing human data is crucial for developing autonomous systems that replicate safe driving practices.
This paper presents a comparative evaluation of human compliance with traffic and safety rules across multiple trajectory prediction datasets.
arXiv Detail & Related papers (2024-11-04T09:21:00Z)
- Federated Learning for Drowsiness Detection in Connected Vehicles
Driver monitoring systems can assist in determining the driver's state, and drowsiness detection is one such application.
However, transmitting the data to a central machine for model training is impractical due to the large data size and privacy concerns.
We propose a federated learning framework for drowsiness detection within a vehicular network, leveraging the YawDD dataset.
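A minimal sketch of the federated-averaging step such a framework typically builds on: each vehicle trains locally and only model parameters are aggregated, weighted by local sample count (the weights and sample counts below are illustrative, not from the paper):

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model parameters (FedAvg-style).

    client_weights: one parameter vector (list of floats) per vehicle.
    client_sizes: number of local training samples per vehicle.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            # each vehicle contributes in proportion to its data size
            global_w[i] += (n / total) * w[i]
    return global_w

# hypothetical round: three vehicles share locally trained weights
print(fed_avg([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [10, 10, 20]))
# → [3.5, 4.5]
```

Only parameters leave the vehicle; the raw camera footage never does, which is the privacy argument the entry makes.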
arXiv Detail & Related papers (2024-05-06T09:39:13Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Augmented Driver Behavior Models for High-Fidelity Simulation Study of Crash Detection Algorithms
We present a simulation platform for a hybrid transportation system that includes both human-driven and automated vehicles.
We decompose the human driving task and offer a modular approach to simulating a large-scale traffic scenario.
We analyze a large driving dataset to extract expressive parameters that would best describe different driving characteristics.
arXiv Detail & Related papers (2022-08-10T19:59:16Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed to preserve privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Learning Interactive Driving Policies via Data-driven Simulation
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- Autonomous Vehicles that Alert Humans to Take-Over Controls: Modeling with Real-World Data
This study focuses on the development of contextual, semantically meaningful representations of the driver state.
We conduct a large-scale real-world controlled data study where participants are instructed to take-over control from an autonomous agent.
These take-over events are captured using multiple driver-facing cameras, which when labelled result in a dataset of control transitions and their corresponding take-over times (TOTs).
After augmenting this dataset, we develop and train TOT models that operate sequentially on low and mid-level features produced by computer vision algorithms operating on different driver-facing camera views.
arXiv Detail & Related papers (2021-04-23T09:16:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.