Driver Assistance System Based on Multimodal Data Hazard Detection
- URL: http://arxiv.org/abs/2502.03005v1
- Date: Wed, 05 Feb 2025 09:02:39 GMT
- Title: Driver Assistance System Based on Multimodal Data Hazard Detection
- Authors: Long Zhouxiang, Ovanes Petrosian
- Abstract summary: This paper proposes a multimodal driver assistance detection system.
It integrates road condition video, driver facial video, and audio data to enhance incident recognition accuracy.
- Abstract: Autonomous driving technology has advanced significantly, yet detecting driving anomalies remains a major challenge due to the long-tailed distribution of driving events. Existing methods primarily rely on single-modal road condition video data, which limits their ability to capture rare and unpredictable driving incidents. This paper proposes a multimodal driver assistance detection system that integrates road condition video, driver facial video, and audio data to enhance incident recognition accuracy. Our model employs an attention-based intermediate fusion strategy, enabling end-to-end learning without separate feature extraction. To support this approach, we develop a new three-modality dataset using a driving simulator. Experimental results demonstrate that our method effectively captures cross-modal correlations, reducing misjudgments and improving driving safety.
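The abstract's key design choice is attention-based intermediate fusion: per-modality features are combined with learned attention weights rather than concatenated after separate feature extraction. As an illustration only (the paper's actual architecture is not given here), the core mechanism can be sketched with attention-weighted pooling over three modality feature vectors; `attention_fusion` and `w_query` are hypothetical names, and in a real end-to-end system the query and features would be learned jointly:

```python
import numpy as np

def attention_fusion(features, w_query):
    """Fuse per-modality feature vectors via softmax attention weights.

    features: list of (d,) arrays, one per modality
              (e.g. road video, driver facial video, audio).
    w_query:  (d,) query vector scoring each modality's relevance.
    Returns the fused (d,) vector and the per-modality weights.
    """
    feats = np.stack(features)                 # (m, d)
    scores = feats @ w_query                   # (m,) relevance per modality
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights = weights / weights.sum()
    fused = weights @ feats                    # attention-weighted sum, (d,)
    return fused, weights
```

Because the weights are input-dependent, a modality that carries little signal for a given clip (e.g. silent audio) is down-weighted rather than diluting the fused representation.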
Related papers
- VTD: Visual and Tactile Database for Driver State and Behavior Perception [1.6277623188953556]
We introduce a novel visual-tactile perception method to address subjective uncertainties in driver state and interaction behaviors.
A comprehensive dataset has been developed that encompasses multi-modal data under fatigue and distraction conditions.
arXiv Detail & Related papers (2024-12-06T09:31:40Z)
- Efficient Mixture-of-Expert for Video-based Driver State and Physiological Multi-task Estimation in Conditional Autonomous Driving [12.765198683804094]
Road safety remains a critical challenge worldwide, with approximately 1.35 million fatalities annually attributed to traffic accidents.
We propose a novel multi-task DMS, termed VDMoE, which leverages RGB video input to monitor driver states non-invasively.
arXiv Detail & Related papers (2024-10-28T14:49:18Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Unsupervised Driving Event Discovery Based on Vehicle CAN-data [62.997667081978825]
This work presents a simultaneous clustering and segmentation approach for vehicle CAN-data that identifies common driving events in an unsupervised manner.
We evaluate our approach with a dataset of real Tesla Model 3 vehicle CAN-data and a two-hour driving session that we annotated with different driving events.
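The paper clusters and segments CAN signals jointly; as a much simpler toy illustration of the unsupervised idea (not the paper's algorithm), one can run plain k-means over fixed-length windows of a single CAN channel so that recurring driving events fall into the same cluster. All names here are hypothetical:

```python
import numpy as np

def segment_can_windows(signal, win=20, k=3, iters=30, seed=0):
    """Assign fixed-length windows of a 1-D CAN channel (e.g. speed)
    to k clusters via a minimal k-means loop, yielding unsupervised
    per-window event labels."""
    sig = np.asarray(signal, dtype=float)
    windows = np.stack([sig[i:i + win]
                        for i in range(0, len(sig) - win + 1, win)])
    rng = np.random.default_rng(seed)
    centers = windows[rng.choice(len(windows), size=k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance from every window to every center
        dists = ((windows[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = windows[labels == j].mean(axis=0)
    return labels
```

A real pipeline would operate on multiple CAN channels at once and infer segment boundaries rather than using fixed windows.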
arXiv Detail & Related papers (2023-01-12T13:10:47Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [50.936478241688114]
Driving experience is subjective and difficult to model, so existing methods lack a mechanism that simulates how drivers accumulate experience.
We propose a FeedBack Loop Network (FBLNet), which attempts to model the driving experience accumulation procedure.
Our model exhibits a solid advantage over existing methods, achieving an outstanding performance improvement on two driver attention benchmark datasets.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Driver Anomaly Detection: A Dataset and Contrastive Learning Approach [17.020790792750457]
We propose a contrastive learning approach to learn a metric to differentiate normal driving from anomalous driving.
Our method reaches 0.9673 AUC on the test set, demonstrating the effectiveness of the contrastive learning approach on the anomaly detection task.
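Whatever the contrastive training recipe, inference in such a setup typically reduces to measuring distance in the learned embedding space. A minimal sketch, assuming (as a hypothetical simplification, not the paper's exact method) a single "normal driving" template vector against which clip embeddings are scored:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity with a small epsilon to avoid division by zero."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def anomaly_score(embedding, normal_template):
    """Score a clip embedding: higher means further from normal driving."""
    return 1.0 - cosine_sim(embedding, normal_template)
```

Thresholding such a score over held-out clips is what an AUC figure like the one above evaluates: how well the score ranks anomalous clips above normal ones.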
arXiv Detail & Related papers (2020-09-30T13:23:21Z)
- Improved YOLOv3 Object Classification in Intelligent Transportation System [29.002873450422083]
An algorithm based on YOLOv3 is proposed to detect and classify vehicles, drivers, and people on the highway.
The model performs well and is robust to road occlusion, varied poses, and extreme lighting.
arXiv Detail & Related papers (2020-04-08T11:45:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.