Methods and Tools for Monitoring Driver's Behavior
- URL: http://arxiv.org/abs/2301.12269v2
- Date: Mon, 27 Mar 2023 18:50:07 GMT
- Title: Methods and Tools for Monitoring Driver's Behavior
- Authors: Muhammad Tanveer Jan, Sonia Moshfeghi, Joshua William Conniff, Jinwoo
Jang, Kwangsoo Yang, Jiannan Zhai, Monica Rosselli, David Newman, Ruth
Tappen, Borko Furht
- Abstract summary: In this paper, we propose an innovative architecture of in-vehicle sensors and present the methods and tools used to measure driver behavior.
The proposed architecture, together with its methods and tools, is used in our NIH project to monitor and identify older drivers with early dementia.
- Score: 0.1462730735143614
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In-vehicle sensing technology has gained tremendous attention due to its
ability to support major technological developments, such as connected vehicles
and self-driving cars. In-vehicle sensing data are invaluable data sources for
traffic management systems. In this paper, we propose an innovative architecture
of unobtrusive in-vehicle sensors and present the methods and tools used to
measure driver behavior. The proposed architecture, together with its methods
and tools, is used in our NIH project to monitor and identify older drivers
with early dementia.
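The abstract does not spell out the specific sensor channels or feature set here, so the following minimal Python sketch is purely illustrative: it shows how raw in-vehicle telemetry samples (with hypothetical fields such as `speed_kmh` and `brake_pedal`) could be aggregated into simple per-trip driver-behavior features such as harsh-braking counts and speed variability. It is not the authors' NIH-project implementation.

```python
"""Minimal, hypothetical sketch: aggregating in-vehicle telemetry into
per-trip driver-behavior features. Field names (timestamp, speed_kmh,
brake_pedal) are illustrative assumptions, not the paper's actual schema."""

from dataclasses import dataclass
from statistics import mean, pstdev
from typing import List


@dataclass
class TelemetrySample:
    timestamp: float      # seconds since trip start
    speed_kmh: float      # vehicle speed from an OBD-style channel
    brake_pedal: float    # 0.0 (released) .. 1.0 (fully pressed)


def trip_features(samples: List[TelemetrySample],
                  harsh_decel_kmh_per_s: float = 15.0) -> dict:
    """Compute a few coarse behavior features for one trip."""
    speeds = [s.speed_kmh for s in samples]
    harsh_brakes = 0
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.timestamp - prev.timestamp
        if dt <= 0:
            continue
        decel = (prev.speed_kmh - cur.speed_kmh) / dt
        if decel >= harsh_decel_kmh_per_s and cur.brake_pedal > 0.5:
            harsh_brakes += 1
    return {
        "mean_speed_kmh": mean(speeds) if speeds else 0.0,
        "speed_stddev_kmh": pstdev(speeds) if len(speeds) > 1 else 0.0,
        "harsh_brake_events": harsh_brakes,
        "duration_s": samples[-1].timestamp - samples[0].timestamp if samples else 0.0,
    }


if __name__ == "__main__":
    # Short synthetic trip: steady deceleration with the brake applied late.
    demo = [TelemetrySample(t * 0.5, max(0.0, 60.0 - 8.0 * t),
                            0.8 if t > 3 else 0.0) for t in range(8)]
    print(trip_features(demo))
```

Running the module prints the feature dictionary for a short synthetic trip; a real system would compute such features per trip and track them over time.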
Related papers
- Optimized Detection and Classification on GTRSB: Advancing Traffic Sign
Recognition with Convolutional Neural Networks [0.0]
This paper presents an innovative approach leveraging CNNs that achieves an accuracy of nearly 96%.
It highlights the potential for even greater precision through advanced localization techniques.
arXiv Detail & Related papers (2024-03-13T06:28:37Z)
- Improving automatic detection of driver fatigue and distraction using
machine learning [0.0]
Driver fatigue and distracted driving are important factors in traffic accidents.
We present techniques for simultaneously detecting fatigue and distracted driving behaviors using vision-based and machine learning-based approaches.
arXiv Detail & Related papers (2024-01-04T06:33:46Z)
- G-MEMP: Gaze-Enhanced Multimodal Ego-Motion Prediction in Driving [71.9040410238973]
We focus on inferring the ego trajectory of a driver's vehicle using their gaze data.
We then develop G-MEMP, a novel multimodal ego-trajectory prediction network that combines GPS and video input with gaze data.
The results show that G-MEMP significantly outperforms state-of-the-art methods on both benchmarks.
arXiv Detail & Related papers (2023-12-13T23:06:30Z)
- Applications of Computer Vision in Autonomous Vehicles: Methods, Challenges and Future Directions [2.693342141713236]
This paper reviews publications on computer vision and autonomous driving published during the last ten years.
In particular, we first investigate the development of autonomous driving systems and summarize the systems developed by major automotive manufacturers in different countries.
Then, computer vision applications for autonomous driving, such as depth estimation, object detection, lane detection, and traffic sign recognition, are comprehensively discussed.
arXiv Detail & Related papers (2023-11-15T16:41:18Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work surveys the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Because driving experience is non-objective, it is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet), which attempts to model the process by which driving experience accumulates.
Under the guidance of this incremental knowledge, our model fuses the CNN and Transformer features extracted from the input image to predict driver attention (a minimal, illustrative fusion sketch appears after this list).
arXiv Detail & Related papers (2022-12-05T08:25:09Z) - AI in Smart Cities: Challenges and approaches to enable road vehicle
automation and smart traffic control [56.73750387509709]
SCC envisions a data-centered society that aims to improve efficiency by automating and optimizing activities and utilities.
This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control.
arXiv Detail & Related papers (2021-04-07T14:31:08Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic and requires no human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Improved YOLOv3 Object Classification in Intelligent Transportation
System [29.002873450422083]
An algorithm based on YOLOv3 is proposed to detect and classify vehicles, drivers, and people on the highway.
The model performs well and is robust to road blocking, different attitudes, and extreme lighting conditions.
arXiv Detail & Related papers (2020-04-08T11:45:13Z)
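As referenced in the FBLNet entry above, the sketch below illustrates, in PyTorch, the general idea of fusing a CNN feature map with Transformer-encoded tokens to predict a spatial driver-attention map. It is not the FBLNet architecture or its feedback loop; the toy backbone, layer sizes, and concatenation-based fusion are assumptions made only for illustration.

```python
"""Minimal, illustrative sketch (PyTorch) of fusing a CNN feature map with
Transformer-encoded tokens to predict a driver-attention map. NOT FBLNet:
channel sizes, depths, and fusion-by-concatenation are arbitrary assumptions."""

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyAttentionPredictor(nn.Module):
    def __init__(self, channels: int = 64, nhead: int = 4, depth: int = 2):
        super().__init__()
        # Small CNN backbone: RGB image -> (B, channels, H/8, W/8) feature map.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer branch: treat each spatial location as a token.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Fuse the two branches (channel concatenation) and predict a
        # single-channel attention map.
        self.head = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feat = self.cnn(image)                       # (B, C, h, w)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)     # (B, h*w, C)
        tokens = self.transformer(tokens)            # add global context
        tfeat = tokens.transpose(1, 2).reshape(b, c, h, w)
        fused = torch.cat([feat, tfeat], dim=1)      # CNN + Transformer fusion
        logits = self.head(fused)                    # (B, 1, h, w)
        # Upsample to the input resolution and squash to [0, 1].
        return torch.sigmoid(
            F.interpolate(logits, size=image.shape[-2:], mode="bilinear",
                          align_corners=False))


if __name__ == "__main__":
    model = ToyAttentionPredictor()
    dummy = torch.randn(1, 3, 224, 224)              # one dashboard-camera frame
    print(model(dummy).shape)                        # torch.Size([1, 1, 224, 224])
```

Calling the module on a 224x224 frame returns a per-pixel attention map in [0, 1] at the input resolution; the real FBLNet additionally maintains the feedback-driven incremental knowledge described in its abstract.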
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.