FBLNet: FeedBack Loop Network for Driver Attention Prediction
- URL: http://arxiv.org/abs/2212.02096v2
- Date: Tue, 1 Aug 2023 02:08:12 GMT
- Title: FBLNet: FeedBack Loop Network for Driver Attention Prediction
- Authors: Yilong Chen, Zhixiong Nan, Tao Xiang
- Abstract summary: The non-objective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet), which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN and Transformer features extracted from the input image to predict driver attention.
- Score: 75.83518507463226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of predicting driver attention from the driving perspective is
gaining increasing research focus due to its significance for autonomous driving
and assisted driving systems. Driving experience is extremely important for safe
driving: a skilled driver can effortlessly anticipate oncoming danger (before it
becomes salient) based on that experience and quickly pay attention to the
corresponding zones. However, this non-objective driving experience is difficult
to model, so existing methods lack a mechanism that simulates the driver's
experience accumulation procedure and usually follow the technical route of
saliency prediction to predict driver attention. In this paper, we propose a
FeedBack Loop Network (FBLNet), which attempts to model the driving experience
accumulation procedure. Through repeated iterations, FBLNet generates incremental
knowledge that carries rich, historically accumulated, long-term temporal
information. The incremental knowledge in our model is analogous to the driving
experience of humans. Guided by this incremental knowledge, our model fuses the
CNN and Transformer features extracted from the input image to predict driver
attention. Our model exhibits a solid advantage over existing methods, achieving
an outstanding performance improvement on two driver attention benchmark datasets.
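The abstract describes a feedback loop in which accumulated "incremental knowledge" guides the fusion of CNN and Transformer features into an attention map. The PyTorch sketch below is a minimal illustration of that idea, not the authors' implementation: the module names, tensor shapes, and the exponential-moving-average update of the knowledge buffer are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class FeedbackLoopFusion(nn.Module):
    """Illustrative sketch of a feedback-loop fusion head (not the authors' code).

    An 'incremental knowledge' buffer is carried across iterations and used to
    modulate the fusion of CNN and Transformer features before predicting an
    attention map.
    """

    def __init__(self, channels: int = 256, momentum: float = 0.9):
        super().__init__()
        self.momentum = momentum
        # Persistent knowledge state, updated every training iteration (assumption).
        self.register_buffer("knowledge", torch.zeros(1, channels, 1, 1))
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.head = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),  # attention / saliency map in [0, 1]
        )

    def forward(self, cnn_feat: torch.Tensor, trans_feat: torch.Tensor) -> torch.Tensor:
        # cnn_feat, trans_feat: (B, C, H, W) features from the two branches.
        fused = self.fuse(torch.cat([cnn_feat, trans_feat], dim=1))

        # Feedback loop: blend the current batch's summary into the accumulated
        # knowledge (a simple exponential moving average in this sketch).
        batch_summary = fused.mean(dim=(0, 2, 3), keepdim=True)  # (1, C, 1, 1)
        if self.training:
            self.knowledge = self.momentum * self.knowledge + \
                (1 - self.momentum) * batch_summary.detach()

        # Let the accumulated knowledge guide the fused feature.
        guided = fused * torch.sigmoid(self.knowledge)
        return self.head(guided)


if __name__ == "__main__":
    model = FeedbackLoopFusion()
    cnn_feat = torch.randn(2, 256, 24, 40)
    trans_feat = torch.randn(2, 256, 24, 40)
    attn_map = model(cnn_feat, trans_feat)  # (2, 1, 24, 40)
    print(attn_map.shape)
```

In the paper's terms, the knowledge buffer plays the role of accumulated driving experience that persists across iterations, while the per-frame features play the role of the current scene observation.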
Related papers
- AHMF: Adaptive Hybrid-Memory-Fusion Model for Driver Attention Prediction [14.609639142688035]
This paper proposes an Adaptive Hybrid-Memory-Fusion (AHMF) driver attention prediction model to achieve more human-like predictions.
The model first encodes information about specific hazardous stimuli in the current scene to form working memories. Then, it adaptively retrieves similar situational experiences from the long-term memory for final prediction.
arXiv Detail & Related papers (2024-07-24T17:19:58Z)
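The AHMF entry above describes encoding the current scene's hazardous stimuli into working memory and adaptively retrieving similar experiences from long-term memory. The following is a hedged sketch of such a retrieval step, implemented here as cross-attention over a learned memory bank; the bank size, feature dimensions, and use of `nn.MultiheadAttention` are illustrative assumptions rather than the AHMF architecture.

```python
import torch
import torch.nn as nn


class MemoryRetrieval(nn.Module):
    """Illustrative working-memory / long-term-memory fusion (not the AHMF code)."""

    def __init__(self, dim: int = 256, memory_slots: int = 64):
        super().__init__()
        # Long-term memory bank of learned experience slots (assumption).
        self.long_term = nn.Parameter(torch.randn(memory_slots, dim) * 0.02)
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, working_memory: torch.Tensor) -> torch.Tensor:
        # working_memory: (B, N, dim) tokens encoding the current scene's stimuli.
        B = working_memory.size(0)
        memory = self.long_term.unsqueeze(0).expand(B, -1, -1)  # (B, S, dim)
        # Retrieve experience slots that resemble the current situation.
        retrieved, _ = self.attn(query=working_memory, key=memory, value=memory)
        # Fuse what was retrieved with what is currently observed.
        return self.out(torch.cat([working_memory, retrieved], dim=-1))


if __name__ == "__main__":
    tokens = torch.randn(2, 16, 256)
    fused = MemoryRetrieval()(tokens)  # (2, 16, 256)
    print(fused.shape)
```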
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Guiding Attention in End-to-End Driving Models [49.762868784033785]
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving.
We study how to guide the attention of these models to improve their driving quality by adding a loss term during training.
In contrast to previous work, our method does not require these salient semantic maps to be available during testing time.
arXiv Detail & Related papers (2024-04-30T23:18:51Z)
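The "Guiding Attention in End-to-End Driving Models" entry above adds a training-time loss term that pulls the model's attention toward salient semantic maps, with no such maps required at test time. Below is a minimal hedged sketch of what such an auxiliary term could look like; the KL-divergence formulation, function name, and weighting are assumptions for illustration, not the paper's exact loss.

```python
import torch


def attention_guidance_loss(pred_attn: torch.Tensor,
                            target_saliency: torch.Tensor,
                            eps: float = 1e-8) -> torch.Tensor:
    """Illustrative auxiliary loss pulling predicted attention toward a reference
    saliency map during training (not the paper's exact formulation).

    pred_attn, target_saliency: (B, 1, H, W), non-negative maps.
    """
    # Normalize both maps into spatial probability distributions.
    p = pred_attn.flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + eps)
    q = target_saliency.flatten(1)
    q = q / (q.sum(dim=1, keepdim=True) + eps)
    # KL(q || p): penalize attention mass missing from salient regions.
    return (q * (torch.log(q + eps) - torch.log(p + eps))).sum(dim=1).mean()


# During training the term is simply added to the main driving loss, e.g.:
#   loss = driving_loss + lambda_attn * attention_guidance_loss(attn, saliency)
# At test time no saliency map is needed, matching the summary above.
```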
- TransDARC: Transformer-based Driver Activity Recognition with Latent Space Feature Calibration [31.908276711898548]
We present a vision-based framework for recognizing secondary driver behaviours based on visual transformers and an augmented feature distribution calibration module.
Our framework consistently leads to better recognition rates, surpassing previous state-of-the-art results on all levels of the public Drive&Act benchmark.
arXiv Detail & Related papers (2022-03-02T08:14:06Z)
- Early Lane Change Prediction for Automated Driving Systems Using Multi-Task Attention-based Convolutional Neural Networks [8.60064151720158]
Lane change (LC) is one of the safety-critical manoeuvres in highway driving.
Reliably predicting such a manoeuvre in advance is critical for the safe and comfortable operation of automated driving systems.
This paper proposes a novel multi-task model to simultaneously estimate the likelihood of LC manoeuvres and the time-to-lane-change.
arXiv Detail & Related papers (2021-09-22T13:59:27Z)
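The lane-change entry above jointly estimates the likelihood of a lane-change manoeuvre and the time-to-lane-change. Below is a hedged sketch of a two-headed output on top of a shared feature vector; the class set, head sizes, and loss weighting are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn


class LaneChangeMultiTaskHead(nn.Module):
    """Illustrative multi-task head: LC class likelihoods + time-to-lane-change."""

    def __init__(self, feat_dim: int = 512, num_classes: int = 3):
        super().__init__()
        # Classes could be: left LC, right LC, lane keeping (assumption).
        self.cls_head = nn.Linear(feat_dim, num_classes)
        # Softplus keeps the predicted time-to-lane-change non-negative.
        self.ttlc_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Softplus())

    def forward(self, shared_feat: torch.Tensor):
        return self.cls_head(shared_feat), self.ttlc_head(shared_feat).squeeze(-1)


def multitask_loss(logits, ttlc_pred, cls_target, ttlc_target, alpha: float = 1.0):
    # Cross-entropy for the manoeuvre class, L1 for time-to-lane-change (seconds).
    ce = nn.functional.cross_entropy(logits, cls_target)
    reg = nn.functional.l1_loss(ttlc_pred, ttlc_target)
    return ce + alpha * reg


if __name__ == "__main__":
    head = LaneChangeMultiTaskHead()
    feats = torch.randn(4, 512)
    logits, ttlc = head(feats)
    loss = multitask_loss(logits, ttlc, torch.tensor([0, 1, 2, 1]), torch.rand(4) * 5.0)
    print(loss.item())
```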
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim to predict an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
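The safety-aware motion prediction entry above predicts an occupancy map rather than per-vehicle trajectories, so that unseen vehicles can also be accounted for. The sketch below shows a minimal per-cell occupancy head trained with binary cross-entropy; the bird's-eye-view feature input, grid size, and layer choices are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn


class OccupancyHead(nn.Module):
    """Illustrative head predicting a future occupancy grid (not the paper's model)."""

    def __init__(self, in_channels: int = 128):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # one logit per BEV grid cell
        )

    def forward(self, bev_feat: torch.Tensor) -> torch.Tensor:
        # bev_feat: (B, C, H, W) bird's-eye-view features of the scene.
        return self.decode(bev_feat)  # (B, 1, H, W) occupancy logits


if __name__ == "__main__":
    head = OccupancyHead()
    logits = head(torch.randn(2, 128, 100, 100))
    target = torch.randint(0, 2, (2, 1, 100, 100)).float()  # 1 = cell occupied
    loss = nn.functional.binary_cross_entropy_with_logits(logits, target)
    print(loss.item())
```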
- Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm [8.356765961526955]
This work extends driver drowsiness detection in vehicles using signals from a driver monitoring camera.
For this purpose, 35 features related to the driver's eye blinking behavior and head movements are extracted in driving simulator experiments.
A concluding analysis of the best performing feature sets yields valuable insights about the influence of drowsiness on the driver's blink behavior and head movements.
arXiv Detail & Related papers (2020-09-28T12:37:38Z)
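The drowsiness entry above classifies 35 hand-crafted eye-blink and head-movement features with a k-NN classifier. The scikit-learn sketch below illustrates that setup on placeholder data; apart from the feature count taken from the summary, the labels, preprocessing, and neighbour count are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 200 windows x 35 blink/head-movement features (count from the
# summary; values are random here), with binary labels 0 = alert, 1 = drowsy.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 35))
y = rng.integers(0, 2, size=200)

# Standardize features, then classify with k-NN as in the summary above.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))
```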
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for detecting driver intention based on both in-cabin and traffic scene videos.
Our framework achieves 83.98% accuracy and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
- Deep Learning with Attention Mechanism for Predicting Driver Intention at Intersection [2.1699196439348265]
The performance of the proposed approach is evaluated on a naturalistic driving dataset, and results show that it achieves high accuracy and outperforms other methods.
The proposed solution is promising for advanced driver assistance systems (ADAS) and as part of the active safety system of autonomous vehicles.
arXiv Detail & Related papers (2020-06-10T16:12:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.