CoCAtt: A Cognitive-Conditioned Driver Attention Dataset (Supplementary Material)
- URL: http://arxiv.org/abs/2207.04028v1
- Date: Fri, 8 Jul 2022 17:35:17 GMT
- Title: CoCAtt: A Cognitive-Conditioned Driver Attention Dataset (Supplementary Material)
- Authors: Yuan Shen, Niviru Wijayaratne, Pranav Sriram, Aamir Hasan, Peter Du, and Katherine Driggs-Campbell
- Abstract summary: Driver attention prediction can play an instrumental role in mitigating and preventing high-risk events.
We present a new driver attention dataset, CoCAtt.
CoCAtt is the largest and the most diverse driver attention dataset in terms of autonomy levels, eye tracker resolutions, and driving scenarios.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of driver attention prediction has drawn considerable interest among
researchers in robotics and the autonomous vehicle industry. Driver attention
prediction can play an instrumental role in mitigating and preventing high-risk
events, like collisions and casualties. However, existing driver attention
prediction models neglect the distraction state and intention of the driver,
which can significantly influence how they observe their surroundings. To
address these issues, we present a new driver attention dataset, CoCAtt
(Cognitive-Conditioned Attention). Unlike previous driver attention datasets,
CoCAtt includes per-frame annotations that describe the distraction state and
intention of the driver. In addition, the attention data in our dataset is
captured in both manual and autopilot modes using eye-tracking devices of
different resolutions. Our results demonstrate that incorporating the above two
driver states into attention modeling can improve the performance of driver
attention prediction. To the best of our knowledge, this work is the first to
provide autopilot attention data. Furthermore, CoCAtt is currently the largest
and the most diverse driver attention dataset in terms of autonomy levels, eye
tracker resolutions, and driving scenarios. CoCAtt is available for download at
https://cocatt-dataset.github.io.
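To make the per-frame annotation structure concrete, below is a minimal loading sketch. The CSV layout and field names (frame_id, distraction_state, intention, gaze_x, gaze_y) are assumptions for illustration, not the dataset's documented schema; consult the download page above for the actual format.

```python
import csv
from dataclasses import dataclass

@dataclass
class FrameAnnotation:
    frame_id: int
    distraction_state: str  # e.g., "distracted" / "attentive" (assumed labels)
    intention: str          # e.g., "turn_left", "lane_keep" (assumed labels)
    gaze_x: float           # normalized gaze coordinates (assumed fields)
    gaze_y: float

def load_annotations(path: str) -> list[FrameAnnotation]:
    """Read per-frame cognitive-state annotations from a CSV file.
    Column names are hypothetical; check the dataset documentation."""
    with open(path, newline="") as f:
        return [
            FrameAnnotation(int(r["frame_id"]), r["distraction_state"],
                            r["intention"], float(r["gaze_x"]), float(r["gaze_y"]))
            for r in csv.DictReader(f)
        ]

# Example: keep only frames where the driver was distracted
# frames = load_annotations("session_01.csv")
# distracted = [a for a in frames if a.distraction_state == "distracted"]
```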
Related papers
- Data Limitations for Modeling Top-Down Effects on Drivers' Attention (arXiv, 2024-04-12)
Driving is a visuomotor task, i.e., there is a connection between what drivers see and what they do.
Some models of drivers' gaze account for top-down effects of drivers' actions.
The majority learn only bottom-up correlations between human gaze and driving footage.
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising (arXiv, 2024-04-02)
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
- Cognitive Accident Prediction in Driving Scenes: A Multimodality Benchmark (arXiv, 2022-12-19)
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition, in the form of text descriptions of the visual observations and of driver attention, to facilitate model training.
CAP is built from an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and a driver-attention-guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
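As a rough illustration of the driver-attention-guided prediction idea in CAP, the sketch below weights scene features by a predicted attention map before classification. This is our own minimal PyTorch rendering under assumed shapes and layer choices, not the authors' module.

```python
import torch
import torch.nn as nn

class AttentionGuidedAccidentHead(nn.Module):
    """Toy head: weight scene features by a predicted driver-attention map,
    then classify accident risk. Shapes and names are illustrative only."""

    def __init__(self, feat_channels: int = 256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),      # pool attended features to a vector
            nn.Flatten(),
            nn.Linear(feat_channels, 1),  # accident / no-accident logit
        )

    def forward(self, scene_feats: torch.Tensor, attn_map: torch.Tensor) -> torch.Tensor:
        # scene_feats: (B, C, H, W); attn_map: (B, 1, H, W), values in [0, 1]
        attended = scene_feats * attn_map  # emphasize regions the driver attends to
        return self.classifier(attended)

# Example: features for 2 frames and matching attention maps
head = AttentionGuidedAccidentHead(256)
logits = head(torch.randn(2, 256, 14, 14), torch.rand(2, 1, 14, 14))
```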
- FBLNet: FeedBack Loop Network for Driver Attention Prediction (arXiv, 2022-12-05)
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
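To illustrate the CNN/Transformer feature fusion step described above, here is a minimal sketch using simple concatenation; the actual FBLNet fusion and its feedback loop over incremental knowledge are more elaborate than this.

```python
import torch
import torch.nn as nn

class CnnTransformerFusion(nn.Module):
    """Minimal sketch of fusing CNN and Transformer feature maps into a
    per-pixel attention map; the real FBLNet architecture differs."""

    def __init__(self, cnn_ch: int = 256, trans_ch: int = 256):
        super().__init__()
        self.fuse = nn.Conv2d(cnn_ch + trans_ch, 256, kernel_size=1)
        self.head = nn.Conv2d(256, 1, kernel_size=1)  # per-pixel attention logit

    def forward(self, cnn_feat: torch.Tensor, trans_feat: torch.Tensor) -> torch.Tensor:
        # cnn_feat, trans_feat: (B, C, H, W) maps from the two branches
        fused = torch.relu(self.fuse(torch.cat([cnn_feat, trans_feat], dim=1)))
        return torch.sigmoid(self.head(fused))  # (B, 1, H, W) attention map

fusion = CnnTransformerFusion()
attn = fusion(torch.randn(1, 256, 24, 24), torch.randn(1, 256, 24, 24))
```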
- Where and What: Driver Attention-based Object Detection (arXiv, 2022-04-26)
We bridge the gap between pixel-level and object-level attention prediction.
Our framework achieves competitive state-of-the-art performance at both the pixel level and the object level.
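One simple way to connect pixel-level and object-level attention is to pool a pixel-level saliency map over detected object boxes, as sketched below; the paper's actual bridging mechanism is learned, so treat this purely as an illustration.

```python
import numpy as np

def object_attention_scores(saliency: np.ndarray,
                            boxes: list[tuple[int, int, int, int]]) -> list[float]:
    """Score each detected object by the mean pixel-level saliency inside
    its (x1, y1, x2, y2) box. A hand-crafted stand-in for a learned
    pixel-to-object attention mapping."""
    scores = []
    for x1, y1, x2, y2 in boxes:
        region = saliency[y1:y2, x1:x2]
        scores.append(float(region.mean()) if region.size else 0.0)
    return scores

# Example: a random saliency map and two hypothetical detections
saliency = np.random.rand(480, 640)
print(object_attention_scores(saliency, [(10, 20, 110, 120), (300, 200, 400, 320)]))
```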
- CoCAtt: A Cognitive-Conditioned Driver Attention Dataset (arXiv, 2021-11-19)
Driver attention prediction can play an instrumental role in mitigating and preventing high-risk events.
We present a new driver attention dataset, CoCAtt.
CoCAtt is the largest and the most diverse driver attention dataset in terms of autonomy levels, eye tracker resolutions, and driving scenarios.
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving (arXiv, 2021-09-03)
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
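To make the occupancy-map target concrete, the sketch below rasterizes predicted vehicle footprints into a binary grid. The grid size, cell resolution, and coordinate conventions are our assumptions, not the paper's.

```python
import numpy as np

def rasterize_occupancy(vehicle_boxes, grid_hw=(100, 100), cell_m=0.5,
                        origin=(-25.0, 0.0)) -> np.ndarray:
    """Rasterize vehicle footprints, given as (x_min, y_min, x_max, y_max)
    boxes in meters, into a binary occupancy grid. Illustrative only."""
    grid = np.zeros(grid_hw, dtype=np.uint8)
    ox, oy = origin
    for x1, y1, x2, y2 in vehicle_boxes:
        c1, c2 = int((x1 - ox) / cell_m), int((x2 - ox) / cell_m)
        r1, r2 = int((y1 - oy) / cell_m), int((y2 - oy) / cell_m)
        grid[max(r1, 0):max(r2, 0), max(c1, 0):max(c2, 0)] = 1
    return grid

# Example: one predicted vehicle footprint ahead of the ego vehicle
occ = rasterize_occupancy([(-2.0, 5.0, 2.0, 9.5)])
print(occ.sum(), "cells occupied")
```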
- The Multimodal Driver Monitoring Database: A Naturalistic Corpus to Study Driver Attention (arXiv, 2020-12-23)
A smart vehicle should be able to monitor the actions and behaviors of the human driver to provide critical warnings or intervene when necessary.
Recent advancements in deep learning and computer vision have shown great promise in monitoring human behaviors and activities.
A vast amount of in-domain data is required to train models that perform well on driving-related prediction tasks.
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring (arXiv, 2020-06-20)
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
- When Do Drivers Concentrate? Attention-based Driver Behavior Modeling With Deep Reinforcement Learning (arXiv, 2020-02-26)
We propose an actor-critic method to approximate a driver's action according to observations and measure the driver's attention allocation.
Considering reaction time, we construct the attention mechanism in the actor network to capture temporal dependencies of consecutive observations.
We conduct experiments on real-world vehicle trajectory datasets and show that our proposed approach outperforms seven baseline algorithms in accuracy.
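A minimal sketch of an actor network with temporal attention over consecutive observations is shown below; the dimensions and layer choices are assumptions, since the summary does not specify the architecture.

```python
import torch
import torch.nn as nn

class TemporalAttentionActor(nn.Module):
    """Sketch of an actor that attends over the last T observations before
    emitting an action; the paper's exact design is not reproduced here."""

    def __init__(self, obs_dim: int = 16, hidden: int = 64, act_dim: int = 2):
        super().__init__()
        self.encode = nn.Linear(obs_dim, hidden)
        self.score = nn.Linear(hidden, 1)          # one attention logit per step
        self.policy = nn.Linear(hidden, act_dim)   # action head

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (B, T, obs_dim) consecutive observations
        h = torch.tanh(self.encode(obs_seq))        # (B, T, hidden)
        w = torch.softmax(self.score(h), dim=1)     # (B, T, 1) attention weights
        context = (w * h).sum(dim=1)                # weighted sum over time
        return self.policy(context)                 # (B, act_dim) action

actor = TemporalAttentionActor()
action = actor(torch.randn(4, 10, 16))  # batch of 4, horizon of 10 steps
```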
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.