From Spoken Thoughts to Automated Driving Commentary: Predicting and
Explaining Intelligent Vehicles' Actions
- URL: http://arxiv.org/abs/2204.09109v2
- Date: Sat, 4 Jun 2022 20:29:28 GMT
- Title: From Spoken Thoughts to Automated Driving Commentary: Predicting and
Explaining Intelligent Vehicles' Actions
- Authors: Daniel Omeiza, Sule Anjomshoae, Helena Webb, Marina Jirotka, Lars
Kunze
- Abstract summary: In commentary driving, drivers verbalise their observations, assessments and intentions.
By speaking out their thoughts, both learning and expert drivers are able to create a better understanding and awareness of their surroundings.
In the intelligent vehicle context, automated driving commentary can provide intelligible explanations about driving actions.
- Score: 10.557942353553859
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In commentary driving, drivers verbalise their observations, assessments and
intentions. By speaking out their thoughts, both learning and expert drivers
are able to create a better understanding and awareness of their surroundings.
In the intelligent vehicle context, automated driving commentary can provide
intelligible explanations about driving actions, thereby assisting a driver or
an end-user during driving operations in challenging and safety-critical
scenarios. In this paper, we conducted a field study in which we deployed a
research vehicle in an urban environment to obtain data. While collecting
sensor data of the vehicle's surroundings, we obtained driving commentary from
a driving instructor using the think-aloud protocol. We analysed the driving
commentary and uncovered an explanation style: the driver first announces his
observations, then announces his plans, and finally makes general remarks. He also
makes counterfactual comments. We successfully demonstrated how factual and
counterfactual natural language explanations that follow this style could be
automatically generated using a transparent tree-based approach. Generated
explanations for longitudinal actions (e.g., stop and move) were deemed more
intelligible and plausible by human judges compared to lateral actions, such as
lane changes. We discussed how our approach can be built on in the future to
realise more robust and effective explainability for driver assistance as well
as partial and conditional automation of driving functions.
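A minimal, illustrative sketch of the kind of pipeline the abstract describes is shown below. This is not the authors' implementation: the feature names, commentary templates, and toy training data are assumptions, and a standard scikit-learn decision tree stands in for the paper's transparent tree-based approach. The sketch walks a scene's decision path and turns it into a factual observation-intention sentence plus an optional counterfactual remark.

```python
# Minimal sketch (not the authors' code): generate factual and counterfactual
# driving commentary from a transparent decision tree, following the
# observation -> intention -> remark style reported in the study.
# Feature names, templates, and the toy data below are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["red_traffic_light", "pedestrian_ahead", "slow_lead_vehicle"]
ACTIONS = ["move", "stop"]  # longitudinal actions

# Toy training data: each row is a binary scene observation, each label an action.
X = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0],
    [1, 1, 0], [0, 0, 0], [1, 0, 1], [0, 1, 1],
])
y = np.array([1, 1, 1, 0, 1, 0, 1, 1])  # 1 = stop, 0 = move

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def commentary(scene: np.ndarray) -> str:
    """Turn the decision path for one scene into a commentary-style explanation."""
    action = ACTIONS[int(tree.predict(scene.reshape(1, -1))[0])]
    path_nodes = tree.decision_path(scene.reshape(1, -1)).indices
    tested = tree.tree_.feature  # feature tested at each node (-2 at leaves)

    observations, counterfactuals = [], []
    for node in path_nodes:
        f = tested[node]
        if f < 0:  # leaf node: no feature test
            continue
        name = FEATURES[f].replace("_", " ")
        if scene[f]:
            observations.append(f"there is a {name}")       # factual condition on the path
        else:
            counterfactuals.append(f"there were a {name}")   # condition that did not hold

    factual = " and ".join(observations) or "the road ahead is clear"
    text = f"I can see that {factual}, so I will {action}."   # observation + intention
    if counterfactuals:
        text += f" If {counterfactuals[0]}, I might act differently."  # counterfactual remark
    return text

print(commentary(np.array([1, 0, 0])))  # factual explanation for a "stop"
print(commentary(np.array([0, 0, 0])))  # "move" with a counterfactual remark
```

Because each explanation is read directly off a decision path, the generated sentences remain traceable to explicit rules, which is the transparency property the paper relies on.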
Related papers
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Applications of Computer Vision in Autonomous Vehicles: Methods, Challenges and Future Directions [2.693342141713236]
This paper reviews publications on computer vision and autonomous driving published over the last ten years.
In particular, we first investigate the development of autonomous driving systems and summarize those developed by major automotive manufacturers from different countries.
Then, a comprehensive overview of computer vision applications for autonomous driving, such as depth estimation, object detection, lane detection, and traffic sign recognition, is discussed.
arXiv Detail & Related papers (2023-11-15T16:41:18Z)
- Effects of Explanation Specificity on Passengers in Autonomous Driving [9.855051716204002]
We investigate the effects of natural language explanations' specificity on passengers in autonomous driving.
We generated auditory natural language explanations with different levels of specificity (abstract and specific).
Our results showed that both abstract and specific explanations had similar positive effects on passengers' perceived safety and the feeling of anxiety.
arXiv Detail & Related papers (2023-07-02T18:40:05Z)
- Data and Knowledge for Overtaking Scenarios in Autonomous Driving [0.0]
The overtaking maneuver is one of the most critical actions of driving.
Despite the amount of work available in the literature, only a few works handle overtaking maneuvers.
This work contributes in this area by presenting a new synthetic dataset whose focus is the overtaking maneuver.
arXiv Detail & Related papers (2023-05-30T21:27:05Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Driving experience is non-objective and therefore difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN and Transformer features extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Explainability of vision-based autonomous driving systems: Review and challenges [33.720369945541805]
The need for explainability is strong in driving, a safety-critical application.
This survey gathers contributions from several research fields, namely computer vision, deep learning, autonomous driving, and explainable AI (X-AI).
arXiv Detail & Related papers (2021-01-13T19:09:38Z)
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle [58.720142291102135]
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.
Existing outside-the-vehicle referencing methods focus on a static situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints.
We investigate the specific characteristics of each modality and the interaction between them when used in the task of referencing outside objects.
arXiv Detail & Related papers (2020-09-23T14:56:19Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and in a more human-like manner.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with these maps.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
- To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles [26.095533634997786]
We present a self-driving explanation dataset with first-person explanations and associated measures of the necessity for 1103 video clips.
Our research reveals that driver types and driving scenarios dictate whether an explanation is necessary.
In particular, people tend to agree on the necessity for near-crash events but hold different opinions on ordinary or anomalous driving situations.
arXiv Detail & Related papers (2020-06-21T00:38:24Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.