Assessing Drivers' Situation Awareness in Semi-Autonomous Vehicles: ASP
based Characterisations of Driving Dynamics for Modelling Scene
Interpretation and Projection
- URL: http://arxiv.org/abs/2308.15895v1
- Date: Wed, 30 Aug 2023 09:07:49 GMT
- Authors: Jakob Suchan (German Aerospace Center (DLR), Oldenburg, Germany),
Jan-Patrick Osterloh (German Aerospace Center (DLR), Oldenburg, Germany)
- Abstract summary: We present a framework to assess how aware the driver is of the situation and to provide human-centred assistance.
The framework is developed as a modular system within the Robot Operating System (ROS) with modules for sensing the environment and the driver state.
A particular focus of this paper is on an Answer Set Programming (ASP) based approach for modelling and reasoning about the driver's interpretation and projection of the scene.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semi-autonomous driving, as it is already available today and will eventually
become even more accessible, implies the need for driver and automation system
to reliably work together in order to ensure safe driving. A particular
challenge in this endeavour are situations in which the vehicle's automation is
no longer able to drive and is thus requesting the human to take over. In these
situations the driver has to quickly build awareness for the traffic situation
to be able to take over control and safely drive the car. Within this context
we present a software and hardware framework to assess how aware the driver is
about the situation and to provide human-centred assistance to help in building
situation awareness. The framework is developed as a modular system within the
Robot Operating System (ROS) with modules for sensing the environment and the
driver state, modelling the driver's situation awareness, and for guiding the
driver's attention using specialized Human Machine Interfaces (HMIs).
A particular focus of this paper is on an Answer Set Programming (ASP) based
approach for modelling and reasoning about the driver's interpretation and
projection of the scene. This is based on scene data, as well as eye-tracking
data reflecting the scene elements observed by the driver. We present the
overall application and discuss the role of semantic reasoning and modelling
cognitive functions based on logic programming in such applications.
Furthermore we present the ASP approach for interpretation and projection of
the driver's situation awareness and its integration within the overall system
in the context of a real-world use-case in simulated as well as in real
driving.
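The core idea of the ASP-based module, comparing the perceived scene against the scene elements the driver has actually observed via eye tracking, can be pictured with a minimal sketch. This is plain Python rather than the authors' ASP encoding, and all object names are hypothetical, purely for illustration.

```python
# Minimal sketch (not the authors' ASP code): scene elements reported by
# environment perception, minus elements the eye tracker says the driver
# has fixated, yields candidate awareness gaps for attention guidance.

def awareness_gaps(scene_objects, fixated_objects):
    """Return scene objects the driver has not (yet) observed."""
    return sorted(set(scene_objects) - set(fixated_objects))

# Hypothetical example: four scene elements, two of them fixated.
scene = ["car_ahead", "cyclist_right", "pedestrian_left", "traffic_light"]
gaze = ["car_ahead", "traffic_light"]

print(awareness_gaps(scene, gaze))  # → ['cyclist_right', 'pedestrian_left']
```

In the paper's setting this set-difference step is expressed as ASP rules, which additionally support projection, i.e. reasoning about how the situation will evolve, rather than a one-shot comparison.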
Related papers
- Incorporating Explanations into Human-Machine Interfaces for Trust and Situation Awareness in Autonomous Vehicles [4.1636282808157254]
We study the role of explainable AI and human-machine interface jointly in building trust in vehicle autonomy.
We present a situation awareness framework for calibrating users' trust in self-driving behavior.
arXiv Detail & Related papers (2024-04-10T23:02:13Z)
- Evaluating Driver Readiness in Conditionally Automated Vehicles from Eye-Tracking Data and Head Pose [3.637162892228131]
In SAE Level 3 or partly automated vehicles, the driver needs to be available and ready to intervene when necessary.
This article presents a comprehensive analysis of driver readiness assessment by combining head pose features and eye-tracking data.
A Bidirectional LSTM architecture, combining both feature sets, achieves a mean absolute error of 0.363 on the DMD dataset.
arXiv Detail & Related papers (2024-01-20T17:32:52Z)
- DME-Driver: Integrating Human Decision Logic and 3D Scene Perception in Autonomous Driving [65.04871316921327]
This paper introduces a new autonomous driving system that enhances the performance and reliability of autonomous driving systems.
DME-Driver utilizes a powerful vision language model as the decision-maker and a planning-oriented perception model as the control signal generator.
By leveraging this dataset, our model achieves high-precision planning accuracy through a logical thinking process.
arXiv Detail & Related papers (2024-01-08T03:06:02Z)
- Analyze Drivers' Intervention Behavior During Autonomous Driving -- A VR-incorporated Approach [2.7532019227694344]
This work sheds light on understanding human drivers' intervention behavior involved in the operation of autonomous vehicles.
Experiment environments were implemented where the virtual reality (VR) and traffic micro-simulation are integrated.
Performance indicators such as the probability of intervention, accident rates are defined and used to quantify and compare the risk levels.
arXiv Detail & Related papers (2023-12-04T06:36:57Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving [37.617793990547625]
This report provides an exhaustive evaluation of the latest state-of-the-art VLM, GPT-4V.
We explore the model's abilities to understand and reason about driving scenes, make decisions, and ultimately act in the capacity of a driver.
Our findings reveal that GPT-4V demonstrates superior performance in scene understanding and causal reasoning compared to existing autonomous systems.
arXiv Detail & Related papers (2023-11-09T12:58:37Z)
- Classification of Safety Driver Attention During Autonomous Vehicle Operation [11.33083039877258]
This paper introduces a dual-source approach integrating data from an infrared camera facing the vehicle operator and vehicle perception systems.
The proposed system effectively determines a metric for the attention levels of the vehicle operator, enabling interventions such as warnings or reducing autonomous functionality as appropriate.
arXiv Detail & Related papers (2023-10-17T22:04:42Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.