LADRI: LeArning-based Dynamic Risk Indicator in Automated Driving System
- URL: http://arxiv.org/abs/2401.02199v1
- Date: Thu, 4 Jan 2024 11:09:15 GMT
- Title: LADRI: LeArning-based Dynamic Risk Indicator in Automated Driving System
- Authors: Anil Ranjitbhai Patel and Peter Liggesmeyer
- Abstract summary: This paper introduces a framework for real-time Dynamic Risk Assessment in Automated Driving Systems (ADS).
Our proposed solution transcends these limitations, drawing upon Artificial Neural Networks (ANNs) to meticulously analyze and categorize risk dimensions.
- Score: 0.38073142980732994
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: As the horizon of intelligent transportation expands with the evolution of
Automated Driving Systems (ADS), ensuring paramount safety becomes more
imperative than ever. Traditional risk assessment methodologies, primarily
crafted for human-driven vehicles, struggle to adapt adequately to the
multifaceted, evolving environments of ADS. This paper introduces a framework
for real-time Dynamic Risk Assessment (DRA) in ADS, harnessing the potency of
Artificial Neural Networks (ANNs).
Our proposed solution transcends these limitations, drawing upon ANNs, a
cornerstone of deep learning, to meticulously analyze and categorize risk
dimensions using real-time On-board Sensor (OBS) data. This learning-centric
approach not only elevates the ADS's situational awareness but also enriches
its understanding of immediate operational contexts. By dissecting OBS data,
the system is empowered to pinpoint its current risk profile, thereby enhancing
safety prospects for onboard passengers and the broader traffic ecosystem.
Through this framework, we chart a new direction in risk assessment, bridging
the gaps left by conventional methods and enhancing the proficiency of ADS. By
utilizing ANNs, our methodology offers a fresh perspective, allowing ADS to
adeptly navigate and react to potential risk factors, ensuring safer and more
informed autonomous journeys.
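To make the idea concrete, here is a minimal sketch of the kind of learning-based risk indicator the abstract describes: a small feed-forward ANN that maps a vector of On-board Sensor (OBS) features to discrete risk classes. The feature set, layer sizes, and three-level risk discretization are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

RISK_CLASSES = ["low", "medium", "high"]  # assumed discretization of the risk dimension

class RiskIndicatorNet(nn.Module):
    """Toy ANN that maps an OBS feature vector to a categorical risk level."""
    def __init__(self, n_features: int = 6, n_classes: int = len(RISK_CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # raw logits; apply softmax for class probabilities

# Hypothetical OBS feature vector: [ego speed, headway distance, relative speed,
# lateral offset, yaw rate, time gap] -- chosen only for illustration.
obs = torch.tensor([[22.0, 15.0, -3.0, 0.2, 0.01, 0.7]])
model = RiskIndicatorNet()
probs = torch.softmax(model(obs), dim=-1)
print(dict(zip(RISK_CLASSES, probs.squeeze().tolist())))
```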
Related papers
- RiskNet: Interaction-Aware Risk Forecasting for Autonomous Driving in Long-Tail Scenarios [6.024186631622774]
RiskNet is an interaction-aware risk forecasting framework for autonomous vehicles.
It integrates deterministic risk modeling with probabilistic behavior prediction for comprehensive risk assessment.
It supports real-time, scenario-adaptive risk forecasting and demonstrates strong generalization across uncertain driving environments.
arXiv Detail & Related papers (2025-04-22T02:36:54Z)
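A rough sketch of how deterministic risk modeling can be blended with probabilistic behavior prediction, as the RiskNet summary above describes: a time-to-collision-based risk score is averaged over predicted trajectories weighted by their probabilities. The TTC proxy, trajectories, and thresholds below are assumptions for illustration, not RiskNet's actual formulation.

```python
import numpy as np

def min_ttc(ego_traj: np.ndarray, other_traj: np.ndarray, dt: float = 0.1) -> float:
    """Deterministic risk proxy: minimum time-to-collision over a predicted horizon.
    Trajectories are (T, 2) arrays of x/y positions sampled every dt seconds."""
    gaps = np.linalg.norm(ego_traj - other_traj, axis=1)
    closing = -(np.diff(gaps) / dt)                      # positive when the gap shrinks
    ttc = np.where(closing > 1e-3, gaps[:-1] / closing, np.inf)
    return float(ttc.min())

def expected_risk(ego_traj, predicted_trajs, probs, ttc_critical=3.0):
    """Expectation of a TTC-based risk score over probabilistic behavior predictions."""
    risks = []
    for traj in predicted_trajs:
        ttc = min_ttc(ego_traj, traj)
        risks.append(min(1.0, ttc_critical / max(ttc, 1e-6)))  # 1.0 = imminent collision
    return float(np.dot(probs, risks))

# Toy usage: one ego path, two hypothesised paths for another agent.
t = np.arange(0, 3, 0.1)
ego = np.stack([10.0 * t, np.zeros_like(t)], axis=1)
cut_in = np.stack([40.0 - 2.0 * t, 3.0 - 1.0 * t], axis=1)
keep_lane = np.stack([40.0 - 2.0 * t, np.full_like(t, 3.0)], axis=1)
print(expected_risk(ego, [cut_in, keep_lane], probs=[0.3, 0.7]))
```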
- INSIGHT: Enhancing Autonomous Driving Safety through Vision-Language Models on Context-Aware Hazard Detection and Edge Case Evaluation [7.362380225654904]
INSIGHT is a hierarchical vision-language model (VLM) framework designed to enhance hazard detection and edge-case evaluation.
By using multimodal data fusion, our approach integrates semantic and visual representations, enabling precise interpretation of driving scenarios.
Experimental results on the BDD100K dataset demonstrate a substantial improvement in the clarity and accuracy of hazard prediction over existing models.
arXiv Detail & Related papers (2025-02-01T01:43:53Z)
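The multimodal fusion idea above can be sketched as a simple late-fusion head that combines a visual embedding with a semantic (text) embedding to score hazards. Embedding sizes and the fusion layers are assumptions; INSIGHT's actual hierarchical VLM architecture is more involved.

```python
import torch
import torch.nn as nn

class FusionHazardHead(nn.Module):
    """Toy late-fusion head: concatenate a visual embedding and a text/semantic
    embedding, then score hazard probability. Dimensions are illustrative."""
    def __init__(self, vis_dim: int = 512, txt_dim: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, vis_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fuse(torch.cat([vis_emb, txt_emb], dim=-1)))

# In a real system the embeddings would come from a vision backbone and a
# language model; random tensors stand in for them here.
vis, txt = torch.randn(1, 512), torch.randn(1, 256)
print(FusionHazardHead()(vis, txt).item())  # hazard probability in [0, 1]
```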
- Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems.
We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting semantics.
We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z)
- Passenger hazard perception based on EEG signals for highly automated driving vehicles [23.322910031715583]
This study explores neural mechanisms in passenger-vehicle interactions, leading to the development of a Passenger Cognitive Model (PCM) and the Passenger EEG Decoding Strategy (PEDS).
Central to PEDS is a novel Convolutional Recurrent Neural Network (CRNN) that captures spatial and temporal EEG data patterns.
Our findings highlight the predictive power of pre-event EEG data, enhancing the detection of hazardous scenarios and offering a network-driven framework for safer autonomous vehicles.
arXiv Detail & Related papers (2024-08-29T07:32:30Z)
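A minimal sketch of a convolutional recurrent classifier over multi-channel EEG windows, in the spirit of the CRNN described above: convolutions capture local spatio-temporal patterns and a GRU summarizes the window before classification. Channel count, window length, and layer sizes are assumptions rather than the PEDS configuration.

```python
import torch
import torch.nn as nn

class ToyCRNN(nn.Module):
    """Convolution over time captures local EEG patterns; a GRU summarizes the
    window; a linear head predicts hazard vs. no hazard."""
    def __init__(self, n_channels: int = 32, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        h = self.conv(x).permute(0, 2, 1)   # -> (batch, time, features)
        _, last = self.gru(h)                # last hidden state: (1, batch, 64)
        return self.head(last.squeeze(0))    # -> (batch, n_classes) logits

eeg_window = torch.randn(4, 32, 250)  # e.g. 4 windows, 32 channels, 1 s at 250 Hz
print(ToyCRNN()(eeg_window).shape)    # torch.Size([4, 2])
```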
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Risk-aware Trajectory Prediction by Incorporating Spatio-temporal Traffic Interaction Analysis [3.7414278978078204]
We propose to obtain this interaction-risk information by analyzing locations and speeds that commonly correspond to high-risk interactions within the dataset.
We use it within training to generate better predictions in high-risk situations.
arXiv Detail & Related papers (2024-07-15T11:57:06Z)
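One way to read the training idea above is as a risk-weighted prediction loss: samples whose location and speed fall in empirically high-risk regions are up-weighted. The binning thresholds and weights below are invented for illustration and are not the paper's values.

```python
import numpy as np

def risk_weight(speed: float, gap: float) -> float:
    """Heuristic weight from (speed, gap) bins assumed to co-occur with
    high-risk interactions in the data; thresholds are illustrative."""
    if gap < 10.0 and speed > 15.0:
        return 3.0      # close and fast: strongly up-weight
    if gap < 20.0:
        return 1.5
    return 1.0

def weighted_ade(pred: np.ndarray, gt: np.ndarray, speeds, gaps) -> float:
    """Average displacement error where each sample's error is scaled by its risk weight.
    pred, gt: (N, T, 2) predicted and ground-truth trajectories."""
    errors = np.linalg.norm(pred - gt, axis=-1).mean(axis=1)       # (N,) per-sample ADE
    weights = np.array([risk_weight(s, g) for s, g in zip(speeds, gaps)])
    return float((weights * errors).sum() / weights.sum())

pred = np.zeros((2, 12, 2)); gt = np.ones((2, 12, 2))
print(weighted_ade(pred, gt, speeds=[20.0, 8.0], gaps=[8.0, 30.0]))
```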
- Risk Scenario Generation for Autonomous Driving Systems based on Causal Bayesian Networks [4.172581773205466]
We propose a novel paradigm shift towards utilizing Causal Bayesian Networks (CBN) for scenario generation in Autonomous Driving Systems (ADS).
CBN is built and validated using Maryland accident data, providing a deeper insight into the myriad factors influencing autonomous driving behaviors.
An end-to-end testing framework for ADS is established utilizing the CARLA simulator.
arXiv Detail & Related papers (2024-05-25T05:26:55Z)
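A toy causal Bayesian network with ancestral sampling illustrates the scenario-generation idea: sampling upstream causes (e.g., weather) propagates to downstream risk factors, and high-risk draws can seed simulator test cases. The variables and probabilities are invented for illustration; they are not the Maryland-data model.

```python
import random

def sample_scenario(rng: random.Random) -> dict:
    """Ancestral sampling through a toy causal chain:
    weather -> road friction -> lead-vehicle hard brake -> collision risk."""
    weather = rng.choices(["clear", "rain", "fog"], weights=[0.7, 0.2, 0.1])[0]
    low_friction = rng.random() < {"clear": 0.05, "rain": 0.5, "fog": 0.2}[weather]
    hard_brake = rng.random() < (0.3 if weather == "fog" else 0.1)
    p_risk = 0.02 + 0.4 * low_friction + 0.4 * hard_brake
    return {
        "weather": weather,
        "low_friction": low_friction,
        "lead_hard_brake": hard_brake,
        "high_risk": rng.random() < p_risk,
    }

rng = random.Random(0)
scenarios = [sample_scenario(rng) for _ in range(1000)]
# Keep only the high-risk draws as candidate test scenarios for, e.g., a simulator run.
print(sum(s["high_risk"] for s in scenarios), "high-risk scenarios out of 1000")
```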
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
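A heavily simplified sketch of the LLM-conditioned MPC idea: a language model proposes cost weights from a scene description, a short kinematic rollout scores candidate accelerations, and a safety verifier rejects rollouts that violate a minimum-gap constraint. The llm_suggest_weights stub, the kinematics, and all thresholds are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def llm_suggest_weights(context: str) -> dict:
    """Hypothetical stand-in for an LLM call that maps a scene description to
    planner cost weights (e.g., be more conservative near a school zone)."""
    conservative = "school" in context or "pedestrian" in context
    return {"progress": 0.5 if conservative else 1.0, "comfort": 0.3, "safety": 3.0}

def rollout_gap(accel: float, ego_v: float, gap: float, lead_v: float,
                horizon: float = 3.0, dt: float = 0.2) -> np.ndarray:
    """Constant-acceleration ego, constant-speed lead; return predicted gaps."""
    gaps = []
    for _ in range(int(horizon / dt)):
        ego_v = max(0.0, ego_v + accel * dt)
        gap += (lead_v - ego_v) * dt
        gaps.append(gap)
    return np.array(gaps)

def plan(context: str, ego_v: float, gap: float, lead_v: float) -> float:
    w = llm_suggest_weights(context)
    best_a, best_cost = 0.0, np.inf
    for a in np.linspace(-3.0, 2.0, 11):           # candidate accelerations
        gaps = rollout_gap(a, ego_v, gap, lead_v)
        if gaps.min() < 5.0:                        # safety verifier: reject unsafe rollouts
            continue
        cost = -w["progress"] * a + w["comfort"] * a**2 + w["safety"] / gaps.min()
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

print(plan("school zone ahead, pedestrians near crossing", ego_v=12.0, gap=25.0, lead_v=8.0))
```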
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
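The counterfactual safety margin above can be illustrated with a toy one-dimensional car-following scene: search for the smallest deviation from nominal behavior (here, extra braking delay) that leads to a collision. The scenario parameters and collision check are assumptions made for illustration.

```python
import numpy as np

def collides(brake_delay: float, ego_v: float = 20.0, gap: float = 30.0,
             decel: float = 8.0, dt: float = 0.05) -> bool:
    """Lead vehicle stops instantly at t=0; ego brakes after `brake_delay` seconds.
    Returns True if the ego reaches the (now stationary) lead vehicle."""
    pos, t, v = 0.0, 0.0, ego_v
    while v > 0.0:
        if t >= brake_delay:
            v = max(0.0, v - decel * dt)
        pos += v * dt
        t += dt
        if pos >= gap:
            return True
    return False

def counterfactual_safety_margin(max_delay: float = 5.0, step: float = 0.01) -> float:
    """Smallest deviation from nominal behavior (here: added braking delay)
    that results in a collision; a larger margin means safer nominal behavior."""
    for delay in np.arange(0.0, max_delay, step):
        if collides(float(delay)):
            return float(delay)
    return float("inf")

print(f"margin = {counterfactual_safety_margin():.2f} s of extra reaction time")
```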
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance on par with, or better than, fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Adaptive Risk Tendency: Nano Drone Navigation in Cluttered Environments with Distributional Reinforcement Learning [17.940958199767234]
We present a distributional reinforcement learning framework to learn adaptive risk tendency policies.
We show our algorithm can adjust its risk-sensitivity on the fly both in simulation and real-world experiments.
arXiv Detail & Related papers (2022-03-28T13:39:58Z)
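A small sketch of risk-sensitive action selection from a learned return distribution, in the spirit of the adaptive risk tendency work above: compute the CVaR over quantile estimates and adapt the risk level from an uncertainty proxy (the intra-quantile range), so higher uncertainty yields more risk-averse choices. The adaptation rule and numbers are assumptions, not the paper's exact scheme.

```python
import numpy as np

def cvar(quantiles: np.ndarray, alpha: float) -> float:
    """Conditional value-at-risk: mean of the worst alpha-fraction of quantile estimates."""
    q = np.sort(quantiles)
    k = max(1, int(np.ceil(alpha * len(q))))
    return float(q[:k].mean())

def adaptive_alpha(quantiles: np.ndarray, lo: float = 0.1, hi: float = 1.0) -> float:
    """Uncertainty proxy: intra-quantile range (75th - 25th percentile).
    More uncertainty -> lower alpha -> more risk-averse action selection."""
    iqr = np.percentile(quantiles, 75) - np.percentile(quantiles, 25)
    return float(np.clip(hi - iqr / 5.0, lo, hi))   # scale factor 5.0 is arbitrary

# Quantile estimates of return for two candidate actions (e.g., "fly fast" vs "slow down").
fast = np.array([-4.0, -1.0, 2.0, 5.0, 8.0])   # higher mean, high spread
slow = np.array([0.5, 1.0, 1.5, 2.0, 2.5])      # lower mean, low spread

alpha = adaptive_alpha(np.concatenate([fast, slow]))
scores = {"fast": cvar(fast, alpha), "slow": cvar(slow, alpha)}
print(alpha, max(scores, key=scores.get))        # with high spread, "slow" wins
```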
- I Know You Can't See Me: Dynamic Occlusion-Aware Safety Validation of Strategic Planners for Autonomous Vehicles Using Hypergames [12.244501203346566]
We develop a novel multi-agent dynamic occlusion risk measure for assessing situational risk.
We present a white-box, scenario-based, accelerated safety validation framework for assessing safety of strategic planners in AV.
arXiv Detail & Related papers (2021-09-20T19:38:14Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into the selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.