DeepQTest: Testing Autonomous Driving Systems with Reinforcement
Learning and Real-world Weather Data
- URL: http://arxiv.org/abs/2310.05170v1
- Date: Sun, 8 Oct 2023 13:59:43 GMT
- Title: DeepQTest: Testing Autonomous Driving Systems with Reinforcement
Learning and Real-world Weather Data
- Authors: Chengjie Lu, Tao Yue, Man Zhang, Shaukat Ali
- Abstract summary: We present a novel testing approach for autonomous driving systems (ADSs) using reinforcement learning (RL)
DeepQTest employs RL to learn environment configurations with a high chance of revealing abnormal ADS behaviors.
To ensure the realism of generated scenarios, DeepQTest defines a set of realistic constraints and introduces real-world weather conditions.
- Score: 12.106514312408228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous driving systems (ADSs) are capable of sensing the environment and
making driving decisions autonomously. These systems are safety-critical, and
testing them is one of the most important approaches to ensuring their safety.
However, due to the inherent complexity of ADSs and the high dimensionality of
their operating environment, the number of possible test scenarios for ADSs is
infinite. Moreover, the operating environment of ADSs is dynamic, continuously
evolving, and full of uncertainties, which calls for a testing approach that adapts
to the environment. In addition, existing ADS testing techniques have limited
effectiveness in ensuring the realism of test scenarios, especially the realism
of weather conditions and their changes over time. Recently, reinforcement
learning (RL) has demonstrated great potential in addressing challenging
problems, especially those requiring constant adaptations to dynamic
environments. To this end, we present DeepQTest, a novel ADS testing approach
that uses RL to learn environment configurations with a high chance of
revealing abnormal ADS behaviors. Specifically, DeepQTest employs Deep
Q-Learning and adopts three safety and comfort measures to construct the reward
functions. To ensure the realism of generated scenarios, DeepQTest defines a
set of realistic constraints and introduces real-world weather conditions into
the simulated environment. We employed three comparison baselines, i.e.,
random, greedy, and a state-of-the-art RL-based approach DeepCOllision, for
evaluating DeepQTest on an industrial-scale ADS. Evaluation results show that
DeepQTest demonstrated significantly better effectiveness in terms of
generating scenarios leading to collisions and ensuring scenario realism
compared with the baselines. In addition, among the three reward functions
implemented in DeepQTest, Time-To-Collision is recommended as the best design
according to our study.
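The abstract's core ideas, i.e., a Q-learning agent that picks environment reconfigurations as actions and is rewarded for pushing the ADS toward risky states, can be illustrated with a minimal sketch. This is not DeepQTest's implementation: the action set, the Time-To-Collision formula, the reward threshold of 7 s, and the tabular Q-function (the paper uses a deep Q-network) are all simplifying assumptions made here for illustration.

```python
import random
from collections import defaultdict

# Hypothetical discrete actions: each reconfigures the simulated
# environment (weather change, added actor, etc.). The concrete
# action set in DeepQTest differs; this is an illustrative stand-in.
ACTIONS = ["rain+", "fog+", "add_pedestrian", "slow_lead_vehicle"]

def time_to_collision(gap_m, closing_speed_mps):
    """Simplified longitudinal TTC: gap divided by closing speed.
    Returns inf when the ego vehicle is not closing on the obstacle."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def ttc_reward(ttc, threshold=7.0):
    """Reward the *tester* for risky states: 0 beyond a safety
    threshold, rising toward 1 as TTC approaches zero. The 7-second
    threshold is an assumed constant, not taken from the paper."""
    if ttc >= threshold:
        return 0.0
    return (threshold - ttc) / threshold

def choose_action(q_table, state, epsilon=0.1):
    """Epsilon-greedy selection over a tabular Q-function
    (a table keeps the sketch short; DeepQTest uses a deep Q-network)."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def q_update(q_table, state, action, reward, next_state,
             alpha=0.5, gamma=0.9):
    """Standard Q-learning backup toward reward + discounted max Q."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += alpha * (
        reward + gamma * best_next - q_table[(state, action)])
```

For example, an ego vehicle 50 m behind a lead vehicle with a 5 m/s closing speed gives a TTC of 10 s and a reward of 0, while a TTC of 3.5 s yields a reward of 0.5, so the agent is steered toward configurations that shrink the safety margin.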
Related papers
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios, including fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z)
- First-principles Based 3D Virtual Simulation Testing for Discovering SOTIF Corner Cases of Autonomous Driving [5.582213904792781]
This paper proposes a first-principles based sensor modeling and environment interaction scheme, and integrates it into CARLA simulator.
A meta-heuristic algorithm is designed based on several empirical insights, which guide both seed scenarios and mutations.
Under identical simulation setups, our algorithm discovers about four times as many corner cases as compared to state-of-the-art work.
arXiv Detail & Related papers (2024-01-22T12:02:32Z)
- DroneReqValidator: Facilitating High Fidelity Simulation Testing for Uncrewed Aerial Systems Developers [8.290044674335473]
sUAS developers aim to validate the reliability and safety of their applications through simulation testing.
The dynamic nature of the real-world environment causes unique software faults that may only be revealed through field testing.
DroneReqValidator (DRV) offers a comprehensive small Unmanned Aerial Vehicle (sUAV) simulation ecosystem.
arXiv Detail & Related papers (2023-07-31T22:13:57Z)
- Boundary State Generation for Testing and Improvement of Autonomous Driving Systems [8.670873561640903]
We present GENBO, a novel test generator for autonomous driving systems (ADSs) testing.
We use such boundary conditions to augment the initial training dataset and retrain the DNN model under test.
Our evaluation results show that the retrained model has, on average, up to 3x higher success rate on a separate set of evaluation tracks with respect to the original DNN model.
arXiv Detail & Related papers (2023-07-20T05:07:51Z)
- A Requirements-Driven Platform for Validating Field Operations of Small Uncrewed Aerial Vehicles [48.67061953896227]
DroneReqValidator (DRV) allows sUAS developers to define the operating context, configure multi-sUAS mission requirements, specify safety properties, and deploy their own custom sUAS applications in a high-fidelity 3D environment.
The DRV Monitoring system collects runtime data from sUAS and the environment, analyzes compliance with safety properties, and captures violations.
arXiv Detail & Related papers (2023-07-01T02:03:49Z)
- Many-Objective Reinforcement Learning for Online Testing of DNN-Enabled Systems [0.6690874707758508]
Deep Neural Networks (DNNs) have been widely used to perform real-world tasks in cyber-physical systems such as Autonomous Driving Systems (ADS).
Ensuring the correct behavior of such DNN-Enabled Systems (DES) is a crucial topic.
Online testing is one of the promising modes for testing such systems with their application environments (simulated or real) in a closed loop.
We present MORLOT, a novel online testing approach to address these challenges by combining Reinforcement Learning (RL) and many-objective search.
arXiv Detail & Related papers (2022-10-27T13:53:37Z)
- Learning to Walk Autonomously via Reset-Free Quality-Diversity [73.08073762433376]
Quality-Diversity algorithms can discover large and complex behavioural repertoires consisting of both diverse and high-performing skills.
Existing QD algorithms need large numbers of evaluations as well as episodic resets, which require manual human supervision and interventions.
This paper proposes Reset-Free Quality-Diversity optimization (RF-QD) as a step towards autonomous learning for robotics in open-ended environments.
arXiv Detail & Related papers (2022-04-07T14:07:51Z)
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider value-based and policy-gradient Deep Reinforcement Learning (DRL) algorithms.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z)
- A Survey on Scenario-Based Testing for Automated Driving Systems in High-Fidelity Simulation [26.10081199009559]
Testing the system on the road is the approach closest to the real world and the most desirable, but it is incredibly costly.
A popular alternative is to evaluate an ADS's performance in some well-designed challenging scenarios, a.k.a. scenario-based testing.
High-fidelity simulators have been widely used in this setting to maximize flexibility and convenience in testing what-if scenarios.
arXiv Detail & Related papers (2021-12-02T03:41:33Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.