Reinforcement learning informed evolutionary search for autonomous
systems testing
- URL: http://arxiv.org/abs/2308.12762v1
- Date: Thu, 24 Aug 2023 13:11:07 GMT
- Title: Reinforcement learning informed evolutionary search for autonomous
systems testing
- Authors: Dmytro Humeniuk, Foutse Khomh, Giuliano Antoniol
- Abstract summary: We propose augmenting the evolutionary search (ES) with a reinforcement learning (RL) agent trained using surrogate rewards derived from domain knowledge.
In our approach, known as RIGAA, we first train an RL agent to learn useful constraints of the problem and then use it to produce a certain part of the initial population of the search algorithm.
We evaluate RIGAA on two case studies: maze generation for an autonomous ant robot and road topology generation for an autonomous vehicle lane keeping assist system.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Evolutionary search-based techniques are commonly used for testing autonomous
robotic systems. However, these approaches often rely on computationally
expensive simulator-based models for test scenario evaluation. To improve the
computational efficiency of the search-based testing, we propose augmenting the
evolutionary search (ES) with a reinforcement learning (RL) agent trained using
surrogate rewards derived from domain knowledge. In our approach, known as
RIGAA (Reinforcement learning Informed Genetic Algorithm for Autonomous systems
testing), we first train an RL agent to learn useful constraints of the problem
and then use it to produce a certain part of the initial population of the
search algorithm. By incorporating an RL agent into the search process, we aim
to guide the algorithm towards promising regions of the search space from the
start, enabling more efficient exploration of the solution space. We evaluate
RIGAA on two case studies: maze generation for an autonomous ant robot and road
topology generation for an autonomous vehicle lane keeping assist system. In
both case studies, RIGAA converges faster to fitter solutions and produces a
better test suite (in terms of average test scenario fitness and diversity).
RIGAA also outperforms the state-of-the-art tools for vehicle lane keeping
assist system testing, such as AmbieGen and Frenetic.
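To make the seeding idea concrete, here is a minimal sketch (not the authors' implementation) of how RL-generated individuals could be mixed into the initial population of a genetic algorithm. The functions `rl_agent_generate`, `random_individual`, `fitness`, `crossover`, and `mutate` are hypothetical placeholders for the problem-specific pieces (e.g., maze or road-topology encodings).

```python
import random
from typing import Callable, List

def seeded_population(pop_size: int, rl_fraction: float,
                      rl_agent_generate: Callable[[], object],
                      random_individual: Callable[[], object]) -> List[object]:
    """Build an initial population: part RL-seeded, part random (RIGAA-style seeding sketch)."""
    n_rl = int(pop_size * rl_fraction)
    population = [rl_agent_generate() for _ in range(n_rl)]               # RL agent proposes constrained scenarios
    population += [random_individual() for _ in range(pop_size - n_rl)]   # the rest stays purely random
    random.shuffle(population)
    return population

def evolve(population, fitness, crossover, mutate, generations=50, elite=2):
    """Minimal generational GA loop over the seeded population (assumes pop_size >= 4)."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        next_gen = ranked[:elite]                                         # elitism: keep the best scenarios
        parents = ranked[: max(2, len(ranked) // 2)]                      # truncation selection
        while len(next_gen) < len(population):
            p1, p2 = random.sample(parents, 2)
            next_gen.append(mutate(crossover(p1, p2)))
        population = next_gen
    return max(population, key=fitness)
```

The key design choice, reflected in `rl_fraction`, is that only a part of the initial population comes from the RL agent, so the search keeps the diversity of random individuals while starting closer to promising regions of the search space.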
Related papers
- ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning [78.42927884000673]
ExACT is an approach to combine test-time search and self-learning to build o1-like models for agentic applications.
We first introduce Reflective Monte Carlo Tree Search (R-MCTS), a novel test time algorithm designed to enhance AI agents' ability to explore decision space on the fly.
Next, we introduce Exploratory Learning, a novel learning strategy to teach agents to search at inference time without relying on any external search algorithms.
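The blurb does not spell out R-MCTS itself; as a rough point of reference only, the sketch below shows a generic (non-reflective) MCTS loop with UCT selection and random rollouts. The hooks `actions_fn`, `step_fn`, `reward_fn`, and `is_terminal` are hypothetical environment callbacks, not the paper's API.

```python
import math
import random

class Node:
    """One node in the search tree: a state plus visit statistics."""
    def __init__(self, state, parent=None, action=None, actions=()):
        self.state, self.parent, self.action = state, parent, action
        self.children = []
        self.untried = list(actions)
        self.visits, self.value = 0, 0.0

def uct_child(node, c=1.4):
    """Select the child with the highest UCT score."""
    return max(node.children,
               key=lambda ch: ch.value / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, actions_fn, step_fn, reward_fn, is_terminal, n_iter=200, horizon=20):
    """Generic MCTS: select, expand, simulate with a random rollout, backpropagate."""
    root = Node(root_state, actions=actions_fn(root_state))
    for _ in range(n_iter):
        node = root
        # 1. Selection: walk down fully expanded nodes via UCT.
        while not node.untried and node.children:
            node = uct_child(node)
        # 2. Expansion: attach one child for an untried action.
        if node.untried:
            action = node.untried.pop()
            state = step_fn(node.state, action)
            child = Node(state, parent=node, action=action, actions=actions_fn(state))
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout up to a fixed horizon.
        state = node.state
        for _ in range(horizon):
            if is_terminal(state):
                break
            state = step_fn(state, random.choice(actions_fn(state)))
        ret = reward_fn(state)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += ret
            node = node.parent
    # Recommend the most-visited root action.
    return max(root.children, key=lambda ch: ch.visits).action if root.children else None
```

R-MCTS additionally uses reflection to improve exploration on the fly, and Exploratory Learning trains the agent to search without an external search algorithm; neither is modelled in this plain skeleton.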
arXiv Detail & Related papers (2024-10-02T21:42:35Z)
- Multi-Agent Reinforcement Learning for Autonomous Driving: A Survey [14.73689900685646]
Reinforcement Learning (RL) is a potent tool for sequential decision-making and has achieved performance surpassing human capabilities.
As the extension of RL to the multi-agent domain, multi-agent RL (MARL) not only needs to learn the control policy but also must account for interactions with all other agents in the environment.
Simulators are crucial for obtaining realistic data, which is the foundation of RL.
arXiv Detail & Related papers (2024-08-19T03:31:20Z)
- Research on an Autonomous UAV Search and Rescue System Based on the Improved [1.3399503792039942]
This paper proposes an autonomous search and rescue UAV system based on an EGO-Planner algorithm.
It applies inverse motor backstepping to enhance the UAV's overall flight efficiency and the miniaturization of the whole machine.
At the same time, the system introduces the EGO-Planner planning tool, which is optimized with a bidirectional A* algorithm and an object detection algorithm.
arXiv Detail & Related papers (2024-06-01T17:25:29Z)
- Reinforcement Learning for Online Testing of Autonomous Driving Systems: a Replication and Extension Study [15.949975158039452]
In a recent study, Reinforcement Learning has been shown to outperform alternative techniques for online testing of Deep Neural Network-enabled systems.
This work is a replication and extension of that empirical study.
Results show that our new RL agent is able to converge to an effective policy that outperforms random testing.
arXiv Detail & Related papers (2024-03-20T16:39:17Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB board assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, extreme robustness even under perturbations, and exhibit emergent robustness recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Modelling, Positioning, and Deep Reinforcement Learning Path Tracking Control of Scaled Robotic Vehicles: Design and Experimental Validation [3.807917169053206]
Scaled robotic cars are commonly equipped with a hierarchical control architecture that includes tasks dedicated to vehicle state estimation and control.
This paper covers both aspects by proposing (i) a federated extended Kalman filter (FEKF) and (ii) a novel deep reinforcement learning (DRL) path tracking controller trained via an expert demonstrator.
The experimentally validated model is used for (i) supporting the design of the FEKF and (ii) serving as a digital twin for training the proposed DRL-based path tracking algorithm.
arXiv Detail & Related papers (2024-01-10T14:40:53Z)
- A Reinforcement Learning-assisted Genetic Programming Algorithm for Team Formation Problem Considering Person-Job Matching [70.28786574064694]
A reinforcement learning-assisted genetic programming algorithm (RL-GP) is proposed to enhance the quality of solutions.
The hyper-heuristic rules obtained through efficient learning can be utilized as decision-making aids when forming project teams.
arXiv Detail & Related papers (2023-04-08T14:32:12Z)
- Reward Uncertainty for Exploration in Preference-based Reinforcement Learning [88.34958680436552]
We present an exploration method specifically for preference-based reinforcement learning algorithms.
Our main idea is to design an intrinsic reward by measuring novelty based on the learned reward.
Our experiments show that the exploration bonus derived from uncertainty in the learned reward improves both the feedback- and sample-efficiency of preference-based RL algorithms.
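As a loose illustration of this idea (not the paper's exact formulation), the sketch below treats disagreement across an ensemble of learned reward models as an intrinsic exploration bonus; `reward_ensemble` is assumed to be a list of callables mapping a state-action pair to a scalar reward estimate.

```python
import numpy as np

def uncertainty_bonus(reward_ensemble, state_action) -> float:
    """Intrinsic bonus: standard deviation (disagreement) of the learned reward ensemble."""
    preds = np.array([model(state_action) for model in reward_ensemble])
    return float(preds.std())

def shaped_reward(reward_ensemble, state_action, beta: float = 0.1) -> float:
    """Training reward: mean learned reward plus a scaled uncertainty bonus (beta trades off exploration)."""
    preds = np.array([model(state_action) for model in reward_ensemble])
    return float(preds.mean() + beta * preds.std())
```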
arXiv Detail & Related papers (2022-05-24T23:22:10Z)
- Efficient and Effective Generation of Test Cases for Pedestrian Detection -- Search-based Software Testing of Baidu Apollo in SVL [14.482670650074885]
This paper presents a study on testing the pedestrian detection and emergency braking system of the Baidu Apollo autonomous driving platform within the SVL simulator.
We propose an evolutionary automated test generation technique that generates failure-revealing scenarios for Apollo in the SVL environment.
In order to demonstrate the efficiency and effectiveness of our approach, we also report the results from a baseline random generation technique.
arXiv Detail & Related papers (2021-09-16T13:11:53Z)
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)