Many-Objective Reinforcement Learning for Online Testing of DNN-Enabled Systems
- URL: http://arxiv.org/abs/2210.15432v1
- Date: Thu, 27 Oct 2022 13:53:37 GMT
- Title: Many-Objective Reinforcement Learning for Online Testing of DNN-Enabled Systems
- Authors: Fitash Ul Haq, Donghwan Shin, Lionel Briand
- Abstract summary: Deep Neural Networks (DNNs) have been widely used to perform real-world tasks in cyber-physical systems such as Autonomous Driving Systems (ADS).
Ensuring the correct behavior of such DNN-Enabled Systems (DES) is a crucial topic.
Online testing is one of the promising modes for testing such systems with their application environments (simulated or real) in a closed loop.
We present MORLOT, a novel online testing approach to address these challenges by combining Reinforcement Learning (RL) and many-objective search.
- Score: 0.6690874707758508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have been widely used to perform real-world tasks
in cyber-physical systems such as Autonomous Driving Systems (ADS). Ensuring the
correct behavior of such DNN-Enabled Systems (DES) is a crucial topic. Online
testing is one of the promising modes for testing such systems with their
application environments (simulated or real) in a closed loop taking into
account the continuous interaction between the systems and their environments.
However, the environmental variables (e.g., lighting conditions) that might
change during the systems' operation in the real world, causing the DES to
violate requirements (safety, functional), are often kept constant during the
execution of an online test scenario due to two major challenges: (1) the
space of all possible scenarios to explore would become even larger if they
changed and (2) there are typically many requirements to test simultaneously.
In this paper, we present MORLOT (Many-Objective Reinforcement Learning for
Online Testing), a novel online testing approach to address these challenges by
combining Reinforcement Learning (RL) and many-objective search. MORLOT
leverages RL to incrementally generate sequences of environmental changes while
relying on many-objective search to determine the changes so that they are more
likely to achieve any of the uncovered objectives. We empirically evaluate
MORLOT using CARLA, a high-fidelity simulator widely used for autonomous
driving research, integrated with Transfuser, a DNN-enabled ADS for end-to-end
driving. The evaluation results show that MORLOT is significantly more
effective and efficient than alternatives with a large effect size. In other
words, MORLOT is a good option to test DES with dynamically changing
environments while accounting for multiple safety requirements.
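The sketch below gives a rough, purely illustrative picture of the general idea of steering an RL agent toward uncovered test objectives. It assumes a toy closed-loop simulator, tabular Q-learning, and synthetic per-objective distances; it is not MORLOT's actual algorithm and does not use CARLA or Transfuser.
```python
# Illustrative sketch only: a tabular Q-learning agent proposes one environmental
# change per step, and a simple many-objective criterion rewards progress toward
# whichever safety objective is still uncovered. All environment dynamics,
# objective distances, and reward shaping here are toy assumptions.
import random
from collections import defaultdict

ACTIONS = ["fog+", "fog-", "rain+", "rain-", "light+", "light-"]  # environmental changes
N_OBJECTIVES = 3          # e.g., collision, lane departure, stop-sign violation
EPISODE_LEN = 20

def simulate_step(state, action):
    """Toy stand-in for one closed-loop simulation step: apply the chosen
    environmental change, then return per-objective distances to violation
    (smaller means closer to violating that requirement)."""
    dim = {"fog": 0, "rain": 1, "light": 2}[action[:-1]]
    delta = 1 if action.endswith("+") else -1
    levels = list(state)
    levels[dim] = min(9, max(0, levels[dim] + delta))
    distances = [random.random() for _ in range(N_OBJECTIVES)]
    return tuple(levels), distances

q_table = defaultdict(lambda: defaultdict(float))   # Q[state][action]
uncovered = set(range(N_OBJECTIVES))                # objectives not yet violated
alpha, gamma, epsilon = 0.5, 0.9, 0.2

state = (5, 5, 5)                                   # (fog, rain, light) levels
for _ in range(EPISODE_LEN):
    if not uncovered:
        break
    # RL part: epsilon-greedy choice of the next environmental change.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q_table[state][a])
    next_state, distances = simulate_step(state, action)
    # Many-objective part: reward is the progress toward the closest
    # still-uncovered objective.
    best = min(distances[o] for o in uncovered)
    reward = 1.0 - best
    for o in list(uncovered):
        if distances[o] < 0.05:                     # objective covered (violated)
            uncovered.discard(o)
    # Standard Q-learning update.
    q_table[state][action] += alpha * (
        reward + gamma * max(q_table[next_state][a] for a in ACTIONS)
        - q_table[state][action]
    )
    state = next_state

print("objectives still uncovered:", uncovered)
```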
Related papers
- Evaluating the Effectiveness of Video Anomaly Detection in the Wild: Online Learning and Inference for Real-world Deployment [2.1374208474242815]
Video Anomaly Detection (VAD) identifies unusual activities in video streams, a key technology with broad applications ranging from surveillance to healthcare.
Tackling VAD in real-life settings poses significant challenges due to the dynamic nature of human actions, environmental variations, and domain shifts.
Online learning is a potential strategy to mitigate this issue by allowing models to adapt to new information continuously.
arXiv Detail & Related papers (2024-04-29T14:47:32Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Fast-Slow Test-Time Adaptation for Online Vision-and-Language Navigation [67.18144414660681]
We propose a Fast-Slow Test-Time Adaptation (FSTTA) approach for online Vision-and-Language Navigation (VLN).
Our method obtains impressive performance gains on four popular benchmarks.
arXiv Detail & Related papers (2023-11-22T07:47:39Z)
- GARL: Genetic Algorithm-Augmented Reinforcement Learning to Detect Violations in Marker-Based Autonomous Landing Systems [0.7461036096470347]
Traditional offline testing methods miss violation cases caused by dynamic objects like people and animals.
Online testing methods require extensive training time, which is impractical with limited budgets.
We introduce GARL, a framework combining a genetic algorithm (GA) and reinforcement learning (RL) for efficient generation of diverse and real landing system failures.
arXiv Detail & Related papers (2023-10-11T10:54:01Z)
- DeepQTest: Testing Autonomous Driving Systems with Reinforcement Learning and Real-world Weather Data [12.106514312408228]
We present a novel testing approach for autonomous driving systems (ADSs) using reinforcement learning (RL).
DeepQTest employs RL to learn environment configurations with a high chance of revealing abnormal ADS behaviors.
To ensure the realism of generated scenarios, DeepQTest defines a set of realistic constraints and introduces real-world weather conditions.
arXiv Detail & Related papers (2023-10-08T13:59:43Z)
- Self-Sustaining Multiple Access with Continual Deep Reinforcement Learning for Dynamic Metaverse Applications [17.436875530809946]
The Metaverse is a new paradigm that aims to create a virtual environment consisting of numerous worlds, each of which will offer a different set of services.
To deal with such a dynamic and complex scenario, one potential approach is to adopt self-sustaining strategies.
This paper investigates the problem of multiple access in multi-channel environments to maximize the throughput of the intelligent agent.
arXiv Detail & Related papers (2023-09-18T22:02:47Z)
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models of increasing complexity and associate each of them with a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved using a reinforcement learning policy network (see the sketch after this list).
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
- Towards Automated Safety Coverage and Testing for Autonomous Vehicles with Reinforcement Learning [0.3683202928838613]
Validation puts the autonomous vehicle system to the test in scenarios or situations that the system would likely encounter in everyday driving.
We propose using reinforcement learning (RL) to generate failure examples and unexpected traffic situations for the AV software implementation.
arXiv Detail & Related papers (2020-05-22T19:00:38Z)
- From Simulation to Real World Maneuver Execution using Deep Reinforcement Learning [69.23334811890919]
Deep Reinforcement Learning has proved to be able to solve many control tasks in different fields, but the behavior of these systems is not always as expected when deployed in real-world scenarios.
This is mainly due to the lack of domain adaptation between simulated and real-world data together with the absence of distinction between train and test datasets.
We present a system based on multiple environments in which agents are trained simultaneously, evaluating the behavior of the model in different scenarios.
arXiv Detail & Related papers (2020-05-13T14:22:20Z)
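Sketch referenced from the Adaptive Anomaly Detection entry above: a toy picture of adaptively selecting among anomaly detectors of increasing complexity across HEC layers. It is purely illustrative, uses synthetic contexts and rewards, and substitutes a simple epsilon-greedy linear contextual bandit for the paper's reinforcement learning policy network.
```python
# Illustrative sketch only: contextual-bandit selection among anomaly detection
# models of increasing complexity (one per HEC layer). Contexts, rewards, and
# the epsilon-greedy linear estimator are toy assumptions, not the cited method.
import random

N_LAYERS = 3          # HEC layers: device, edge, cloud (increasingly complex models)
CONTEXT_DIM = 4       # e.g., summary features of the incoming IoT data
EPSILON = 0.1
LEARNING_RATE = 0.05

# One linear reward estimator per arm (per candidate detector/layer).
weights = [[0.0] * CONTEXT_DIM for _ in range(N_LAYERS)]

def predict(arm, context):
    return sum(w * x for w, x in zip(weights[arm], context))

def choose_layer(context):
    # Epsilon-greedy arm selection over the candidate detectors.
    if random.random() < EPSILON:
        return random.randrange(N_LAYERS)
    return max(range(N_LAYERS), key=lambda a: predict(a, context))

def update(arm, context, reward):
    # Gradient step on the squared error between predicted and observed reward.
    error = reward - predict(arm, context)
    for i in range(CONTEXT_DIM):
        weights[arm][i] += LEARNING_RATE * error * context[i]

def synthetic_reward(arm, context):
    # Toy reward: detection accuracy minus a latency penalty that grows with
    # model complexity (offloading to deeper layers costs more).
    accuracy = 0.6 + 0.15 * arm + 0.1 * context[0]
    latency_penalty = 0.1 * arm
    return accuracy - latency_penalty + random.gauss(0, 0.05)

for step in range(1000):
    context = [random.random() for _ in range(CONTEXT_DIM)]
    arm = choose_layer(context)
    update(arm, context, synthetic_reward(arm, context))

print("learned per-layer weights:", weights)
```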