Detecting and Mitigating System-Level Anomalies of Vision-Based Controllers
- URL: http://arxiv.org/abs/2309.13475v5
- Date: Fri, 27 Sep 2024 21:02:47 GMT
- Title: Detecting and Mitigating System-Level Anomalies of Vision-Based Controllers
- Authors: Aryaman Gupta, Kaustav Chakraborty, Somil Bansal
- Abstract summary: Vision-based controllers can make erroneous predictions when faced with novel or out-of-distribution inputs.
In this work, we introduce a run-time anomaly monitor to detect and mitigate such closed-loop, system-level failures.
We validate the proposed approach on an autonomous aircraft taxiing system that relies on a vision-based controller.
- Score: 7.095058159492494
- Abstract: Autonomous systems, such as self-driving cars and drones, have made significant strides in recent years by leveraging visual inputs and machine learning for decision-making and control. Despite their impressive performance, these vision-based controllers can make erroneous predictions when faced with novel or out-of-distribution inputs. Such errors can cascade into catastrophic system failures and compromise system safety. In this work, we introduce a run-time anomaly monitor to detect and mitigate such closed-loop, system-level failures. Specifically, we leverage a reachability-based framework to stress-test the vision-based controller offline and mine its system-level failures. This data is then used to train a classifier that is leveraged online to flag inputs that might cause system breakdowns. The anomaly detector highlights issues that transcend individual modules and pertain to the safety of the overall system. We also design a fallback controller that robustly handles these detected anomalies to preserve system safety. We validate the proposed approach on an autonomous aircraft taxiing system that relies on a vision-based controller. Our results show the efficacy of the proposed approach in identifying and handling system-level anomalies, outperforming methods such as prediction error-based detection and ensembling, thereby enhancing the overall safety and robustness of autonomous systems.
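The pipeline described in the abstract (offline failure mining via reachability-based stress testing, a learned failure classifier, and an online switch to a fallback controller) could be sketched roughly as follows. All names here (`SystemLevelAnomalyMonitor`, `fit`, `act`), the flattened-image featurization, and the choice of a random-forest classifier are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of a run-time anomaly monitor: a classifier trained offline
# on failure data gates between a nominal and a fallback controller online.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class SystemLevelAnomalyMonitor:
    def __init__(self, nominal_controller, fallback_controller):
        self.nominal = nominal_controller
        self.fallback = fallback_controller
        self.classifier = RandomForestClassifier(n_estimators=100, random_state=0)

    def fit(self, images, failure_labels):
        """Train offline on inputs mined by reachability-based stress testing.

        `failure_labels[i] == 1` marks an input that led to a system-level
        failure in closed-loop simulation; 0 marks a safe input.
        """
        features = images.reshape(len(images), -1)  # placeholder featurization
        self.classifier.fit(features, failure_labels)

    def act(self, image):
        """Online: flag anomalous inputs and hand control to the fallback."""
        feature = image.reshape(1, -1)
        if self.classifier.predict(feature)[0] == 1:  # predicted failure
            return self.fallback(image)
        return self.nominal(image)
```

In a real system the featurization would be a learned embedding of the camera image rather than raw pixels, and the fallback would be a verified safe controller (e.g. one derived from the same reachability analysis) rather than an arbitrary function.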
Related papers
- Enhancing Functional Safety in Automotive AMS Circuits through Unsupervised Machine Learning [9.100418852199082]
We propose a novel framework based on unsupervised machine learning for early anomaly detection in AMS circuits.
The proposed approach involves injecting anomalies at various circuit locations and individual components to create a diverse and comprehensive anomaly dataset.
By monitoring the system behavior under these anomalous conditions, we capture the propagation of anomalies and their effects at different abstraction levels.
arXiv Detail & Related papers (2024-04-02T04:33:03Z)
- Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
It is challenging for existing methods to handle the scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z)
- In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States [84.24300005271185]
We propose a control filter that wraps any reference policy and effectively encourages the system to stay in-distribution with respect to offline-collected safe demonstrations.
Our method is effective for two different visuomotor control tasks in simulation environments, including both top-down and egocentric view settings.
arXiv Detail & Related papers (2023-01-27T22:28:19Z)
- Discovering Closed-Loop Failures of Vision-Based Controllers via Reachability Analysis [7.679478106628509]
Machine learning-driven, image-based controllers allow robotic systems to take intelligent actions based on the visual feedback from their environment.
Existing methods leverage simulation-based testing (or falsification) to find the failures of vision-based controllers.
In this work, we cast the problem of finding closed-loop vision failures as a Hamilton-Jacobi (HJ) reachability problem.
arXiv Detail & Related papers (2022-11-04T20:22:58Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Monitoring of Perception Systems: Deterministic, Probabilistic, and Learning-based Fault Detection and Identification [21.25149064251918]
We formalize the problem of runtime fault detection and identification in perception systems.
We provide a set of deterministic, probabilistic, and learning-based algorithms that use diagnostic graphs to perform fault detection and identification.
We conclude the paper with an experimental evaluation, which recreates several realistic failure modes in the LGSVL open-source autonomous driving simulator.
arXiv Detail & Related papers (2022-05-22T19:08:45Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Monitoring and Diagnosability of Perception Systems [21.25149064251918]
We propose a mathematical model for runtime monitoring and fault detection and identification in perception systems.
We demonstrate our monitoring system, dubbed PerSyS, in realistic simulations using the LGSVL self-driving simulator and the Apollo Auto autonomy software stack.
arXiv Detail & Related papers (2020-11-11T23:03:14Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Monitoring and Diagnosability of Perception Systems [21.25149064251918]
Perception is a critical component of high-integrity applications of robotics and autonomous systems, such as self-driving cars.
Despite the paramount importance of perception systems, there is no formal approach for system-level monitoring.
We propose a mathematical model for runtime monitoring and fault detection of perception systems.
arXiv Detail & Related papers (2020-05-24T18:09:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.