Refining Perception Contracts: Case Studies in Vision-based Safe Auto-landing
- URL: http://arxiv.org/abs/2311.08652v1
- Date: Wed, 15 Nov 2023 02:26:41 GMT
- Title: Refining Perception Contracts: Case Studies in Vision-based Safe Auto-landing
- Authors: Yangge Li, Benjamin C Yang, Yixuan Jia, Daniel Zhuang, Sayan Mitra
- Abstract summary: Perception contracts provide a method for evaluating the safety of control systems that use machine learning for perception.
This paper presents the analysis of two flight control systems, of 6 and 12 dimensions respectively, that use multi-stage, heterogeneous, ML-enabled perception.
- Score: 2.3415799537084725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Perception contracts provide a method for evaluating the safety of control
systems that use machine learning for perception. A perception contract is a
specification for testing the ML components, and it gives a method for proving
end-to-end system-level safety requirements. The feasibility of contract-based
testing and assurance was established earlier in the context of straight lane
keeping: a 3-dimensional system with relatively simple dynamics. This paper
presents the analysis of two flight control systems, with 6- and 12-dimensional
state spaces respectively, that
use multi-stage, heterogeneous, ML-enabled perception. The paper advances
methodology by introducing an algorithm for data- and requirement-guided
refinement of perception contracts (DaRePC). The resulting analysis
provides testable contracts which establish the state and environment
conditions under which an aircraft can safely touch down on the runway and a
drone can safely pass through a sequence of gates. It can also discover
conditions (e.g., a low-horizon sun) that may violate the safety of the
vision-based control system.
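The abstract names the DaRePC algorithm but does not spell it out here. As a rough illustration only, the Python sketch below shows what a data- and requirement-guided contract refinement loop could look like: regions of the state/environment space are tested against an error budget, kept if the sampled perception error fits the budget, and split and retried otherwise. All names (`refine_contract`, `region.split`, `error_budget`) and the partitioning strategy are assumptions, not the paper's actual method.

```python
# Illustrative sketch of a data- and requirement-guided contract refinement
# loop, loosely in the spirit of DaRePC. The region interface and the
# splitting strategy are invented for this sketch.
import numpy as np

def perception_error(state, env, perceive, ground_truth):
    """Observed error of the ML perception pipeline on one sampled scene."""
    return np.linalg.norm(perceive(state, env) - ground_truth(state))

def refine_contract(regions, perceive, ground_truth, sample_scene,
                    error_budget, n_samples=100, max_iters=10):
    """Split state/environment regions until the empirical perception error
    in each kept region fits the bound demanded by the safety requirement."""
    certified, frontier = [], list(regions)
    for _ in range(max_iters):
        next_frontier = []
        for region in frontier:
            errs = [perception_error(*sample_scene(region), perceive, ground_truth)
                    for _ in range(n_samples)]
            if max(errs) <= error_budget(region):
                certified.append(region)              # contract holds here
            elif region.splittable():
                next_frontier.extend(region.split())  # refine and retry
            # else: region is dropped -> a condition the contract excludes
        if not next_frontier:
            break
        frontier = next_frontier
    return certified
```

Dropped regions play the role of the discovered unsafe conditions (such as a low-horizon sun) mentioned above.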
Related papers
- Verification of Visual Controllers via Compositional Geometric Transformations [49.81690518952909]
We introduce a novel verification framework for perception-based controllers that can generate outer-approximations of reachable sets.
We provide theoretical guarantees on the soundness of our method and demonstrate its effectiveness across benchmark control environments.
arXiv Detail & Related papers (2025-07-06T20:22:58Z)
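The entry above computes outer-approximations of reachable sets. As a generic point of reference (not the paper's compositional geometric method), here is an interval-arithmetic sketch that propagates an axis-aligned box through affine dynamics with a bounded disturbance:

```python
# Generic interval-arithmetic outer-approximation of a reachable set:
# propagate a box through x+ = A x + b + w with |w| <= w_bound elementwise.
# This is textbook interval analysis, not the paper's method.
import numpy as np

def propagate_box(lo, hi, A, b, w_bound):
    """One-step outer box for {A x + b + w : x in [lo, hi], |w| <= w_bound}."""
    center = A @ (lo + hi) / 2.0 + b
    radius = np.abs(A) @ (hi - lo) / 2.0 + w_bound   # |A| handles sign flips
    return center - radius, center + radius

# Example: lightly damped 2D system, unit-box initial set.
A = np.array([[0.9, 0.1], [-0.1, 0.9]])
lo, hi = -np.ones(2), np.ones(2)
for _ in range(5):
    lo, hi = propagate_box(lo, hi, A, np.zeros(2), w_bound=0.05)
print(lo, hi)   # sound (if conservative) bounds on all reachable states
```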
- SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal Foundation Models [63.71984266104757]
Multimodal Large Language Models (MLLMs) can process both visual and textual data.
We propose SafeAuto, a novel framework that enhances MLLM-based autonomous driving systems by incorporating both unstructured and structured knowledge.
arXiv Detail & Related papers (2025-02-28T21:53:47Z)
- Learning Vision-Based Neural Network Controllers with Semi-Probabilistic Safety Guarantees [24.650302053973142]
We introduce a novel semi-probabilistic verification framework that integrates reachability analysis with conditional generative adversarial networks.
Next, we develop a gradient-based training approach that employs a novel safety loss function, safety-aware data-sampling strategy, and curriculum learning.
Empirical evaluations in X-Plane 11 airplane landing simulation, CARLA-simulated autonomous lane following, and F1Tenth lane following in a visually-rich miniature environment demonstrate the effectiveness of our method in achieving formal safety guarantees.
arXiv Detail & Related papers (2025-02-28T21:16:42Z)
- What Makes and Breaks Safety Fine-tuning? A Mechanistic Study [64.9691741899956]
Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment.
We design a synthetic data generation framework that captures salient aspects of an unsafe input.
Using this, we investigate three well-known safety fine-tuning methods.
arXiv Detail & Related papers (2024-07-14T16:12:57Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the importance of safety in driving systems, no solution to adapting MOT to domain shift under test-time conditions has previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Detecting and Mitigating System-Level Anomalies of Vision-Based Controllers [7.095058159492494]
Vision-based controllers can make erroneous predictions when faced with novel or out-of-distribution inputs.
In this work, we introduce a run-time anomaly monitor to detect and mitigate such closed-loop, system-level failures.
We validate the proposed approach on an autonomous aircraft taxiing system that uses a vision-based controller for taxiing.
arXiv Detail & Related papers (2023-09-23T20:33:38Z)
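The anomaly-monitoring entry above does not describe its detector in this summary. One common pattern for such run-time monitors, shown purely as a hedged sketch, is to threshold an autoencoder's reconstruction error and fall back to a conservative controller when the input looks out of distribution; the class and threshold below are illustrative assumptions, not the paper's monitor.

```python
# Minimal sketch of a run-time anomaly monitor wrapped around a vision-based
# controller. The reconstruction-error test and the fallback policy are
# generic illustrations, not the specific monitor of the paper above.
import numpy as np

class AnomalyGuardedController:
    def __init__(self, perceive, nominal, fallback, autoencoder, threshold):
        self.perceive, self.nominal, self.fallback = perceive, nominal, fallback
        self.autoencoder, self.threshold = autoencoder, threshold

    def act(self, image, state):
        # High reconstruction error is a cheap proxy for "this input is
        # unlike anything the perception stack was trained on".
        recon_err = np.mean((self.autoencoder(image) - image) ** 2)
        if recon_err > self.threshold:
            return self.fallback(state)   # e.g. brake / hold position
        return self.nominal(self.perceive(image), state)
```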
- In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States [84.24300005271185]
We propose a control filter that wraps any reference policy and effectively encourages the system to stay in-distribution with respect to offline-collected safe demonstrations.
Our method is effective for two different visuomotor control tasks in simulation environments, including both top-down and egocentric view settings.
arXiv Detail & Related papers (2023-01-27T22:28:19Z)
- Assuring Safety of Vision-Based Swarm Formation Control [0.0]
We propose a technique for safety assurance of vision-based formation control.
We show how the convergence analysis of a standard quantized consensus algorithm can be adapted for the constructed quantizers.
arXiv Detail & Related papers (2022-10-03T14:47:32Z)
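The swarm-formation entry adapts the convergence analysis of a standard quantized consensus algorithm. For orientation, a textbook quantized consensus update looks roughly like the sketch below, where the quantizer stands in for coarse vision-based relative-position estimates; the topology, step size, and resolution are illustrative choices.

```python
# Textbook quantized consensus sketch: each agent averages *quantized*
# relative positions of its neighbors, as a vision pipeline might supply.
import numpy as np

def quantize(x, resolution=0.1):
    return resolution * np.round(x / resolution)

def consensus_step(positions, neighbors, step=0.2):
    new = positions.copy()
    for i, pos in enumerate(positions):
        # Move toward the quantized relative displacement of each neighbor.
        new[i] = pos + step * sum(quantize(positions[j] - pos) for j in neighbors[i])
    return new

positions = np.array([0.0, 1.3, 3.7, 2.1])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a line graph
for _ in range(50):
    positions = consensus_step(positions, neighbors)
print(positions)   # agents settle within a quantization-limited band
```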
- USC: Uncompromising Spatial Constraints for Safety-Oriented 3D Object Detectors in Autonomous Driving [7.355977594790584]
We consider the safety-oriented performance of 3D object detectors in autonomous driving contexts.
We present uncompromising spatial constraints (USC), which characterize a simple yet important localization requirement.
We incorporate the quantitative measures into common loss functions to enable safety-oriented fine-tuning for existing models.
arXiv Detail & Related papers (2022-09-21T14:03:08Z)
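The USC entry folds quantitative spatial measures into common loss functions. A minimal sketch of that general idea follows, assuming a simple coverage measure and hinge penalty that are not USC's exact formulation:

```python
# Hedged sketch of safety-oriented fine-tuning: add a hinge penalty when a
# predicted box fails to cover enough of the ground-truth box. The coverage
# measure, threshold, and weight are illustrative assumptions.
import numpy as np

def coverage(pred_box, gt_box):
    """Fraction of the ground-truth box area covered by the prediction
    (boxes given as [x1, y1, x2, y2])."""
    ix1, iy1 = max(pred_box[0], gt_box[0]), max(pred_box[1], gt_box[1])
    ix2, iy2 = min(pred_box[2], gt_box[2]), min(pred_box[3], gt_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    gt_area = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    return inter / gt_area

def safety_penalty(pred_boxes, gt_boxes, required=0.9, weight=1.0):
    """Hinge term to add to a detector's usual training loss."""
    gaps = [max(0.0, required - coverage(p, g))
            for p, g in zip(pred_boxes, gt_boxes)]
    return weight * float(np.mean(gaps))
```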
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
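The CBF entry above builds on the standard control-barrier-function safety filter. For reference, the textbook single-constraint filter for control-affine dynamics has the closed-form projection sketched below; the paper's model-uncertainty-aware reformulation is not reproduced here.

```python
# Standard CBF safety filter for x_dot = f(x) + g(x) u: minimally modify
# u_ref so that Lf_h(x) + Lg_h(x) @ u + alpha * h(x) >= 0. A single affine
# constraint admits the closed-form projection used below.
import numpy as np

def cbf_filter(u_ref, Lf_h, Lg_h, h, alpha=1.0):
    """Project u_ref onto the half-space {u : Lf_h + Lg_h @ u + alpha*h >= 0}."""
    slack = Lf_h + Lg_h @ u_ref + alpha * h
    if slack >= 0.0:
        return u_ref                          # reference input is already safe
    # Minimum-norm correction onto the constraint boundary.
    return u_ref - slack * Lg_h / (Lg_h @ Lg_h)

# Example: x1_dot = x2, x2_dot = u, with barrier h = 1 - x1 - x2.
x = np.array([0.5, 0.2])
h = 1.0 - x[0] - x[1]
Lf_h = -x[1]                                  # drift contribution to h_dot
Lg_h = np.array([-1.0])                       # input contribution to h_dot
print(cbf_filter(np.array([1.0]), Lf_h, Lg_h, h))   # ~[0.1]: throttled to stay safe
```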
- Safe Perception -- A Hierarchical Monitor Approach [0.0]
We propose a novel hierarchical monitoring approach for AI-based perception systems.
It reliably detects missed detections while keeping the false-alarm rate very low.
arXiv Detail & Related papers (2022-08-01T13:09:24Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Learning Hybrid Control Barrier Functions from Data [66.37785052099423]
Motivated by the lack of systematic tools to obtain safe control laws for hybrid systems, we propose an optimization-based framework for learning certifiably safe control laws from data.
In particular, we assume a setting in which the system dynamics are known and in which data exhibiting safe system behavior is available.
arXiv Detail & Related papers (2020-11-08T23:55:02Z)
- Runtime Safety Assurance Using Reinforcement Learning [37.61747231296097]
This paper aims to design a meta-controller capable of identifying unsafe situations with high accuracy.
We frame the design of runtime safety assurance (RTSA) as a Markov decision process (MDP) and use reinforcement learning (RL) to solve it.
arXiv Detail & Related papers (2020-10-20T20:54:46Z)
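The final entry frames runtime safety assurance as an MDP solved with RL. As a toy illustration only, the sketch below uses tabular Q-learning to learn when a meta-controller should switch from the nominal controller to a safe backup; the states, rewards, and transitions are invented and are not the paper's setup.

```python
# Toy RTSA meta-controller learned with tabular Q-learning. States are coarse
# risk levels, actions are {keep nominal, switch to backup}; the tiny
# transition/reward model is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, KEEP, SWITCH = 3, 0, 1           # risk: 0 low, 1 medium, 2 high
Q = np.zeros((N_STATES, 2))

def step(state, action):
    if action == SWITCH:                   # backup controller: safe but costly
        return 0, -1.0
    if state == 2 and rng.random() < 0.5:  # nominal control at high risk may fail
        return 0, -100.0
    next_state = int(min(2, max(0, state + rng.integers(-1, 2))))
    return next_state, 0.0                 # nominal control is free otherwise

state, eps, lr, gamma = 0, 0.1, 0.1, 0.95
for _ in range(20000):
    action = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    Q[state, action] += lr * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.argmax(axis=1))   # expect SWITCH only in the high-risk state
```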
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.