Assuring Safety of Vision-Based Swarm Formation Control
- URL: http://arxiv.org/abs/2210.00982v2
- Date: Wed, 27 Sep 2023 22:06:37 GMT
- Title: Assuring Safety of Vision-Based Swarm Formation Control
- Authors: Chiao Hsieh (1), Yubin Koh (1), Yangge Li (1), Sayan Mitra (1) ((1)
Coordinated Science Laboratory at the University of Illinois at
Urbana-Champaign)
- Abstract summary: We propose a technique for safety assurance of vision-based formation control.
We show how the convergence analysis of a standard quantized consensus algorithm can be adapted for the constructed quantizers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-based formation control systems are attractive because they can use
inexpensive sensors and can work in GPS-denied environments. The safety
assurance for such systems is challenging: the vision component's accuracy
depends on the environment in complicated ways, these errors propagate through
the system and lead to incorrect control actions, and there exists no formal
specification for end-to-end reasoning. We address this problem and propose a
technique for safety assurance of vision-based formation control: First, we
propose a scheme for constructing quantizers that are consistent with
vision-based perception. Next, we show how the convergence analysis of a
standard quantized consensus algorithm can be adapted for the constructed
quantizers. We use the recently defined notion of perception contracts to
create error bounds on the actual vision-based perception pipeline using
sampled data from different ground truth states, environments, and weather
conditions. Specifically, we use a quantizer in logarithmic polar coordinates,
and we show that this quantizer is suitable for the perception contracts
constructed for vision-based position estimation, whose error grows with the
absolute distance between agents. We build our formation control algorithm with
this nonuniform quantizer, and we prove its convergence using an existing
result for quantized consensus.
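To illustrate the kind of nonuniform quantizer the abstract describes, here is a minimal sketch (not the authors' implementation; `r_min`, `delta_r`, and `delta_theta` are illustrative parameters, not values from the paper) of quantizing a relative position in logarithmic polar coordinates, so that cell size, and hence quantization error, grows with inter-agent distance:

```python
import math

# Sketch of a nonuniform quantizer in logarithmic polar coordinates:
# the relative position (dx, dy) between two agents is snapped to the
# center of a log-radial / uniform-angular cell. Radial cells grow
# geometrically with distance, matching perception error that worsens
# with absolute inter-agent distance.

def log_polar_quantize(dx, dy, r_min=0.1, delta_r=0.2, delta_theta=math.pi / 16):
    """Map a relative position to the center of its log-polar cell."""
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    # Quantize log(r / r_min) uniformly -> geometric radial spacing.
    k = round(math.log(max(r, r_min) / r_min) / delta_r)
    m = round(theta / delta_theta)
    r_q = r_min * math.exp(k * delta_r)
    theta_q = m * delta_theta
    return r_q * math.cos(theta_q), r_q * math.sin(theta_q)
```

Because the rounding acts on log(r / r_min), the relative radial error is bounded by exp(delta_r / 2) - 1 at every range, which is what makes such a quantizer compatible with a perception error bound that grows with distance.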
Related papers
- Verification of Visual Controllers via Compositional Geometric Transformations [49.81690518952909]
We introduce a novel verification framework for perception-based controllers that can generate outer-approximations of reachable sets.
We provide theoretical guarantees on the soundness of our method and demonstrate its effectiveness across benchmark control environments.
arXiv Detail & Related papers (2025-07-06T20:22:58Z)
- Automatically Adaptive Conformal Risk Control [49.95190019041905]
We propose a methodology for achieving approximate conditional control of statistical risks by adapting to the difficulty of test samples.
Our framework goes beyond traditional conditional risk control based on user-provided conditioning events to the algorithmic, data-driven determination of appropriate function classes for conditioning.
arXiv Detail & Related papers (2024-06-25T08:29:32Z)
- Robust Collaborative Perception without External Localization and Clock Devices [52.32342059286222]
A consistent spatial-temporal coordination across multiple agents is fundamental for collaborative perception.
Traditional methods depend on external devices to provide localization and clock signals.
We propose a novel approach: aligning by recognizing the inherent geometric patterns within the perceptual data of various agents.
arXiv Detail & Related papers (2024-05-05T15:20:36Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Refining Perception Contracts: Case Studies in Vision-based Safe Auto-landing [2.3415799537084725]
Perception contracts provide a method for evaluating safety of control systems that use machine learning for perception.
This paper presents the analysis of two flight control systems, of 6 and 12 dimensions, that use multi-stage, heterogeneous, ML-enabled perception.
arXiv Detail & Related papers (2023-11-15T02:26:41Z)
- Conformal Policy Learning for Sensorimotor Control Under Distribution Shifts [61.929388479847525]
This paper focuses on the problem of detecting and reacting to changes in the distribution of a sensorimotor controller's observables.
The key idea is the design of switching policies that can take conformal quantiles as input.
We show how to design such policies by using conformal quantiles to switch between base policies with different characteristics.
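The switching idea can be sketched generically as follows; this is an illustrative example of conformal-quantile-based switching with assumed names and scores, not that paper's algorithm:

```python
import math

# Generic sketch of a conformal-quantile switching policy (illustrative).
# Calibration nonconformity scores yield a threshold at miscoverage
# level alpha; at runtime the controller switches to a cautious base
# policy whenever the current score exceeds that threshold.

def conformal_quantile(cal_scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return sorted(cal_scores)[k]

def switching_policy(score, threshold, nominal, cautious):
    """Pick the cautious base policy when the score signals distribution shift."""
    return cautious if score > threshold else nominal
```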
arXiv Detail & Related papers (2023-11-02T17:59:30Z)
- Safe Perception-Based Control under Stochastic Sensor Uncertainty using Conformal Prediction [27.515056747751053]
We propose a perception-based control framework that quantifies estimation uncertainty of perception maps.
We also integrate these uncertainty representations into the control design.
We demonstrate the effectiveness of our proposed perception-based controller for a LiDAR-enabled F1/10th car.
arXiv Detail & Related papers (2023-04-01T01:45:53Z)
- Safe Output Feedback Motion Planning from Images via Learned Perception Modules and Contraction Theory [6.950510860295866]
We present a motion planning framework for a class of uncertain control-affine nonlinear systems that guarantees runtime safety and goal reachability.
We train a perception system that seeks to invert a subset of the state from an observation, and estimate an upper bound on the perception error.
Next, we use contraction theory to design a stabilizing state feedback controller and a convergent dynamic state observer.
We derive a bound on the trajectory tracking error when this controller is subjected to errors in the dynamics and incorrect state estimates.
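The error-bounding step in such pipelines can be sketched generically (hypothetical names and margin; not that paper's procedure): the learned perception module is evaluated on sampled ground-truth/estimate pairs, and the worst observed error, inflated by a margin, serves as the upper bound handed to the downstream controller.

```python
# Hypothetical sketch of bounding perception error from samples:
# evaluate the learned perception module on labeled observations and
# inflate the worst observed error by a safety margin.

def empirical_error_bound(samples, margin=1.1):
    """Upper bound on perception error from (truth, estimate) pairs."""
    worst = max(abs(est - truth) for truth, est in samples)
    return margin * worst
```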
arXiv Detail & Related papers (2022-06-14T02:03:27Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Safety Verification of Neural Network Controlled Systems [0.0]
We propose a system-level approach for verifying the safety of neural network controlled systems.
We assume a generic model for the controller that can capture both simple and complex behaviours.
We perform a reachability analysis that soundly approximates the reachable states of the overall system.
arXiv Detail & Related papers (2020-11-10T15:26:38Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Drawing together control landscape and tomography principles [0.2741266294612775]
The ability to control quantum systems using shaped fields as well as to infer the states of such controlled systems from measurement data are key tasks in the design and operation of quantum devices.
We relate the ability to control and reconstruct the full state of the system to the absence of singular controls, and show that for sufficiently long evolution times singular controls rarely occur.
We describe a learning algorithm for finding optimal controls that makes use of measurement data obtained from partially accessing the system.
arXiv Detail & Related papers (2020-04-06T15:17:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.