Verification of Image-based Neural Network Controllers Using Generative Models
- URL: http://arxiv.org/abs/2105.07091v1
- Date: Fri, 14 May 2021 23:18:05 GMT
- Title: Verification of Image-based Neural Network Controllers Using Generative Models
- Authors: Sydney M. Katz, Anthony L. Corso, Christopher A. Strong, Mykel J. Kochenderfer
- Abstract summary: We propose a method to train a generative adversarial network (GAN) to map states to plausible input images.
By concatenating the generator network with the control network, we obtain a network with a low-dimensional input space.
We apply our approach to provide safety guarantees for an image-based neural network controller for an autonomous aircraft taxi problem.
- Score: 30.34898838361206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are often used to process information from image-based
sensors to produce control actions. While they are effective for this task, the
complex nature of neural networks makes their output difficult to verify and
predict, limiting their use in safety-critical systems. For this reason, recent
work has focused on combining techniques in formal methods and reachability
analysis to obtain guarantees on the closed-loop performance of neural network
controllers. However, these techniques do not scale to the high-dimensional and
complicated input space of image-based neural network controllers. In this
work, we propose a method to address these challenges by training a generative
adversarial network (GAN) to map states to plausible input images. By
concatenating the generator network with the control network, we obtain a
network with a low-dimensional input space. This insight allows us to use
existing closed-loop verification tools to obtain formal guarantees on the
performance of image-based controllers. We apply our approach to provide safety
guarantees for an image-based neural network controller for an autonomous
aircraft taxi problem. We guarantee that the controller will keep the aircraft
on the runway and guide the aircraft towards the center of the runway. The
guarantees we provide are with respect to the set of input images modeled by
our generator network, so we provide a recall metric to evaluate how well the
generator captures the space of plausible images.
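The concatenation described in the abstract can be sketched numerically. The snippet below uses tiny random-weight stand-ins for the trained networks (all layer sizes and weights are hypothetical, not the paper's models): a generator maps a low-dimensional state to image pixels, a controller maps pixels to an action, and their composition exposes only the low-dimensional state as input.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP layers (hypothetical stand-ins for trained networks)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for W, b in layers[:-1]:
        x = np.maximum(x @ W + b, 0.0)   # ReLU hidden layers
    W, b = layers[-1]
    return x @ W + b                     # linear output layer

# Generator: 2-D state (e.g. crosstrack error, heading) -> flattened image
generator = mlp([2, 64, 16 * 8])
# Controller: flattened image -> scalar control action
controller = mlp([16 * 8, 32, 1])

def composed(state):
    """Concatenated network: a verifier only sees the 2-D state input."""
    return forward(controller, forward(generator, state))

action = composed(np.array([1.5, -0.1]))
```

A closed-loop verification tool would then analyze `composed` over a box of states rather than the full pixel space.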
Related papers
- Scalable and Interpretable Verification of Image-based Neural Network Controllers for Autonomous Vehicles [3.2540854278211864]
Image-based neural network controllers in autonomous vehicles often struggle with high-dimensional inputs, computational inefficiency, and a lack of explainability.
We propose SEVIN, a framework that leverages a Variational Autoencoder (VAE) to encode high-dimensional images into a lower-dimensional, explainable latent space.
We show that SEVIN achieves efficient and scalable verification while providing explainable insights into controller behavior.
arXiv Detail & Related papers (2025-01-23T16:46:45Z)
- Network Inversion of Convolutional Neural Nets [3.004632712148892]
Neural networks have emerged as powerful tools across various applications, yet their decision-making process often remains opaque.
Network inversion techniques offer a solution by allowing us to peek inside these black boxes.
This paper presents a simple yet effective approach to network inversion using a meticulously conditioned generator.
arXiv Detail & Related papers (2024-07-25T12:53:21Z)
- Scalable Surrogate Verification of Image-based Neural Network Control Systems using Composition and Unrolling [9.633494094538017]
We build on work that considers a surrogate verification approach, training a conditional generative adversarial network (cGAN) as an image generator in place of the real world.
We overcome one-step error by composing the system's dynamics along with the cGAN and neural network controller.
We reduce multi-step error by repeating the single-step composition, essentially unrolling multiple steps of the control loop into a large neural network.
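The unrolling idea above can be illustrated with toy stand-ins (the dynamics, generator, and controller below are hypothetical placeholders, not the paper's networks): repeating the single-step composition yields one large function whose input is just the initial state.

```python
import numpy as np

def dynamics(state, action):
    """Toy taxi-like dynamics: state = [crosstrack error, heading]."""
    ct, hd = state
    return np.array([ct + 0.1 * hd, hd + 0.1 * action])

def cgan_generator(state):
    """Placeholder cGAN: maps a state to a flattened synthetic image."""
    return np.tanh(np.outer(state, np.ones(8)).ravel())

def controller(image):
    """Placeholder controller: maps an image to a scalar steering action."""
    return float(np.tanh(0.1 * image.sum()))

def unrolled(state, steps=5):
    """Unroll `steps` iterations of the control loop into one function;
    a verifier would analyze the corresponding composed network in one shot."""
    for _ in range(steps):
        action = controller(cgan_generator(state))
        state = dynamics(state, action)
    return state

final = unrolled(np.array([2.0, 0.0]), steps=10)
```

Analyzing the unrolled composition directly avoids accumulating per-step overapproximation error across iterations.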
arXiv Detail & Related papers (2024-05-28T19:56:53Z)
- ControlNet-XS: Rethinking the Control of Text-to-Image Diffusion Models as Feedback-Control Systems [19.02295657801464]
In this work, we take an existing controlling network (ControlNet) and change the communication between the controlling network and the generation process to be of high-frequency and with large-bandwidth.
We outperform state-of-the-art approaches for pixel-level guidance, such as depth, canny-edges, and semantic segmentation, and are on a par for loose keypoint-guidance of human poses.
All code and pre-trained models will be made publicly available.
arXiv Detail & Related papers (2023-12-11T17:58:06Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
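The core interval-bound-propagation step underlying approaches like QA-IBP can be sketched as follows; the weight grid and layer sizes are hypothetical, and this is the generic IBP linear-layer bound, not the paper's full training method.

```python
import numpy as np

def fake_quantize(W, scale=0.05):
    """Snap weights to a uniform grid (toy stand-in for a QNN's quantizer)."""
    return np.round(W / scale) * scale

def interval_linear(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = x @ W + b:
    positive weights carry lo -> lower bound, negative weights carry hi -> lower bound."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = lo @ W_pos + hi @ W_neg + b
    new_hi = hi @ W_pos + lo @ W_neg + b
    return new_lo, new_hi

rng = np.random.default_rng(0)
W = fake_quantize(rng.standard_normal((4, 3)))
b = np.zeros(3)
x = np.ones(4)
eps = 0.1   # input perturbation radius
lo, hi = interval_linear(x - eps, x + eps, W, b)
```

Training minimizes a loss on these propagated bounds so that robustness of the quantized network is certified, not just empirical.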
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Paint and Distill: Boosting 3D Object Detection with Semantic Passing Network [70.53093934205057]
3D object detection task from lidar or camera sensors is essential for autonomous driving.
We propose a novel semantic passing framework, named SPNet, to boost the performance of existing lidar-based 3D detection models.
arXiv Detail & Related papers (2022-07-12T12:35:34Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
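The spirit of a participation-frequency-weighted aggregation rule can be sketched as below; the exact FedFreq weighting in the paper may differ, and the model vectors here are illustrative placeholders.

```python
import numpy as np

def freq_weighted_aggregate(client_models, participation_counts):
    """Average client model parameters, weighting each client by how often
    it has participated in training (a sketch of the idea, not the exact rule)."""
    freqs = np.asarray(participation_counts, dtype=float)
    alphas = freqs / freqs.sum()   # normalized participation frequencies
    return sum(a * m for a, m in zip(alphas, client_models))

# Three clients; the third has participated twice as often as the others.
models = [np.full(3, 1.0), np.full(3, 2.0), np.full(3, 4.0)]
global_model = freq_weighted_aggregate(models, participation_counts=[1, 1, 2])
```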
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations, while the low-frequency part is assigned cheap operations to relieve the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
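The frequency split itself can be illustrated in one dimension; the DCT below is a naive numpy-only DCT-II, and the cutoff index is a hypothetical choice, not the paper's routing policy.

```python
import numpy as np

def dct_ii(x):
    """Naive (unnormalized) DCT-II via an explicit cosine basis matrix."""
    n = len(x)
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    return np.cos(np.pi * (2 * t + 1) * k / (2 * n)) @ x

# A slowly varying signal: most DCT energy lands in the low-frequency bins.
signal = np.cos(np.linspace(0.0, 2.0 * np.pi, 32))
coeffs = dct_ii(signal)

cutoff = 8                         # hypothetical frequency split point
low_freq = coeffs[:cutoff]         # would be routed to cheap operations
high_freq = coeffs[cutoff:]        # would be routed to expensive operations
```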
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Generating Probabilistic Safety Guarantees for Neural Network Controllers [30.34898838361206]
We use a dynamics model to determine the output properties that must hold for a neural network controller to operate safely.
We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy.
We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks.
arXiv Detail & Related papers (2021-03-01T18:48:21Z)
- Image Generation for Efficient Neural Network Training in Autonomous Drone Racing [15.114944019221456]
In autonomous drone racing, the course must be navigated by flying fully autonomously in an unknown environment.
Traditional object detection algorithms based on colour or geometry tend to fail.
In this work, a semi-synthetic dataset generation method is proposed, using a combination of real background images and randomised 3D renders of the gates.
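The alpha-compositing step of such a semi-synthetic pipeline can be sketched as follows; the image sizes and mask are placeholders, and the paper's actual rendering and randomization pipeline is more involved.

```python
import numpy as np

def composite(background, render, alpha):
    """Blend a rendered gate onto a real background image using the render's
    alpha mask (alpha in [0, 1], shape HxW; images are HxWx3)."""
    a = alpha[..., None]
    return a * render + (1.0 - a) * background

h, w = 4, 4
background = np.zeros((h, w, 3))   # stand-in for a real photo
render = np.ones((h, w, 3))        # stand-in for a rendered gate
alpha = np.zeros((h, w))
alpha[1:3, 1:3] = 1.0              # the gate occupies the center pixels
image = composite(background, render, alpha)
```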
arXiv Detail & Related papers (2020-08-06T12:07:36Z)
- Towards a Neural Graphics Pipeline for Controllable Image Generation [96.11791992084551]
We present Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models.
NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation.
We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes.
arXiv Detail & Related papers (2020-06-18T14:22:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.