Reachable Polyhedral Marching (RPM): A Safety Verification Algorithm for
Robotic Systems with Deep Neural Network Components
- URL: http://arxiv.org/abs/2011.11609v2
- Date: Thu, 1 Apr 2021 04:17:21 GMT
- Title: Reachable Polyhedral Marching (RPM): A Safety Verification Algorithm for
Robotic Systems with Deep Neural Network Components
- Authors: Joseph A. Vincent, Mac Schwager
- Abstract summary: We present a method for computing exact reachable sets for deep neural networks with rectified linear unit (ReLU) activation.
Our algorithm can compute both forward and backward reachable sets for a ReLU network iterated over multiple time steps.
We demonstrate our algorithm on safety verification of the ACAS Xu aircraft advisory system.
- Score: 20.595032143044506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method for computing exact reachable sets for deep neural
networks with rectified linear unit (ReLU) activation. Our method is
well-suited for use in rigorous safety analysis of robotic perception and
control systems with deep neural network components. Our algorithm can compute
both forward and backward reachable sets for a ReLU network iterated over
multiple time steps, as would be found in a perception-action loop in a robotic
system. Our algorithm is unique in that it builds the reachable sets by
incrementally enumerating polyhedral cells in the input space, rather than
iterating layer-by-layer through the network as in other methods. If an unsafe
cell is found, our algorithm can return this result without completing the full
reachability computation, thus giving an anytime property that accelerates
safety verification. In addition, our method requires less memory during
execution compared to existing methods where memory can be a limiting factor.
We demonstrate our algorithm on safety verification of the ACAS Xu aircraft
advisory system. We find unsafe actions many times faster than the fastest
existing method and certify no unsafe actions exist in about twice the time of
the existing method. We also compute forward and backward reachable sets for a
learned model of pendulum dynamics over a 50 time step horizon in 87s on a
laptop computer. Algorithm source code:
https://github.com/StanfordMSL/Neural-Network-Reach.
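The core idea, that fixing a ReLU activation pattern makes the network affine on a polyhedral cell of the input space, can be sketched in a few lines. The NumPy toy below is illustrative only: it handles a single hidden layer and one cell, while the authors' implementation linked above handles deep networks and the marching between neighboring cells.

```python
import numpy as np

def relu_cell(W1, b1, W2, b2, x):
    """For input x, return the local affine map (C, d) with f(z) = C z + d
    on the polyhedral cell containing x, plus the cell's half-space
    constraints A z <= c derived from the ReLU activation pattern."""
    pre = W1 @ x + b1                 # hidden-layer pre-activations
    s = (pre > 0).astype(float)       # activation pattern (1 = active ReLU)
    D = np.diag(s)                    # masks out inactive units
    # On this cell the network is affine: f(z) = W2 D (W1 z + b1) + b2.
    C = W2 @ D @ W1
    d = W2 @ (D @ b1) + b2
    # Cell constraints: active units keep W1_i z + b1_i >= 0, inactive <= 0,
    # written uniformly as (1 - 2 s_i) * (W1_i z + b1_i) <= 0.
    sign = 1.0 - 2.0 * s
    A = sign[:, None] * W1
    c = -sign * b1
    return C, d, A, c

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
x = np.array([0.3, -0.5])
C, d, A, c = relu_cell(W1, b1, W2, b2, x)
# The affine map agrees with the network at x, and x lies in its own cell.
assert np.allclose(C @ x + d, W2 @ np.maximum(W1 @ x + b1, 0) + b2)
assert np.all(A @ x <= c + 1e-9)
```

RPM's marching step then crosses one face of `A z <= c` at a time, flipping the corresponding activation bit to enumerate the neighboring cell.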
Related papers
- Make Shuffling Great Again: A Side-Channel Resistant Fisher-Yates Algorithm for Protecting Neural Networks [4.734824660843964]
We propose an SCA-secure version of the Fisher-Yates algorithm.
We experimentally verify that the countermeasure is effective against SCA.
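For reference, the textbook Fisher-Yates shuffle that the paper hardens looks like the sketch below; the SCA countermeasure proposed in the paper is not reproduced here.

```python
import random

def fisher_yates_shuffle(items, rng=random):
    """Unbiased textbook Fisher-Yates shuffle. The SCA-resistant variant
    from the paper adds side-channel countermeasures not shown here."""
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)      # uniform index in [0, i]
        a[i], a[j] = a[j], a[i]       # swap current slot with chosen slot
    return a
```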
arXiv Detail & Related papers (2025-01-01T10:46:22Z)
- Attribution Patching Outperforms Automated Circuit Discovery [3.8695554579762814]
We show that a simple method based on attribution patching outperforms all existing methods.
We apply a linear approximation to activation patching to estimate the importance of each edge in the computational subgraph.
arXiv Detail & Related papers (2023-10-16T12:34:43Z)
- A computationally lightweight safe learning algorithm [1.9295598343317182]
We propose a safe learning algorithm that provides probabilistic safety guarantees by leveraging the Nadaraya-Watson estimator.
We provide theoretical guarantees for the estimates, embed them into a safe learning algorithm, and show numerical experiments on a simulated seven-degrees-of-freedom robot manipulator.
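The Nadaraya-Watson estimator itself is compact: a kernel-weighted average of observed outputs. A minimal sketch with a Gaussian kernel follows; the bandwidth `h` and kernel choice are illustrative, not taken from the paper.

```python
import numpy as np

def nadaraya_watson(x_query, X, Y, h):
    """Nadaraya-Watson kernel regression at x_query: a weighted average
    of observed outputs Y, with Gaussian weights that decay with distance
    from x_query (bandwidth h)."""
    d2 = np.sum((X - x_query) ** 2, axis=1)   # squared distances to data
    w = np.exp(-d2 / (2.0 * h ** 2))          # Gaussian kernel weights
    return np.sum(w * Y) / np.sum(w)
```

Because the estimate is a convex combination of the observed `Y`, it always stays inside their range, which is one reason it lends itself to the kind of guarantees the paper derives.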
arXiv Detail & Related papers (2023-09-07T12:21:22Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than expected.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Reachable Polyhedral Marching (RPM): An Exact Analysis Tool for Deep-Learned Control Systems [20.595032143044506]
We present a tool for computing exact forward and backward reachable sets of deep neural networks with rectified linear unit (ReLU) activation.
We then develop algorithms using this tool to compute invariant sets and regions of attraction (ROAs) for control systems with neural networks in the feedback loop.
arXiv Detail & Related papers (2022-10-15T17:15:53Z)
- Revisiting Recursive Least Squares for Training Deep Neural Networks [10.44340837533087]
Recursive least squares (RLS) algorithms were once widely used for training small-scale neural networks, due to their fast convergence.
Previous RLS algorithms are unsuitable for training deep neural networks (DNNs) because of their high computational complexity and restrictive preconditions.
We propose three novel RLS optimization algorithms for training feedforward neural networks, convolutional neural networks and recurrent neural networks.
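For context, the classical RLS update for a linear model, the building block this line of work adapts to DNNs, can be sketched as follows. The forgetting factor `lam` and the initialization scale `delta` are conventional choices, not values from the paper.

```python
import numpy as np

class RLS:
    """Recursive least squares for a linear model y ~ w @ x.
    Each update folds in one (x, y) pair without re-solving the full
    least-squares problem, which is the source of the fast convergence."""
    def __init__(self, dim, lam=1.0, delta=100.0):
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)   # running inverse-covariance estimate
        self.lam = lam                 # forgetting factor (1.0 = no forgetting)

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)       # gain vector
        self.w += k * (y - self.w @ x)     # correct weights by prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam
```

On noise-free data from a true linear model, the weights converge to the true coefficients after a handful of updates per dimension.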
arXiv Detail & Related papers (2021-09-07T17:43:51Z)
- Fast Falsification of Neural Networks using Property Directed Testing [0.1529342790344802]
We propose a falsification algorithm for neural networks that directs the search for a counterexample.
Our algorithm uses a derivative-free sampling-based optimization method.
We show that our falsification procedure detects all the unsafe instances that other verification tools also report as unsafe.
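In its simplest form, such a falsifier samples candidate inputs and checks the safety property on each output. The sketch below deliberately simplifies the paper's directed, derivative-free optimization down to uniform random sampling over an input box.

```python
import numpy as np

def falsify(f, low, high, is_unsafe, n_samples=1000, seed=0):
    """Sampling-based falsification sketch: draw inputs from the box
    [low, high], run the model f, and return the first input whose output
    violates the safety property (None if no violation is found)."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        x = rng.uniform(low, high)
        if is_unsafe(f(x)):
            return x            # counterexample found: early exit
    return None                 # no counterexample within the budget
```

A returned input is a concrete counterexample by construction, which is why falsifiers can confirm unsafe instances quickly even when they cannot prove safety.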
arXiv Detail & Related papers (2021-04-26T09:16:27Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.