Neural Abstractions
- URL: http://arxiv.org/abs/2301.11683v1
- Date: Fri, 27 Jan 2023 12:38:09 GMT
- Title: Neural Abstractions
- Authors: Alessandro Abate, Alec Edwards, Mirco Giacobbe
- Abstract summary: We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
- Score: 72.42530499990028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel method for the safety verification of nonlinear dynamical
models that uses neural networks to represent abstractions of their dynamics.
Neural networks have been used extensively before as approximators; in this
work, we go a step further and use them for the first time as abstractions.
For a given dynamical model, our method synthesises a neural network that
overapproximates its dynamics by ensuring an arbitrarily tight, formally
certified bound on the approximation error. For this purpose, we employ a
counterexample-guided inductive synthesis procedure. We show that this produces
a neural ODE with non-deterministic disturbances that constitutes a formal
abstraction of the concrete model under analysis. This guarantees a fundamental
property: if the abstract model is safe, i.e., free from any initialised
trajectory that reaches an undesirable state, then the concrete model is also
safe. By using neural ODEs with ReLU activation functions as abstractions, we
cast the safety verification problem for nonlinear dynamical models into that
of hybrid automata with affine dynamics, which we verify using SpaceEx. We
demonstrate that our approach performs comparably to the mature tool Flow* on
existing benchmark nonlinear models. We additionally demonstrate that it is
effective on models that do not exhibit local Lipschitz continuity, which are
out of reach of existing technologies.
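The counterexample-guided inductive synthesis loop described above can be sketched minimally. This is an illustrative toy, not the authors' implementation: a piecewise-linear interpolant stands in for the ReLU network, a hypothetical scalar dynamics `f` stands in for the concrete model, and an exhaustive grid check stands in for the sound verifier that certifies the error bound.

```python
# Toy CEGIS loop for a certified overapproximation (illustrative only):
# learner = interpolation through samples, verifier = dense grid check.

def f(x):
    # hypothetical concrete nonlinear dynamics
    return x - x**3

def surrogate(samples, x):
    # piecewise-linear interpolant through sampled points,
    # standing in for the trained ReLU network
    pts = sorted(samples)
    if x <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (x - x0) / (x1 - x0) * (y1 - y0)
    return pts[-1][1]

def cegis(eps=1e-2, lo=-1.0, hi=1.0, grid=1000):
    samples = [(lo, f(lo)), (hi, f(hi))]
    while True:
        # "verifier": exhaustively check the error bound on a fine grid
        worst_err, worst_x = max(
            (abs(f(lo + i * (hi - lo) / grid)
                 - surrogate(samples, lo + i * (hi - lo) / grid)),
             lo + i * (hi - lo) / grid)
            for i in range(grid + 1))
        if worst_err <= eps:
            # certified on the grid: |f(x) - N(x)| <= eps, so
            # x' = N(x) + d with |d| <= eps overapproximates f
            return samples, worst_err
        samples.append((worst_x, f(worst_x)))  # counterexample, refine

samples, err = cegis()
print(f"abstraction uses {len(samples)} samples, max error {err:.4f}")
```

On termination, the abstract model is the surrogate plus a non-deterministic disturbance bounded by `eps`, mirroring the paper's neural ODE with bounded disturbance.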
Related papers
- The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning [37.387280102209274]
Offline reinforcement learning aims to enable agents to be trained from pre-collected datasets; however, this comes with the added challenge of estimating the value of behavior not covered in the dataset.
Model-based methods offer a solution by allowing agents to collect additional synthetic data via rollouts in a learned dynamics model.
However, existing model-based methods fail completely even when the learned dynamics model is replaced by the true, error-free dynamics, pointing to an issue beyond model error: the edge-of-reach problem.
We propose Reach-Aware Value Learning (RAVL), a simple and robust method that directly addresses the edge-of-reach problem.
arXiv Detail & Related papers (2024-02-19T20:38:00Z) - Symmetry-regularized neural ordinary differential equations [0.0]
This paper introduces new conservation relations in Neural ODEs using Lie symmetries in both the hidden state dynamics and the back propagation dynamics.
These conservation laws are then incorporated into the loss function as additional regularization terms, potentially enhancing the physical interpretability and generalizability of the model.
New loss functions are constructed from these conservation relations, demonstrating the applicability of symmetry-regularized Neural ODEs in typical modeling tasks.
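A minimal sketch of how a conservation relation can enter the loss as a regularization term, assuming a hypothetical harmonic-oscillator system whose energy is conserved along exact trajectories (the paper derives such invariants from Lie symmetries; here the invariant is simply posited):

```python
# Sketch of a symmetry-regularized loss (hypothetical example):
# fit a predicted trajectory to data while penalizing drift in a
# conserved quantity (energy of a unit harmonic oscillator).

def energy(x, v):
    return 0.5 * (x * x + v * v)  # conserved along exact dynamics

def total_loss(pred, data, lam=0.1):
    # pred/data: lists of (position, velocity) states along a trajectory
    data_loss = sum((px - dx) ** 2 + (pv - dv) ** 2
                    for (px, pv), (dx, dv) in zip(pred, data)) / len(data)
    e0 = energy(*pred[0])
    cons_loss = sum((energy(x, v) - e0) ** 2 for x, v in pred) / len(pred)
    return data_loss + lam * cons_loss

data = [(1.0, 0.0), (0.995, -0.0998), (0.980, -0.199)]   # near-exact states
drifting = [(1.0, 0.0), (0.99, -0.12), (0.95, -0.25)]    # energy drifts
print(total_loss(data, data))      # near zero: fits data, conserves energy
print(total_loss(drifting, data))  # penalized on both terms
```

The regularizer rewards trajectories that respect the invariant even where data is sparse, which is the intuition behind the enhanced generalizability claim.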
arXiv Detail & Related papers (2023-11-28T09:27:44Z) - On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z) - Learning solution of nonlinear constitutive material models using
physics-informed neural networks: COMM-PINN [0.0]
We apply physics-informed neural networks to solve the constitutive relations for nonlinear, path-dependent material behavior.
One advantage of this work is that it bypasses the repetitive Newton iterations needed to solve nonlinear equations in complex material models.
arXiv Detail & Related papers (2023-04-10T19:58:49Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
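The fit-through-a-differentiable-surrogate idea can be sketched as follows. Everything here is a stand-in: a hypothetical closed-form surrogate replaces the trained network, a single coupling `J` replaces the model Hamiltonian's parameters, and the gradient is written out by hand rather than obtained by automatic differentiation.

```python
# Illustrative sketch (not the paper's model): a differentiable
# surrogate s(q; J) replaces the expensive simulation, and an unknown
# coupling J is recovered from "experimental" data by gradient descent.
import math

def surrogate(q, J):
    # differentiable stand-in for simulated scattering intensity
    return math.exp(-q * q) * (1.0 + J * math.cos(q))

def d_surrogate_dJ(q, J):
    # hand-written derivative; autodiff would supply this in practice
    return math.exp(-q * q) * math.cos(q)

qs = [0.1 * i for i in range(20)]
J_true = 0.7
data = [surrogate(q, J_true) for q in qs]  # synthetic "experiment"

J = 0.0
for _ in range(200):  # gradient descent on the squared residual
    grad = sum(2 * (surrogate(q, J) - y) * d_surrogate_dJ(q, J)
               for q, y in zip(qs, data))
    J -= 0.05 * grad
print(f"recovered J = {J:.3f}")
```

Because the surrogate is cheap to evaluate and differentiate, this fit can run in real time once the surrogate is trained, which is the point of building the differentiable model only once.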
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Bridging the Model-Reality Gap with Lipschitz Network Adaptation [22.499090318313662]
As robots venture into the real world, they are subject to unmodeled dynamics and disturbances.
Traditional model-based control approaches have proven successful in relatively static and known operating environments.
We propose a method that bridges the model-reality gap and enables the application of model-based approaches even if dynamic uncertainties are present.
arXiv Detail & Related papers (2021-12-07T15:12:49Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Automated and Formal Synthesis of Neural Barrier Certificates for
Dynamical Models [70.70479436076238]
We introduce an automated, formal, counterexample-based approach to synthesise Barrier Certificates (BCs).
The approach is underpinned by an inductive framework, which manipulates a candidate BC structured as a neural network, and a sound verifier, which either certifies the candidate's validity or generates counter-examples.
The outcomes show that we can synthesise sound BCs up to two orders of magnitude faster, with a particularly stark speedup in the verification engine.
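The inductive learner/verifier split can be sketched with a toy candidate check. This is not the paper's sound verifier: a hypothetical linear system and quadratic candidate BC are posited, and a grid search stands in for the certification step that would either validate the candidate or return a counterexample to the learner.

```python
# Toy verifier for a candidate barrier certificate (illustrative only):
# valid if B <= 0 on the initial set, B > 0 on the unsafe set, and
# B does not increase along the flow near its zero level set.

def f(x, y):               # hypothetical dynamics: dx/dt, dy/dt
    return -x, -y

def B(x, y):               # candidate BC, as a learner might propose
    return x * x + y * y - 1.0

def lie_derivative(x, y):  # dB/dt = grad(B) . f
    fx, fy = f(x, y)
    return 2 * x * fx + 2 * y * fy

def find_counterexample(grid=50):
    pts = [(-2 + 4 * i / grid, -2 + 4 * j / grid)
           for i in range(grid + 1) for j in range(grid + 1)]
    for x, y in pts:
        r2 = x * x + y * y
        if r2 <= 0.25 and B(x, y) > 0:        # initial set: disc r <= 0.5
            return ("init", x, y)
        if r2 >= 2.25 and B(x, y) <= 0:       # unsafe set: r >= 1.5
            return ("unsafe", x, y)
        if abs(B(x, y)) < 0.1 and lie_derivative(x, y) > 0:
            return ("flow", x, y)             # B increases near level set
    return None  # no counterexample found on this (unsound) grid check

print(find_counterexample())
```

In the paper's framework, a returned counterexample would be fed back to retrain the neural candidate, while a sound verifier (rather than a grid) would certify validity.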
arXiv Detail & Related papers (2020-07-07T07:39:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.