On the Trade-off Between Efficiency and Precision of Neural Abstraction
- URL: http://arxiv.org/abs/2307.15546v2
- Date: Mon, 2 Oct 2023 09:56:53 GMT
- Title: On the Trade-off Between Efficiency and Precision of Neural Abstraction
- Authors: Alec Edwards, Mirco Giacobbe, Alessandro Abate
- Abstract summary: Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions with alternative semantics, namely piecewise constant, piecewise affine, or nonlinear (sigmoidal) dynamics.
- Score: 62.046646433536104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models. They comprise a neural ODE and a certified upper bound on the error between the abstract neural network and the concrete dynamical model. So far, neural abstractions have exclusively been obtained as neural networks consisting entirely of ReLU activation functions, resulting in neural ODE models that have piecewise affine dynamics and can be equivalently interpreted as linear hybrid automata. In this work, we observe that the utility of an abstraction depends on its use: some scenarios might require coarse abstractions that are easier to analyse, whereas others might require more complex, refined abstractions. We therefore consider neural abstractions of alternative shapes, namely either piecewise constant or nonlinear non-polynomial (specifically, obtained via sigmoidal activations). We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics. Empirically, we demonstrate the trade-off that these different neural abstraction templates have vis-à-vis their precision and synthesis time, as well as the time required for their safety verification (done via reachability computation). We improve existing synthesis techniques to enable abstraction of higher-dimensional models, and additionally discuss the abstraction of complex neural ODEs to improve the efficiency of reachability analysis for these models.
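To make the synthesis recipe in the abstract concrete, below is a minimal, illustrative sketch in PyTorch, assuming a toy two-dimensional vector field; the network, domain, and training loop are placeholder choices. The paper's formal inductive synthesis certifies the error bound soundly (a CEGIS-style learner/verifier loop), whereas this sketch merely estimates the bound on a dense grid.

```python
# Minimal sketch of neural-abstraction synthesis, assuming PyTorch and a toy
# 2-D nonlinear system dx/dt = f(x). The paper certifies the error bound
# formally (a CEGIS-style learner/verifier loop); here the bound is only
# *estimated* on a dense grid, so this is illustrative, not sound.
import torch
import torch.nn as nn

def f(x: torch.Tensor) -> torch.Tensor:
    """Concrete nonlinear dynamics: a Van der Pol-style vector field."""
    x1, x2 = x[:, 0], x[:, 1]
    return torch.stack([x2, (1.0 - x1 ** 2) * x2 - x1], dim=1)

# Sigmoidal template: a nonlinear, non-polynomial neural ODE. Swapping
# nn.Sigmoid() for nn.ReLU() would give piecewise-affine dynamics instead.
net = nn.Sequential(nn.Linear(2, 16), nn.Sigmoid(), nn.Linear(16, 2))

bound = 3.0  # seek the abstraction on the box [-3, 3]^2
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):  # learner: fit the template to f on random samples
    x = (torch.rand(256, 2) * 2.0 - 1.0) * bound
    loss = (net(x) - f(x)).abs().max()  # sup-norm-style objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# Verifier surrogate: empirical sup-norm error over a fine grid.
g = torch.linspace(-bound, bound, 300)
grid = torch.cartesian_prod(g, g)
with torch.no_grad():
    eps = (net(grid) - f(grid)).abs().max().item()
print(f"abstraction: dx/dt in net(x) + [-{eps:.4f}, {eps:.4f}]^2")
```

The resulting abstraction is the differential inclusion dx/dt ∈ net(x) + [-eps, eps]^2, which over-approximates the concrete model only if eps is a certified, rather than empirical, bound.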
Related papers
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Extreme sparsification of physics-augmented neural networks for interpretable model discovery in mechanics [0.0]
We propose to train regularized physics-augmented neural network-based models utilizing a smoothed version of $L_0$-regularization; a minimal sketch of such a smoothed penalty appears after this list.
We show that the method can reliably obtain interpretable and trustworthy models for compressible and incompressible hyperelasticity, yield functions, and hardening models for elastoplasticity.
arXiv Detail & Related papers (2023-10-05T16:28:58Z)
- Trainability, Expressivity and Interpretability in Gated Neural ODEs [0.0]
We introduce a novel measure of expressivity which probes the capacity of a neural network to generate complex trajectories.
We show how reduced-dimensional gnODEs retain their modeling power while greatly improving interpretability.
We also demonstrate the benefit of gating in nODEs on several real-world tasks; a generic sketch of such gating appears after this list.
arXiv Detail & Related papers (2023-07-12T18:29:01Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models; a generic sketch of this blend appears after this list.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Neuro-Symbolic AI: An Emerging Class of AI Workloads and their Characterization [0.9949801888214526]
Neuro-symbolic artificial intelligence is a novel area of AI research.
We describe and analyze the performance characteristics of three recent neuro-symbolic models.
arXiv Detail & Related papers (2021-09-13T17:19:59Z)
- Modelling Neuronal Behaviour with Time Series Regression: Recurrent Neural Networks on C. Elegans Data [0.0]
We show how the nervous system of C. elegans can be modelled and simulated with data-driven models using different neural network architectures.
We show that GRU models with a hidden layer size of 4 units are able to reproduce the system's response to very different stimuli with high accuracy; a minimal sketch of such a model appears after this list.
arXiv Detail & Related papers (2021-07-01T10:39:30Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
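For the "Extreme sparsification of physics-augmented neural networks" entry above: below is a minimal, hypothetical sketch of a smoothed $L_0$-style penalty implemented with sigmoid gates. The gate parameterisation, class names, and coefficients are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch of a smoothed L0-style penalty via sigmoid gates, illustrating
# the kind of regularizer used for extreme sparsification. This is not the
# paper's exact formulation; all names and constants here are illustrative.
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    """Linear layer whose weights are multiplied by smooth, learnable gates.

    sigmoid(log_alpha / beta) tends toward 0 or 1 as training drives log_alpha
    away from zero; the sum of gate values is a differentiable surrogate of
    the L0 norm of the weight matrix.
    """
    def __init__(self, d_in, d_out, beta=0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.log_alpha = nn.Parameter(torch.zeros(d_out, d_in))
        self.beta = beta

    def gates(self):
        return torch.sigmoid(self.log_alpha / self.beta)

    def forward(self, x):
        return x @ (self.weight * self.gates()).T

    def l0_penalty(self):
        return self.gates().sum()  # surrogate count of active weights

# Usage: add lam * layer.l0_penalty() to the task loss to drive weights toward
# exactly zero, then prune gates below a threshold for an interpretable model.
layer = GatedLinear(8, 1)
x = torch.randn(32, 8)
loss = layer(x).pow(2).mean() + 1e-3 * layer.l0_penalty()
loss.backward()
```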
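For the "Trainability, Expressivity and Interpretability in Gated Neural ODEs" entry: one generic way to gate a neural ODE is to let a learned gate scale the flow, dx/dt = z(x) * (f(x) - x), a continuous-time analogue of a GRU update. The sketch below illustrates only this generic pattern; the paper's gnODE parameterisation may differ.

```python
# Illustrative gated neural ODE cell: dx/dt = z(x) * (f(x) - x). This shows the
# generic idea of gating in nODEs; the gnODE architecture of the paper may
# differ in its exact parameterisation.
import torch
import torch.nn as nn

class GatedODEFunc(nn.Module):
    def __init__(self, dim=2, width=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, width), nn.Tanh(), nn.Linear(width, dim))
        self.gate = nn.Sequential(nn.Linear(dim, width), nn.Tanh(), nn.Linear(width, dim))

    def forward(self, t, x):
        z = torch.sigmoid(self.gate(x))   # per-dimension gate in (0, 1)
        return z * (self.f(x) - x)        # gate scales the flow toward f(x)

# Simple fixed-step Euler rollout (a library like torchdiffeq would normally
# supply an adaptive solver).
func = GatedODEFunc()
x, dt = torch.randn(16, 2), 0.01
trajectory = [x]
for step in range(100):
    x = x + dt * func(step * dt, x)
    trajectory.append(x)
traj = torch.stack(trajectory)            # (101, 16, 2) integrated states
```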
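For the "EINNs: Epidemiologically-Informed Neural Networks" entry: a common way to blend mechanistic and data-driven modelling is a physics-informed loss that fits observations while penalising the residual of an epidemic ODE. The sketch below uses synthetic data and standard SIR dynamics as stand-ins; it illustrates the general pattern, not the EINN architecture itself.

```python
# Illustrative physics-informed loss in the spirit of EINNs: a network predicts
# S, I, R over time, is fit to (synthetic) case data, and is penalised for
# violating SIR dynamics. Parameters and data here are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 3), nn.Softplus())
beta, gamma = 0.3, 0.1                                # assumed SIR parameters

t = torch.linspace(0, 50, 200).unsqueeze(1).requires_grad_(True)
cases = torch.exp(-((t.detach() - 20) ** 2) / 50)     # synthetic observed infections

S, I, R = net(t).unbind(dim=1)
dS = torch.autograd.grad(S.sum(), t, create_graph=True)[0].squeeze(1)
dI = torch.autograd.grad(I.sum(), t, create_graph=True)[0].squeeze(1)
dR = torch.autograd.grad(R.sum(), t, create_graph=True)[0].squeeze(1)

data_loss = (I - cases.squeeze(1)).pow(2).mean()
ode_loss = ((dS + beta * S * I).pow(2)                # dS/dt = -beta*S*I
            + (dI - beta * S * I + gamma * I).pow(2)  # dI/dt = beta*S*I - gamma*I
            + (dR - gamma * I).pow(2)).mean()         # dR/dt = gamma*I
loss = data_loss + ode_loss
loss.backward()
```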
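For the "Modelling Neuronal Behaviour with Time Series Regression" entry: a minimal sketch of a 4-unit GRU regressor of the kind the summary describes, assuming PyTorch; the input/output dimensionality and training data here are placeholders, not the paper's C. elegans recordings.

```python
# Minimal sketch of a small GRU time-series regressor with a 4-unit hidden
# state, mapping stimulus traces to neural responses. Data are synthetic
# placeholders standing in for the paper's C. elegans recordings.
import torch
import torch.nn as nn

class GRURegressor(nn.Module):
    def __init__(self, n_inputs=1, n_hidden=4, n_outputs=1):
        super().__init__()
        self.gru = nn.GRU(n_inputs, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_outputs)

    def forward(self, stimulus):            # stimulus: (batch, time, n_inputs)
        h, _ = self.gru(stimulus)
        return self.readout(h)              # response: (batch, time, n_outputs)

model = GRURegressor()
stim = torch.randn(8, 200, 1)               # 8 synthetic stimulus traces
target = torch.sin(stim.cumsum(dim=1))      # placeholder "response" to fit
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    loss = nn.functional.mse_loss(model(stim), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```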