Reachability Analysis of a General Class of Neural Ordinary Differential
Equations
- URL: http://arxiv.org/abs/2207.06531v1
- Date: Wed, 13 Jul 2022 22:05:52 GMT
- Title: Reachability Analysis of a General Class of Neural Ordinary Differential
Equations
- Authors: Diego Manzanas Lopez, Patrick Musau, Nathaniel Hamilton, Taylor T.
Johnson
- Abstract summary: Continuous deep learning models, referred to as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the last several years.
Despite their burgeoning impact, there is a lack of formal analysis techniques for these systems.
We introduce a novel reachability framework that allows for the formal analysis of their behavior.
- Score: 7.774796410415387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continuous deep learning models, referred to as Neural Ordinary Differential
Equations (Neural ODEs), have received considerable attention over the last
several years. Despite their burgeoning impact, there is a lack of formal
analysis techniques for these systems. In this paper, we consider a general
class of neural ODEs with varying architectures and layers, and introduce a
novel reachability framework that allows for the formal analysis of their
behavior. The methods developed for the reachability analysis of neural ODEs
are implemented in a new tool called NNVODE. Specifically, our work extends an
existing neural network verification tool to support neural ODEs. We
demonstrate the capabilities and efficacy of our methods on a set of benchmarks
that include neural ODEs used for classification and in control and dynamical
systems. Where possible, we also compare our approach against existing software
tools from the continuous-time systems reachability literature.
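To make the flavor of this analysis concrete, the following is a minimal sketch (not NNVODE itself) of interval-bound reachability for a toy neural ODE: an input box is propagated through explicit Euler steps, with the right-hand-side network bounded by interval arithmetic. The architecture, weights, and step size are illustrative assumptions.

```python
import numpy as np

# Illustrative weights for a one-hidden-layer tanh network defining x' = f(x).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)) * 0.5, np.zeros(2)

def affine_bounds(W, b, lo, hi):
    """Sound interval bounds of W @ x + b for x in [lo, hi]."""
    c, r = (lo + hi) / 2, (hi - lo) / 2
    center, radius = W @ c + b, np.abs(W) @ r
    return center - radius, center + radius

def f_bounds(lo, hi):
    """Interval enclosure of the neural-ODE right-hand side f(x)."""
    l1, u1 = affine_bounds(W1, b1, lo, hi)
    l1, u1 = np.tanh(l1), np.tanh(u1)        # tanh is monotone: apply to both ends
    return affine_bounds(W2, b2, l1, u1)

# Propagate an input set through explicit-Euler steps of x' = f(x).
lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])   # initial set
h = 0.01
for _ in range(100):
    fl, fu = f_bounds(lo, hi)
    lo, hi = lo + h * fl, hi + h * fu        # sound, but conservative, enclosure
print(lo, hi)                                # over-approximate reachable box at t = 1
```

Tools like NNVODE rely on tighter set representations (e.g., zonotopes or star sets) and validated integration schemes; plain interval boxes such as these grow conservative quickly.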
Related papers
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
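A common fusion pattern covered by such tutorials is replacing the analytically derived Kalman gain with a learned module, in the spirit of KalmanNet-style designs. Below is a hedged sketch of that idea; the linear state-space model, network shape, and gain parameterization are illustrative assumptions, not the article's prescribed design.

```python
import torch
import torch.nn as nn

class LearnedGainKF(nn.Module):
    """Kalman-style filter whose gain is produced by a small neural network."""
    def __init__(self, dim_x, dim_y):
        super().__init__()
        self.F = torch.eye(dim_x)            # illustrative state-transition model
        self.H = torch.eye(dim_y, dim_x)     # illustrative observation model
        self.gain_net = nn.Sequential(       # maps the innovation to a gain matrix
            nn.Linear(dim_y, 32), nn.ReLU(),
            nn.Linear(32, dim_x * dim_y),
        )
        self.dim_x, self.dim_y = dim_x, dim_y

    def forward(self, ys, x0):
        x, estimates = x0, []
        for y in ys:                          # iterate over the measurement sequence
            x_pred = self.F @ x               # predict step (purely model-based)
            innov = y - self.H @ x_pred       # innovation
            K = self.gain_net(innov).view(self.dim_x, self.dim_y)
            x = x_pred + K @ innov            # update step with the learned gain
            estimates.append(x)
        return torch.stack(estimates)

kf = LearnedGainKF(dim_x=4, dim_y=2)
xs = kf([torch.randn(2) for _ in range(50)], x0=torch.zeros(4))
```

Trained end-to-end on state-estimation error, the gain network can compensate for misspecified noise statistics that a hand-tuned KF would need to be given explicitly.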
- A Mathematical Framework, a Taxonomy of Modeling Paradigms, and a Suite of Learning Techniques for Neural-Symbolic Systems [22.42431063362667]
We introduce Neural-Symbolic Energy-Based Models (NeSy-EBMs), a unifying mathematical framework for discriminative and generative modeling.
We utilize NeSy-EBMs to develop a taxonomy of modeling paradigms focusing on a system's neural-symbolic interface and reasoning capabilities.
We also present Neural Probabilistic Soft Logic (NeuPSL), an open-source NeSy-EBM library designed for scalability and expressivity.
arXiv Detail & Related papers (2024-07-12T21:26:21Z)
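The NeSy-EBM interface above can be caricatured in a few lines: a candidate output is scored by an energy that combines a neural potential and a symbolic potential, and inference minimizes that energy. The specific potentials and weighting below are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def neural_potential(scores, y):
    """Energy contribution from the neural model: low when y is well-supported."""
    return -np.log(scores[y])

def symbolic_potential(y, constraint):
    """Energy contribution from a symbolic rule: zero when the rule is satisfied."""
    return 0.0 if constraint(y) else 1.0

def nesy_energy(scores, y, constraint, lam=2.0):
    # Total energy: neural evidence plus a weighted symbolic violation
    # (lam and the unit penalty are illustrative choices).
    return neural_potential(scores, y) + lam * symbolic_potential(y, constraint)

def predict(scores, labels, constraint):
    # MAP inference: pick the label minimizing the joint energy.
    return min(labels, key=lambda y: nesy_energy(scores, y, constraint))

# The net slightly prefers label 1, but a rule forbids it, so label 0 wins.
print(predict(np.array([0.45, 0.55]), [0, 1], constraint=lambda y: y != 1))
```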
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
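The reduction behind NeuRLP, solving a linear ODE as a linear program, can be illustrated with a toy relaxation: discretize x' = a*x on an Euler grid, allow a slack on each step residual, and minimize the total slack. This scipy sketch is only a stand-in for the paper's solver.

```python
import numpy as np
from scipy.optimize import linprog

a, h, T, x0 = -1.0, 0.1, 20, 1.0      # toy linear ODE x' = a*x on an Euler grid
n = T + 1                             # decision variables: x_0..x_T, then T slacks
c = np.concatenate([np.zeros(n), np.ones(T)])   # objective: minimize total slack

A_ub, b_ub = [], []
for t in range(T):
    # Euler residual r_t = x_{t+1} - (1 + h*a) * x_t, relaxed to |r_t| <= s_t.
    row = np.zeros(n + T)
    row[t + 1], row[t] = 1.0, -(1.0 + h * a)
    s = np.zeros(n + T)
    s[n + t] = 1.0
    A_ub += [row - s, -row - s]
    b_ub += [0.0, 0.0]

A_eq = np.zeros((1, n + T)); A_eq[0, 0] = 1.0   # pin the initial condition x_0
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[x0],
              bounds=[(None, None)] * n + [(0, None)] * T)
x = res.x[:n]   # recovers the Euler solution x0 * (1 + h*a)**t with zero slack
```

Because the dynamics enter only through linear constraints, the same LP view extends naturally to treating the coefficients as unknowns, which is the kind of lever a mechanism-learning block can exploit.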
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
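For context, the Feature Visualizations being manipulated are typically produced by activation maximization: gradient-ascending an input until a chosen neuron fires strongly. A minimal sketch of that baseline procedure follows; the model and neuron choice are placeholders, and the paper's method alters the model so that this optimization yields a misleading image while predictions are largely preserved.

```python
import torch
import torch.nn as nn

def feature_visualization(model, neuron_index, steps=256, lr=0.05,
                          shape=(1, 3, 64, 64)):
    """Activation maximization: synthesize an input that excites one neuron."""
    x = torch.randn(shape, requires_grad=True)    # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        act = model(x).flatten()[neuron_index]    # target neuron's activation
        (-act).backward()                         # ascend the activation
        opt.step()
    return x.detach()

# Placeholder model; the LazyLinear layer is initialized by one warm-up call.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                      nn.Flatten(), nn.LazyLinear(10))
model(torch.randn(1, 3, 64, 64))
viz = feature_visualization(model, neuron_index=0)
```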
- Embedding Capabilities of Neural ODEs [0.0]
We study input-output relations of neural ODEs using dynamical systems theory.
We prove several results about the exact embedding of maps in different neural ODE architectures in low and high dimension.
arXiv Detail & Related papers (2023-08-02T15:16:34Z)
- Standalone Neural ODEs with Sensitivity Analysis [5.565364597145569]
This paper presents a continuous-depth neural ODE model capable of describing a full deep neural network.
We present a general formulation of the neural sensitivity problem and show how it is used in nonlinear conjugate gradient (NCG) training.
Our evaluations demonstrate that our novel formulations lead to increased robustness and performance as compared to ResNet models.
arXiv Detail & Related papers (2022-05-27T12:16:53Z)
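The sensitivity machinery referenced above can be sketched with the forward sensitivity equations: integrate S' = df/dtheta + (df/dx) * S alongside the state to obtain d x(T)/dtheta. The scalar ODE x' = theta * tanh(x) below is an illustrative stand-in, not the paper's model.

```python
import numpy as np

def f(x, theta):                 # toy right-hand side: x' = theta * tanh(x)
    return theta * np.tanh(x)

def dfdx(x, theta):
    return theta * (1.0 - np.tanh(x) ** 2)

def dfdtheta(x, theta):
    return np.tanh(x)

def state_and_sensitivity(x0, theta, h=0.01, steps=500):
    """Euler-integrate the state x and its parameter sensitivity S = dx/dtheta."""
    x, S = x0, 0.0               # S(0) = 0: the initial state ignores theta
    for _ in range(steps):
        # Forward sensitivity equation: S' = df/dtheta + (df/dx) * S.
        S += h * (dfdtheta(x, theta) + dfdx(x, theta) * S)
        x += h * f(x, theta)
    return x, S

x_T, S_T = state_and_sensitivity(x0=0.5, theta=1.2)
print(x_T, S_T)                  # S_T approximates d x(T) / d theta
```

Gradients of this kind are the raw material an NCG-style trainer consumes in place of backpropagation through a discrete layer stack.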
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
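The epidemiological grounding in models like EINNs typically enters as an ODE-residual penalty: the network is trained not only to fit case data but also to satisfy mechanistic dynamics such as an SIR model. A hedged sketch of that residual term, with illustrative rates and architecture rather than the paper's exact formulation:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 3), nn.Softplus())
beta, gamma = 0.3, 0.1                        # illustrative SIR rates

def sir_residual_loss(t):
    """Penalize deviation from SIR dynamics: S' = -bSI, I' = bSI - gI, R' = gI."""
    t = t.requires_grad_(True)
    S, I, R = net(t.unsqueeze(-1)).unbind(-1)
    dS, dI, dR = (torch.autograd.grad(u.sum(), t, create_graph=True)[0]
                  for u in (S, I, R))
    res = ((dS + beta * S * I) ** 2
           + (dI - beta * S * I + gamma * I) ** 2
           + (dR - gamma * I) ** 2)
    return res.mean()

t_colloc = torch.rand(128)                    # collocation times in [0, 1]
loss = sir_residual_loss(t_colloc)            # added to a data-fitting term
loss.backward()
```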
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
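BAIT's Fisher-embedding idea can be caricatured as optimal experimental design over gradient embeddings: represent each unlabeled point by a gradient vector and greedily build the batch that best conditions the accumulated Fisher information. The D-optimal (log-det) proxy below is a simplification, not BAIT's exact trace objective.

```python
import numpy as np

def greedy_fisher_selection(embeddings, batch_size, lam=1e-2):
    """Greedily pick points whose embeddings most improve the Fisher matrix.

    Simplified D-optimal proxy for illustration only.
    embeddings: (n, d) array of per-sample gradient embeddings.
    """
    n, d = embeddings.shape
    A = lam * np.eye(d)                 # regularized Fisher information so far
    chosen = []
    for _ in range(batch_size):
        best, best_gain = -1, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            g = embeddings[i]
            # Log-det gain of adding g g^T (matrix determinant lemma).
            gain = np.log(1.0 + g @ np.linalg.solve(A, g))
            if gain > best_gain:
                best, best_gain = i, gain
        A += np.outer(embeddings[best], embeddings[best])
        chosen.append(best)
    return chosen

picked = greedy_fisher_selection(np.random.randn(200, 16), batch_size=8)
```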
- On the application of Physically-Guided Neural Networks with Internal Variables to Continuum Problems [0.0]
We present Physically-Guided Neural Networks with Internal Variables (PGNNIV), in which universal physical laws are used as constraints in the neural network so that some neuron values can be interpreted as internal state variables of the system.
This endows the network with unraveling capacity, as well as better predictive properties such as faster convergence, reduced data needs, and additional noise filtering.
We extend this new methodology to continuum physical problems, showing again its predictive and explanatory capacities when only using measurable values in the training set.
arXiv Detail & Related papers (2020-11-23T13:06:52Z)
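The PGNNIV mechanism is a penalty that ties designated internal neurons to a physical law, so hidden units acquire the meaning of state variables. A hedged sketch: a network predicting temperature whose internal "flux" head is constrained to obey Fourier's law q = -k dT/dx (the layout and the chosen law are illustrative, not the paper's continuum problems).

```python
import torch
import torch.nn as nn

k = 0.5                                      # illustrative thermal conductivity
trunk = nn.Sequential(nn.Linear(1, 32), nn.Tanh())
head_T = nn.Linear(32, 1)                    # observable output: temperature
head_q = nn.Linear(32, 1)                    # internal variable: heat flux

def pgnniv_penalty(x):
    """Constrain the internal flux neurons to obey Fourier's law q = -k dT/dx."""
    x = x.requires_grad_(True)
    h = trunk(x)
    T, q = head_T(h), head_q(h)
    dTdx = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    return ((q + k * dTdx) ** 2).mean()      # zero when the law holds exactly

x = torch.rand(64, 1)
penalty = pgnniv_penalty(x)                  # added to a data loss on T only
penalty.backward()
```

Because only T needs labels, q is recovered as a by-product, which is the "unraveling" of unmeasured internal state the summary refers to.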
- DyNODE: Neural Ordinary Differential Equations for Dynamics Modeling in Continuous Control [0.0]
We present a novel approach that captures the underlying dynamics of a system by incorporating control in a neural ordinary differential equation framework.
Results indicate that a simple DyNODE architecture when combined with an actor-critic reinforcement learning algorithm outperforms canonical neural networks.
arXiv Detail & Related papers (2020-09-09T12:56:58Z)
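The core of a DyNODE-style model is a vector field that takes the control as an extra input, x' = f_theta(x, u), rolled out by a numerical integrator. A minimal sketch under illustrative sizes and a fixed-step Euler integrator (the paper's training loop and actor-critic coupling are omitted):

```python
import torch
import torch.nn as nn

class ControlledNeuralODE(nn.Module):
    """Dynamics model x' = f_theta(x, u): a neural ODE with control as input."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.f = nn.Sequential(              # illustrative sizes throughout
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def rollout(self, x0, actions, dt=0.05):
        """Euler integration of the learned dynamics under a control sequence."""
        x, traj = x0, []
        for u in actions:
            x = x + dt * self.f(torch.cat([x, u], dim=-1))
            traj.append(x)
        return torch.stack(traj)

model = ControlledNeuralODE(state_dim=4, action_dim=1)
traj = model.rollout(torch.zeros(4), actions=torch.randn(10, 1).unbind(0))
```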
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
We provide the first tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
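The min-max formulation can be sketched as simultaneous gradient descent-ascent between a primal network (the structural function) and an adversarial test-function network enforcing the moment condition. Everything below (data, sizes, rates, and the quadratic regularizer) is an illustrative stand-in for the paper's estimator.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # structural fn
g = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # adversary
opt_f = torch.optim.SGD(f.parameters(), lr=1e-2)
opt_g = torch.optim.SGD(g.parameters(), lr=1e-2)
x, y = torch.randn(256, 1), torch.randn(256, 1)   # placeholder observations

for _ in range(200):
    # Moment condition E[(y - f(x)) g(x)] = 0, played as a min-max game:
    # f minimizes the regularized payoff while g maximizes it.
    payoff = ((y - f(x)) * g(x)).mean() - 0.5 * (g(x) ** 2).mean()
    opt_f.zero_grad(); opt_g.zero_grad()
    payoff.backward()
    for p in g.parameters():
        p.grad.neg_()             # flip sign so g performs gradient ascent
    opt_f.step(); opt_g.step()
```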
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and accepts no responsibility for any consequences of its use.