A Transition System Abstraction Framework for Neural Network Dynamical System Models
- URL: http://arxiv.org/abs/2402.11739v1
- Date: Sun, 18 Feb 2024 23:49:18 GMT
- Title: A Transition System Abstraction Framework for Neural Network Dynamical System Models
- Authors: Yejiang Yang, Zihao Mo, Hoang-Dung Tran, and Weiming Xiang
- Abstract summary: This paper proposes a transition system abstraction framework for neural network dynamical system models.
The framework is able to abstract a data-driven neural network model into a transition system, making the neural network model interpretable.
- Score: 2.414910571475855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a transition system abstraction framework for neural network dynamical system models to enhance model interpretability, with applications to complex dynamical systems such as human behavior learning and verification. First, the working zone is segmented into multiple localized partitions using the data-driven Maximum Entropy (ME) partitioning method. Then, the transition matrix is obtained from a set-valued reachability analysis of the neural network. Finally, applications to human handwriting dynamics learning and verification validate the proposed abstraction framework and demonstrate its advantage in enhancing the interpretability of the black-box model: the framework abstracts a data-driven neural network model into a transition system, making the model interpretable by verifying specifications described in Computation Tree Logic (CTL).
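To make the three steps concrete, below is a minimal, self-contained Python sketch of the pipeline the abstract describes. It is an illustration under stated assumptions, not the authors' implementation: a uniform 1-D grid stands in for ME partitioning, Monte Carlo sampling of a toy network f_theta stands in for set-valued reachability analysis, and the EF check covers only one small fragment of CTL. Every name in the snippet is hypothetical.

```python
# Illustrative sketch only: abstracting a neural network dynamical
# system into a finite transition system. The grid partition, the
# sampling-based transition estimate, and f_theta itself are all
# assumptions standing in for the paper's ME partitioning and
# formal set-valued reachability analysis.
import numpy as np

def f_theta(x):
    # Stand-in for a trained neural network model x_{k+1} = f_theta(x_k);
    # a simple nonlinear map so the sketch runs end to end.
    return 0.9 * np.tanh(2.0 * x)

def partition_1d(lo, hi, n_cells):
    # Uniform grid partition of the working zone [lo, hi]. The paper's
    # Maximum Entropy partitioning would instead place cell boundaries
    # according to the data distribution.
    return np.linspace(lo, hi, n_cells + 1)

def cell_index(x, edges):
    # Map a state to the index of the partition cell containing it.
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2))

def transition_matrix(edges, samples_per_cell=200, seed=0):
    # Estimate which cells are reachable from each cell in one step by
    # pushing sampled states through f_theta. Formal set-valued
    # reachability (as in the paper) would over-approximate instead.
    rng = np.random.default_rng(seed)
    n = len(edges) - 1
    T = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for x in rng.uniform(edges[i], edges[i + 1], samples_per_cell):
            T[i, cell_index(f_theta(x), edges)] = True
    return T

def check_EF(T, targets):
    # Minimal CTL check: EF(target) holds in a cell if some path from it
    # reaches a target cell; computed as backward reachability over T.
    sat = set(targets)
    changed = True
    while changed:
        changed = False
        for i in range(T.shape[0]):
            if i not in sat and any(T[i, j] for j in sat):
                sat.add(i)
                changed = True
    return sat

edges = partition_1d(-1.0, 1.0, 10)
T = transition_matrix(edges)
print("cells satisfying EF(cell 5):", sorted(check_EF(T, {5})))
```

The boolean matrix T plays the role of the abstraction's transition relation; richer CTL specifications over the same structure would be handed to an off-the-shelf model checker.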
Related papers
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Interpretable learning of effective dynamics for multiscale systems [5.754251195342313]
We propose a novel framework of Interpretable Learning Effective Dynamics (iLED).
iLED offers accuracy comparable to state-of-the-art recurrent neural network-based approaches.
Our results show that the iLED framework can generate accurate predictions and obtain interpretable dynamics.
arXiv Detail & Related papers (2023-09-11T20:29:38Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have recently been introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Analyzing Populations of Neural Networks via Dynamical Model Embedding [10.455447557943463]
A core challenge in the interpretation of deep neural networks is identifying commonalities between the underlying algorithms implemented by distinct networks trained for the same task.
Motivated by this problem, we introduce DYNAMO, an algorithm that constructs a low-dimensional manifold in which each point corresponds to a neural network model, and two points are nearby if the corresponding neural networks enact similar high-level computational processes.
DYNAMO takes as input a collection of pre-trained neural networks and outputs a meta-model that emulates the dynamics of the hidden states as well as the outputs of any model in the collection.
arXiv Detail & Related papers (2023-02-27T19:00:05Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Physics-guided Deep Markov Models for Learning Nonlinear Dynamical Systems with Uncertainty [6.151348127802708]
We propose a physics-guided framework, termed the Physics-guided Deep Markov Model (PgDMM).
The proposed framework takes advantage of the expressive power of deep learning, while retaining the driving physics of the dynamical system.
arXiv Detail & Related papers (2021-10-16T16:35:12Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns these basis functions via supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Constrained Block Nonlinear Neural Dynamical Models [1.3163098563588727]
Neural network modules conditioned by known priors can be effectively trained and combined to represent systems with nonlinear dynamics.
The proposed method consists of neural network blocks that represent input, state, and output dynamics with constraints placed on the network weights and system variables.
We evaluate the performance of the proposed architecture and training methods on system identification tasks for three nonlinear systems.
arXiv Detail & Related papers (2021-01-06T04:27:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.