NEUROSPF: A tool for the Symbolic Analysis of Neural Networks
- URL: http://arxiv.org/abs/2103.00124v1
- Date: Sat, 27 Feb 2021 04:28:11 GMT
- Title: NEUROSPF: A tool for the Symbolic Analysis of Neural Networks
- Authors: Muhammad Usman, Yannic Noller, Corina Pasareanu, Youcheng Sun, Divya Gopinath
- Abstract summary: This paper presents NEUROSPF, a tool for the symbolic analysis of neural networks.
Given a trained neural network model, the tool extracts the architecture and model parameters and translates them into a Java representation.
NEUROSPF encodes specialized peer classes for parsing the model's parameters, thereby enabling efficient analysis.
- Score: 10.129874872336764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents NEUROSPF, a tool for the symbolic analysis of neural
networks. Given a trained neural network model, the tool extracts the
architecture and model parameters and translates them into a Java
representation that is amenable for analysis using the Symbolic PathFinder
symbolic execution tool. Notably, NEUROSPF encodes specialized peer classes for
parsing the model's parameters, thereby enabling efficient analysis. With
NEUROSPF the user has the flexibility to specify either the inputs or the
network internal parameters as symbolic, promoting the application of program
analysis and testing approaches from software engineering to the field of
machine learning. For instance, NEUROSPF can be used for coverage-based testing
and test generation, finding adversarial examples and also constraint-based
repair of neural networks, thus improving the reliability of neural networks
and of the applications that use them. Video URL: https://youtu.be/seal8fG78LI
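To make the workflow concrete, the following is a minimal, hypothetical sketch of the kind of plain-Java translation such a tool could produce for a tiny 2-2-1 ReLU network. The class name TinyNet, the method classify, and all weights are invented for illustration; this is not NEUROSPF's actual generated code. A symbolic executor such as Symbolic PathFinder can be configured to treat the arguments of a target method as symbolic, so each ReLU comparison becomes a branch in the collected path condition.

    // Hypothetical Java translation of a tiny trained 2-2-1 ReLU network.
    // The weights and biases are illustrative constants standing in for
    // parameters extracted from a trained model.
    public class TinyNet {

        static double relu(double x) {
            // Branch point: a symbolic executor forks here, adding
            // x > 0 or x <= 0 to the current path condition.
            return (x > 0.0) ? x : 0.0;
        }

        // With x0 and x1 treated as symbolic, each feasible path through
        // the ReLUs yields a constraint system over the inputs.
        public static int classify(double x0, double x1) {
            double h0 = relu(0.6 * x0 - 0.4 * x1 + 0.1);  // hidden neuron 0
            double h1 = relu(-0.3 * x0 + 0.8 * x1 - 0.2); // hidden neuron 1
            double out = 0.7 * h0 + 0.5 * h1 - 0.25;      // output neuron
            return (out > 0.0) ? 1 : 0;                   // decision threshold
        }

        public static void main(String[] args) {
            System.out.println(classify(1.0, 0.5)); // concrete sanity run: prints 1
        }
    }

Negating the final decision branch along a satisfiable path and solving the resulting constraints yields an input on the other side of the decision boundary, which is the essence of the constraint-based test generation and adversarial-example search described above.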
Related papers
- Simultaneous Weight and Architecture Optimization for Neural Networks [6.2241272327831485]
We introduce a novel neural network training framework that transforms the training process by learning the architecture and parameters simultaneously with gradient descent.
Central to our approach is a multi-scale encoder-decoder, in which the encoder embeds pairs of neural networks with similar functionalities close to each other.
Experiments demonstrate that our framework can discover sparse and compact neural networks while maintaining high performance.
arXiv Detail & Related papers (2024-10-10T19:57:36Z)
- Statistical tuning of artificial neural network [0.0]
This study introduces methods to enhance the understanding of neural networks, focusing specifically on models with a single hidden layer.
We propose statistical tests to assess the significance of input neurons and introduce algorithms for dimensionality reduction.
This research advances the field of Explainable Artificial Intelligence by presenting robust statistical frameworks for interpreting neural networks.
arXiv Detail & Related papers (2024-09-24T19:47:03Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architectures.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs).
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Deep Neural Networks and Neuro-Fuzzy Networks for Intellectual Analysis of Economic Systems [0.0]
We consider approaches for time series forecasting based on deep neural networks and neuro-fuzzy networks.
This paper also presents an overview of approaches for incorporating rule-based methodology into deep learning neural networks.
arXiv Detail & Related papers (2020-11-11T06:21:08Z)
- SOCRATES: Towards a Unified Platform for Neural Network Analysis [7.318255652722096]
We aim to build a unified framework for developing techniques to analyze neural networks.
We develop a platform called SOCRATES which supports a standardized format for a variety of neural network models.
Experiment results show that our platform can handle a wide range of network models and properties.
arXiv Detail & Related papers (2020-07-22T05:18:57Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
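As a rough, illustrative reading of the mean-field view in the last entry above (a toy sketch under invented assumptions, not the paper's construction): a wide hidden layer can be seen as a Monte Carlo estimate of an expectation over a distribution of weights, so its output stabilizes as the width grows.

    import java.util.Random;

    // Toy mean-field reading of one wide ReLU layer: the layer output
    // (1/N) * sum_i max(0, w_i . x) is a Monte Carlo estimate of
    // E_{w ~ rho}[max(0, w . x)], which stabilizes as the width N grows.
    public class MeanFieldToy {
        public static double wideLayer(double[] x, int width, long seed) {
            Random rng = new Random(seed);
            double sum = 0.0;
            for (int i = 0; i < width; i++) {
                double pre = 0.0;
                for (double xj : x) {
                    pre += rng.nextGaussian() * xj; // w_i drawn from rho = N(0, I)
                }
                sum += Math.max(0.0, pre);          // ReLU feature of this neuron
            }
            return sum / width;                     // empirical mean over neurons
        }

        public static void main(String[] args) {
            double[] x = {0.3, -0.7, 1.1};
            // The two outputs are close: both widths approximate the same
            // expectation over the weight measure, with shrinking variance.
            System.out.println(wideLayer(x, 1_000, 42));
            System.out.println(wideLayer(x, 100_000, 42));
        }
    }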
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.