Approximate Bisimulation Relations for Neural Networks and Application
to Assured Neural Network Compression
- URL: http://arxiv.org/abs/2202.01214v1
- Date: Wed, 2 Feb 2022 16:21:19 GMT
- Title: Approximate Bisimulation Relations for Neural Networks and Application
to Assured Neural Network Compression
- Authors: Weiming Xiang, Zhongzhu Shao
- Abstract summary: We propose a concept of approximate bisimulation relation for feedforward neural networks.
A novel neural network merging method is developed to compute the approximate bisimulation error between two neural networks.
- Score: 3.0839245814393728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a concept of approximate bisimulation relation for
feedforward neural networks. In the framework of approximate bisimulation
relation, a novel neural network merging method is developed to compute the
approximate bisimulation error between two neural networks based on
reachability analysis of neural networks. The developed method is able to
quantitatively measure the distance between the outputs of two neural networks
with the same inputs. Then, we apply the approximate bisimulation relation
results to perform neural network model reduction and to compute the
compression precision, i.e., assured neural network compression. Finally, using
the assured neural network compression, we accelerate the verification of the
ACAS Xu neural networks to illustrate the effectiveness and advantages of the
proposed approximate bisimulation approach.
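As a rough illustration of what the approximate bisimulation error measures, the sketch below estimates the worst-case output distance between two feedforward ReLU networks by sampling a box input set (all names are hypothetical, and a network here is simply a pair of per-layer weight and bias lists). The paper itself derives a sound upper bound via reachability analysis of a merged network; sampling can only produce a lower estimate.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of a ReLU feedforward network (linear output layer)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, W @ h + b)
    return weights[-1] @ h + biases[-1]

def sampled_bisimulation_error(net1, net2, lo, hi, n_samples=10_000, seed=0):
    """Monte-Carlo *lower* estimate of the approximate bisimulation error
    max_{x in [lo, hi]} ||f1(x) - f2(x)||_inf over a box input set.
    (The paper instead bounds this soundly from above via reachability
    analysis of a merged network.)"""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    worst = 0.0
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        d = np.abs(mlp_forward(x, *net1) - mlp_forward(x, *net2)).max()
        worst = max(worst, d)
    return worst
```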
Related papers
- Stochastic Gradient Descent for Two-layer Neural Networks [2.0349026069285423]
This paper presents a study on the convergence rates of the stochastic gradient descent (SGD) algorithm when applied to overparameterized two-layer neural networks.
Our approach combines the Neural Tangent Kernel (NTK) approximation with convergence analysis in the Reproducing Kernel Hilbert Space (RKHS) generated by the NTK.
Our research framework enables us to explore the intricate interplay between kernel methods and optimization processes, shedding light on the dynamics and convergence properties of neural networks.
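For intuition, the snippet below computes the empirical NTK of a finite-width two-layer ReLU network f(x) = a^T relu(Wx)/sqrt(m); in the infinite-width limit this kernel converges to the deterministic NTK whose RKHS the convergence analysis lives in. The function name and scaling convention are assumptions for illustration, not the paper's code.

```python
import numpy as np

def empirical_ntk(X, W, a):
    """Empirical neural tangent kernel of a two-layer ReLU network
    f(x) = (1/sqrt(m)) * a^T relu(W x), gradients taken w.r.t. both
    layers.  K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>."""
    m = W.shape[0]
    pre = X @ W.T                        # (n, m) pre-activations w_r^T x_i
    act = np.maximum(0.0, pre)           # relu(W x_i)
    ind = (pre > 0).astype(float)        # relu'(W x_i)
    # first-layer part: sum_r a_r^2 ind_ir ind_jr (x_i . x_j) / m
    K_w = ((ind * a) @ (ind * a).T) * (X @ X.T) / m
    # second-layer part: sum_r act_ir act_jr / m
    K_a = (act @ act.T) / m
    return K_w + K_a
```

In the NTK regime, SGD training of the network behaves like kernel regression with this (nearly constant) kernel, which is what lets RKHS tools bound the convergence rate.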
arXiv Detail & Related papers (2024-07-10T13:58:57Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
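A minimal sketch of the computational-graph encoding this describes, assuming the simplest case of an MLP: neurons become nodes carrying their biases, weights become edge attributes, and permuting hidden neurons merely relabels nodes, which is exactly the symmetry an equivariant GNN can respect. Function names are illustrative, not from the paper.

```python
import numpy as np
import networkx as nx

def mlp_to_graph(weights, biases):
    """Encode an MLP as a computational graph: one node per neuron
    (bias stored as a node attribute) and one directed edge per weight.
    Relabeling hidden neurons permutes nodes without changing the
    graph, so a GNN on this graph sees parameters equivariantly."""
    G = nx.DiGraph()
    n_in = weights[0].shape[1]
    prev = [("in", i) for i in range(n_in)]
    G.add_nodes_from(prev, bias=0.0)
    for layer, (W, b) in enumerate(zip(weights, biases)):
        cur = [(layer, j) for j in range(W.shape[0])]
        for j, node in enumerate(cur):
            G.add_node(node, bias=float(b[j]))
            for i, src in enumerate(prev):
                G.add_edge(src, node, weight=float(W[j, i]))
        prev = cur
    return G
```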
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Decoding Neuronal Networks: A Reservoir Computing Approach for Predicting Connectivity and Functionality [0.0]
Our model deciphers data obtained from electrophysiological measurements of neuronal cultures.
Notably, our model outperforms common methods like Cross-Correlation and Transfer-Entropy in predicting the network's connectivity map.
arXiv Detail & Related papers (2023-11-06T14:28:11Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
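For intuition, neural persistence rests on 0-dimensional persistent homology of a graph filtered by descending absolute edge weight; the union-find sketch below computes such a barcode for an arbitrary weighted graph, in the spirit of the whole-network filtration, though it is not the authors' implementation (births of 1.0 assume weights normalized to [0, 1]).

```python
def zeroth_persistence(n_nodes, edges):
    """0-dimensional persistence of a graph filtered by descending
    absolute edge weight.  `edges` is a list of (u, v, w); returns one
    (birth, death) pair per component merge, with birth = 1.0 assuming
    weights normalized to [0, 1].  Sketch only: deep graph persistence
    filters the whole network at once instead of layer by layer."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    pairs = []
    for u, v, w in sorted(edges, key=lambda e: -abs(e[2])):
        ru, rv = find(u), find(v)
        if ru != rv:                        # two components merge:
            parent[ru] = rv                 # one dies at filtration |w|
            pairs.append((1.0, abs(w)))
    return pairs
```

Summing (birth - death) over the pairs gives a persistence-style score; large weights that merge components late are exactly what inflates it, consistent with the variance and concentration effects noted above.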
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Guaranteed Quantization Error Computation for Neural Network Model Compression [2.610470075814367]
Neural network model compression techniques can address the computational burden of running deep neural networks on embedded devices in industrial systems.
A merged neural network is built from a feedforward neural network and its quantized version to produce the exact output difference between two neural networks.
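A minimal sketch of that merged-network construction, assuming two ReLU networks of identical architecture stored as (weights, biases) lists: the hidden layers run block-diagonally in parallel, and the two linear output layers are fused into one layer that outputs f1(x) - f2(x), so a reachability tool applied to the merged network bounds the exact output difference. The quantizer shown is a generic uniform scheme, not necessarily the one studied in the paper.

```python
import numpy as np

def quantize(W, n_bits=8):
    """Uniform symmetric weight quantization (illustrative)."""
    scale = np.abs(W).max() / (2 ** (n_bits - 1) - 1)
    return np.round(W / scale) * scale

def merge_networks(net1, net2):
    """Merge two ReLU networks of identical architecture into a single
    network computing f1(x) - f2(x): hidden layers run block-diagonally
    in parallel; the two linear output layers fuse into one."""
    (Ws1, bs1), (Ws2, bs2) = net1, net2
    Ws = [np.vstack([Ws1[0], Ws2[0]])]          # shared input feeds both
    bs = [np.concatenate([bs1[0], bs2[0]])]
    for W1, W2, b1, b2 in zip(Ws1[1:-1], Ws2[1:-1], bs1[1:-1], bs2[1:-1]):
        Ws.append(np.block([[W1, np.zeros_like(W2)],
                            [np.zeros_like(W1), W2]]))
        bs.append(np.concatenate([b1, b2]))
    Ws.append(np.hstack([Ws1[-1], -Ws2[-1]]))   # read-out: f1(x) - f2(x)
    bs.append(bs1[-1] - bs2[-1])
    return Ws, bs
```

Quantizing a network's weights and merging it with the original, e.g. `merge_networks(net, ([quantize(W) for W in net[0]], net[1]))`, yields a network whose output *is* the quantization error.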
arXiv Detail & Related papers (2023-04-26T20:21:54Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
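To make the temporal sparsity concrete, here is a textbook leaky integrate-and-fire neuron, not the paper's regression framework: the membrane potential integrates input, leaks toward rest, and information travels only through sparse binary spikes.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (Euler step of tau dv/dt = -v + I):
    emits a binary spike and resets whenever the membrane potential
    crosses the threshold."""
    v, spikes = 0.0, []
    for I in input_current:
        v += dt / tau * (-v + I)        # leaky integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                 # hard reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)
```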
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
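A sketch of the problem setup, with assumed parameter names: a noisy complex sinusoid is observed through a b-bit quantizer applied separately to the in-phase and quadrature components, and even a plain FFT-peak baseline recovers the frequency at three bits, the regime where the abstract reports the threshold is surpassed.

```python
import numpy as np

def quantized_iq(freq, n=64, bits=3, snr_db=20.0, rng=None):
    """Generate a noisy complex sinusoid and quantize its in-phase and
    quadrature components to `bits` bits each (amplitude ~ 1 assumed)."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(n)
    x = np.exp(2j * np.pi * freq * t)
    noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x = x + noise * 10 ** (-snr_db / 20) / np.sqrt(2)
    levels = 2 ** (bits - 1)

    def q(s):
        return np.clip(np.round(s * levels), -levels, levels - 1) / levels

    return q(x.real) + 1j * q(x.imag)

# FFT-peak baseline: coarse frequency estimate from quantized samples.
samples = quantized_iq(freq=0.19, bits=3)
f_hat = np.argmax(np.abs(np.fft.fft(samples, 1024))) / 1024
print(f"estimated frequency: {f_hat:.3f}")   # close to 0.19 at 3 bits
```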
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Nonlinear Weighted Directed Acyclic Graph and A Priori Estimates for Neural Networks [9.43712471169533]
We first present a novel graph-theoretical formulation of neural network models, including fully connected networks, residual networks (ResNet), and densely connected networks (DenseNet).
We extend the error analysis of the population risk for two-layer networks and ResNet to DenseNet, and show further that for neural networks satisfying certain mild conditions, similar estimates can be obtained.
arXiv Detail & Related papers (2021-03-30T13:54:33Z)
- ResiliNet: Failure-Resilient Inference in Distributed Neural Networks [56.255913459850674]
We introduce ResiliNet, a scheme for making inference in distributed neural networks resilient to physical node failures.
Failout simulates physical node failure conditions during training using dropout, and is specifically designed to improve the resiliency of distributed neural networks.
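A hedged sketch of failout as the abstract describes it: unlike unit-level dropout, an entire partition's output is zeroed at once during training, mimicking the failure of one physical node (any rescaling details are omitted here as unspecified).

```python
import numpy as np

def failout(partition_output, p_fail=0.1, training=True, rng=None):
    """Failout: during training, drop the *entire* output of a physical
    node's partition with probability p_fail, so downstream layers learn
    to tolerate node failures.  All activations fail together, unlike
    unit-level dropout."""
    rng = rng or np.random.default_rng()
    if training and rng.random() < p_fail:
        return np.zeros_like(partition_output)
    return partition_output
```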
arXiv Detail & Related papers (2020-02-18T05:58:24Z)
- Neural Rule Ensembles: Encoding Sparse Feature Interactions into Neural Networks [3.7277730514654555]
We use decision trees to capture relevant features and their interactions and define a mapping to encode extracted relationships into a neural network.
At the same time, through feature selection, it enables learning of compact representations compared to state-of-the-art tree-based approaches.
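One way to realize such a tree-to-network mapping, sketched under assumptions (the paper's exact encoding may differ): treat each leaf's decision path as a rule, encode samples by the rule they satisfy, and learn a read-out on those sparse rule features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Fit a shallow tree; each leaf's decision path is one rule over a few
# features, i.e. a sparse feature interaction.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def rule_features(tree, X):
    """Encode each sample by the rule (leaf) it satisfies: a one-hot
    vector over leaves.  A linear layer on these features is a tiny
    'neural rule ensemble'; the full method makes the mapping trainable."""
    leaves = tree.apply(X)                      # leaf index per sample
    leaf_ids = np.unique(leaves)
    return (leaves[:, None] == leaf_ids[None, :]).astype(float)

Z = rule_features(tree, X)                      # (n_samples, n_leaves)
w = np.linalg.lstsq(Z, y, rcond=None)[0]        # linear read-out on rules
print("train accuracy:", (((Z @ w) > 0.5) == y).mean())
```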
arXiv Detail & Related papers (2020-02-11T11:22:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.