Towards Global Neural Network Abstractions with Locally-Exact
Reconstruction
- URL: http://arxiv.org/abs/2210.12054v2
- Date: Fri, 31 Mar 2023 14:06:30 GMT
- Title: Towards Global Neural Network Abstractions with Locally-Exact
Reconstruction
- Authors: Edoardo Manino, Iury Bessa, Lucas Cordeiro
- Abstract summary: We propose Global Interval Neural Network Abstractions with Center-Exact Reconstruction (GINNACER).
Our novel abstraction technique produces sound over-approximation bounds over the whole input domain while guaranteeing exact reconstructions for any given local input.
Our experiments show that GINNACER is several orders of magnitude tighter than state-of-the-art global abstraction techniques, while being competitive with local ones.
- Score: 2.1915057426589746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are a powerful class of non-linear functions. However, their
black-box nature makes it difficult to explain their behaviour and certify
their safety. Abstraction techniques address this challenge by transforming the
neural network into a simpler, over-approximated function. Unfortunately,
existing abstraction techniques are slack, which limits their applicability to
small local regions of the input domain. In this paper, we propose Global
Interval Neural Network Abstractions with Center-Exact Reconstruction
(GINNACER). Our novel abstraction technique produces sound over-approximation
bounds over the whole input domain while guaranteeing exact reconstructions for
any given local input. Our experiments show that GINNACER is several orders of
magnitude tighter than state-of-the-art global abstraction techniques, while
being competitive with local ones.
Related papers
- Neural Experts: Mixture of Experts for Implicit Neural Representations [41.395193251292895]
Implicit neural representations (INRs) have proven effective in various tasks including image, shape, audio, and video reconstruction.
We propose a mixture of experts (MoE) implicit neural representation approach that enables learning local piecewise continuous functions.
We show that incorporating a mixture-of-experts architecture into existing INR formulations yields improvements in speed, accuracy, and memory requirements.
arXiv Detail & Related papers (2024-10-29T01:11:25Z) - From NeurODEs to AutoencODEs: a mean-field control framework for
width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled field that drives the dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z) - Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z) - SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - A Note on the Implicit Bias Towards Minimal Depth of Deep Neural
Networks [11.739219085726006]
A central aspect that enables the success of these systems is the ability to train deep models instead of wide shallow ones.
While training deep neural networks consistently achieves superior performance over their shallow counterparts, an understanding of the role of depth in representation learning is still lacking.
arXiv Detail & Related papers (2022-02-18T05:21:28Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Locality Guided Neural Networks for Explainable Artificial Intelligence [12.435539489388708]
We propose a novel algorithm for back propagation, called Locality Guided Neural Network (LGNN).
LGNN preserves locality between neighbouring neurons within each layer of a deep network.
In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
arXiv Detail & Related papers (2020-07-12T23:45:51Z) - DeepAbstract: Neural Network Abstraction for Accelerating Verification [0.0]
We introduce an abstraction framework applicable to fully-connected feed-forward neural networks based on clustering of neurons that behave similarly on some inputs.
We show how the abstraction reduces the size of the network, while preserving its accuracy, and how verification results on the abstract network can be transferred back to the original network.
arXiv Detail & Related papers (2020-06-24T13:51:03Z) - Dense Residual Network: Enhancing Global Dense Feature Flow for
Character Recognition [75.4027660840568]
This paper explores how to enhance the local and global dense feature flow by fully exploiting hierarchical features from all the convolutional layers.
Technically, we propose an efficient and effective CNN framework, i.e., Fast Dense Residual Network (FDRN) for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)