Self-Repairing Neural Networks: Provable Safety for Deep Networks via
Dynamic Repair
- URL: http://arxiv.org/abs/2107.11445v1
- Date: Fri, 23 Jul 2021 20:08:52 GMT
- Title: Self-Repairing Neural Networks: Provable Safety for Deep Networks via
Dynamic Repair
- Authors: Klas Leino, Aymeric Fromherz, Ravi Mangal, Matt Fredrikson, Bryan
Parno, Corina Păsăreanu
- Abstract summary: We propose a way to construct neural network classifiers that dynamically repair violations of non-relational safety constraints.
Our approach is based on a novel self-repairing layer, which provably yields safe outputs.
We show that our approach can be implemented using vectorized computations that execute efficiently on a GPU.
- Score: 16.208330991060976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are increasingly being deployed in contexts where safety is a
critical concern. In this work, we propose a way to construct neural network
classifiers that dynamically repair violations of non-relational safety
constraints called safe ordering properties. Safe ordering properties relate
requirements on the ordering of a network's output indices to conditions on
their input, and are sufficient to express most useful notions of
non-relational safety for classifiers. Our approach is based on a novel
self-repairing layer, which provably yields safe outputs regardless of the
characteristics of its input. We compose this layer with an existing network to
construct a self-repairing network (SR-Net), and show that in addition to
providing safe outputs, the SR-Net is guaranteed to preserve the accuracy of
the original network. Notably, our approach is independent of the size and
architecture of the network being repaired, depending only on the specified
property and the dimension of the network's output; thus it is scalable to
large state-of-the-art networks. We show that our approach can be implemented
using vectorized computations that execute efficiently on a GPU, introducing
run-time overhead of less than one millisecond on current hardware -- even on
large, widely-used networks containing hundreds of thousands of neurons and
millions of parameters.
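As a rough illustration of the idea (not the paper's exact construction), a single safe ordering property can be enforced by a vectorized post-processing layer that conditionally sorts the constrained output coordinates; the function name, index-list encoding, and condition mask below are illustrative assumptions:

```python
import torch

def self_repair_layer(logits, order, condition):
    """Hedged sketch of dynamic repair for one safe ordering property.

    Assumed encoding (illustrative, not the paper's formalism):
      order:     output indices whose logits must appear in descending
                 order whenever the property applies, e.g. [3, 0]
                 for "class 3 must be ranked above class 0"
      condition: bool tensor of shape (batch,), True where the
                 property's input precondition holds

    Sorting is idempotent, so outputs that already satisfy the ordering
    pass through unchanged; the paper's SR layer additionally proves
    accuracy preservation and handles richer combinations of constraints.
    """
    fixed = logits.clone()
    vals, _ = torch.sort(logits[:, order], dim=1, descending=True)
    fixed[:, order] = vals
    # Repair only the samples where the precondition fires.
    return torch.where(condition.unsqueeze(1), fixed, logits)

# Example: force logit 2 above logit 1 for the single batch element.
out = self_repair_layer(torch.tensor([[0.1, 2.0, 1.0]]), [2, 1],
                        torch.tensor([True]))
# out == tensor([[0.1, 1.0, 2.0]])
```

Because the repair touches only the output vector, its cost depends on the number of classes and properties, not on the backbone network, which is the source of the sub-millisecond overhead claim.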
Related papers
- Exploring Neural Network Pruning with Screening Methods [3.443622476405787]
Modern deep learning models have tens of millions of parameters, which makes inference resource-intensive.
This paper proposes and evaluates a network pruning framework that eliminates non-essential parameters.
The proposed framework produces lean networks that remain competitive with the original networks.
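For context, the simplest pruning criterion looks like the magnitude-based sketch below; the paper's screening methods would replace the magnitude score with a statistical screening criterion (function name and defaults are illustrative):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights.

    A common baseline, shown only to fix ideas; the proposed framework
    selects non-essential parameters by screening rather than magnitude.
    """
    w = weights.copy()
    k = int(sparsity * w.size)
    if k > 0:
        # The k-th smallest absolute value becomes the pruning threshold.
        thresh = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
        w[np.abs(w) <= thresh] = 0.0
    return w
```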
arXiv Detail & Related papers (2025-02-11T02:31:04Z)
- A Generalization of Continuous Relaxation in Structured Pruning [0.3277163122167434]
Trends indicate that deeper and larger neural networks with more parameters achieve higher accuracy than smaller networks.
We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal.
The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations.
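The "no sparse matrix operations" point is the hallmark of structured pruning: removed channels are physically dropped, leaving a smaller dense layer. A minimal PyTorch sketch of such a collapse step (the helper name and layer choice are assumptions):

```python
import torch.nn as nn

def collapse_conv(conv: nn.Conv2d, keep):
    """Rebuild a Conv2d keeping only the output channels listed in `keep`.

    The result is a smaller *dense* convolution, so inference runs on
    standard GPU kernels with no sparse matrix operations. Subsequent
    layers must have their input channels reduced to match.
    """
    new = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                    stride=conv.stride, padding=conv.padding,
                    bias=conv.bias is not None)
    new.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new.bias.data = conv.bias.data[keep].clone()
    return new
```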
arXiv Detail & Related papers (2023-08-28T14:19:13Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have achieved remarkable performance in single-image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
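A generic box (interval) over-approximation through one affine layer looks like the sketch below; the paper's embedded-network construction produces such $\ell_\infty$ boxes with tighter, contraction-based guarantees (the function name is an assumption):

```python
import numpy as np

def affine_box(W, b, lo, hi):
    """Propagate the box lo <= x <= hi through x -> W @ x + b.

    Splitting W into positive and negative parts gives the exact interval
    image of an affine map; composing such steps (with monotone
    activations applied elementwise) over-approximates reachable sets.
    """
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
```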
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a safety specification.
Our method searches for a new, safe NN representation by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the given safety specification.
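As a toy stand-in for that search (the paper's algorithm is more principled), one can randomly perturb a handful of weights until a user-supplied specification checker passes; every name and default below is illustrative:

```python
import numpy as np

def naive_weight_repair(weights, violates_spec, trials=1000, k=3,
                        scale=0.1, seed=0):
    """Perturb k weight entries at a time until `violates_spec`
    (a caller-provided predicate on the weight array) returns False.
    Returns the repaired weights, or None if the budget is exhausted.
    """
    rng = np.random.default_rng(seed)
    flat = weights.ravel()
    for _ in range(trials):
        cand = flat.copy()
        idx = rng.choice(cand.size, size=k, replace=False)
        cand[idx] += rng.normal(scale=scale, size=k)
        cand = cand.reshape(weights.shape)
        if not violates_spec(cand):
            return cand
    return None
```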
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
- Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding [8.173681464694651]
We formulate the global robustness certification for neural networks with ReLU activation functions as a mixed-integer linear programming (MILP) problem.
Our approach includes a novel interleaving twin-network encoding scheme, where two copies of the neural network are encoded side-by-side.
A case study of closed-loop control safety verification is conducted, demonstrating the importance and practicality of our approach.
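The building block of any such MILP formulation is the standard big-M encoding of a ReLU neuron, sketched below with PuLP; the paper's contribution layers the interleaved twin-network constraints on top of encodings like this (helper name is an assumption):

```python
import pulp

def encode_relu(prob, x, l, u, name):
    """Big-M MILP encoding of y = max(0, x), given pre-activation
    bounds l <= x <= u with l < 0 < u. The binary a selects the
    active (a = 1) or inactive (a = 0) phase of the neuron.
    """
    y = pulp.LpVariable(f"y_{name}", lowBound=0)
    a = pulp.LpVariable(f"a_{name}", cat="Binary")
    prob += y >= x                 # active-phase lower bound
    prob += y <= x - l * (1 - a)   # forces y = x when a = 1
    prob += y <= u * a             # forces y = 0 when a = 0
    return y

# Example: one neuron with pre-activation in [-1, 2].
prob = pulp.LpProblem("relu_demo", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=-1.0, upBound=2.0)
y = encode_relu(prob, x, -1.0, 2.0, "n0")
```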
arXiv Detail & Related papers (2022-03-26T19:23:37Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves near-optimal resilience gains on multiple graphs while balancing utility, outperforming existing approaches by a large margin.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Improving Neural Network Robustness through Neighborhood Preserving Layers [0.751016548830037]
We demonstrate a novel neural network architecture that can incorporate neighborhood preserving layers and can be trained efficiently.
We empirically show that our designed network architecture is more robust against state-of-the-art gradient-descent-based attacks.
arXiv Detail & Related papers (2021-01-28T01:26:35Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve $\ell_\infty$ verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address the accompanying computational cost by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module that takes image features and a resource constraint as input and predicts a map of local network depth.
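A minimal sketch of such an adapter in PyTorch, assuming the resource constraint enters as a scalar broadcast to an extra feature plane (channel counts and names are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class DepthAdapter(nn.Module):
    """Maps backbone features plus a scalar resource budget to a
    per-pixel depth map in [0, max_depth], in the spirit of AdaDSR's
    lightweight adapter (sizes here are placeholders)."""

    def __init__(self, feat_ch=64, max_depth=16):
        super().__init__()
        self.max_depth = max_depth
        self.body = nn.Sequential(
            nn.Conv2d(feat_ch + 1, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feats, budget):
        # Broadcast the scalar budget as one extra input plane.
        b, _, h, w = feats.shape
        plane = feats.new_full((b, 1, h, w), float(budget))
        return self.body(torch.cat([feats, plane], dim=1)) * self.max_depth
```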
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
- Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
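A bare-bones version of that routing logic (the nets and threshold are placeholders, and RANet's real design uses multiple scales with intermediate classifiers):

```python
import torch
import torch.nn.functional as F

def adaptive_infer(x, low_net, high_net, threshold=0.9):
    """Classify from a downsampled copy first; run the expensive
    high-resolution path only if the cheap prediction is unconfident."""
    small = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                          align_corners=False)
    probs = F.softmax(low_net(small), dim=1)
    conf, pred = probs.max(dim=1)
    if bool((conf >= threshold).all()):
        return pred                        # "easy" inputs: early exit
    return high_net(x).argmax(dim=1)       # "hard" inputs: full resolution
```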
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.