Architecture-Preserving Provable Repair of Deep Neural Networks
- URL: http://arxiv.org/abs/2304.03496v2
- Date: Wed, 16 Aug 2023 09:05:42 GMT
- Title: Architecture-Preserving Provable Repair of Deep Neural Networks
- Authors: Zhe Tao, Stephanie Nawas, Jacqueline Mitchell, Aditya V. Thakur
- Abstract summary: Deep neural networks (DNNs) are becoming increasingly important components of software, and are considered the state-of-the-art solution for a number of problems.
This paper addresses the problem of architecture-preserving V-polytope provable repair of DNNs.
- Score: 2.4687962186994663
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep neural networks (DNNs) are becoming increasingly important components of
software, and are considered the state-of-the-art solution for a number of
problems, such as image recognition. However, DNNs are far from infallible, and
incorrect behavior of DNNs can have disastrous real-world consequences. This
paper addresses the problem of architecture-preserving V-polytope provable
repair of DNNs. A V-polytope defines a convex bounded polytope using its vertex
representation. V-polytope provable repair guarantees that the repaired DNN
satisfies the given specification on the infinite set of points in the given
V-polytope. An architecture-preserving repair only modifies the parameters of
the DNN, without modifying its architecture. The repair has the flexibility to
modify multiple layers of the DNN, and runs in polynomial time. It supports
DNNs with activation functions that have some linear pieces, as well as
fully-connected, convolutional, pooling, and residual layers. To the best of our
knowledge, this is the first provable repair approach that has all of these
features. We implement our approach in a tool called APRNN. Using MNIST,
ImageNet, and ACAS Xu DNNs, we show that it has better efficiency, scalability,
and generalization compared to PRDNN and REASSURE, prior provable repair
methods that are not architecture preserving.
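The core reduction behind V-polytope repair is easiest to see in a deliberately simplified setting. The sketch below illustrates the vertex-based idea and is not the APRNN algorithm itself: it freezes every layer except the final affine one and assumes the activation pattern is fixed over the polytope, so the output is affine in the free parameters, and a linear output specification enforced at the polytope's vertices by one linear program then holds on every convex combination of them. The function name, the L-infinity objective, and the scipy-based encoding are all assumptions made for illustration.

```python
# Minimal sketch (NOT the APRNN algorithm): repair only the final affine
# layer so that a linear output spec  c @ y <= d  holds at every vertex of
# a V-polytope. With all earlier layers frozen and their activation pattern
# fixed over the polytope, the output is affine in the free parameters, so
# enforcing the spec at the vertices extends it to all convex combinations.
import numpy as np
from scipy.optimize import linprog

def repair_final_layer(hidden, W, b, c, d):
    """hidden: (k, h) penultimate-layer activations at the k vertices.
    W, b: final-layer (out x h) weight and (out,) bias.  Returns (W', b')
    satisfying the spec at every vertex with minimal L-inf parameter change."""
    out, h = W.shape
    n = out * (h + 1)                     # flattened deltas for W and b
    A_ub, b_ub = [], []
    for z in hidden:                      # one spec constraint per vertex
        row = np.zeros(n + 1)             # variables: [delta, eps]
        for i in range(out):
            row[i*h:(i+1)*h] = c[i] * z   # contribution of dW[i, :]
            row[out*h + i] = c[i]         # contribution of db[i]
        A_ub.append(row)
        b_ub.append(d - c @ (W @ z + b))  # c @ y' <= d rewritten in deltas
    for i in range(n):                    # |delta_i| <= eps
        for s in (1.0, -1.0):
            row = np.zeros(n + 1); row[i] = s; row[-1] = -1.0
            A_ub.append(row); b_ub.append(0.0)
    cost = np.zeros(n + 1); cost[-1] = 1.0          # minimize eps
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n + [(0, None)])
    assert res.success, "repair LP infeasible under this simplification"
    delta = res.x[:n]
    return W + delta[:out*h].reshape(out, h), b + delta[out*h:]
```

Because everything reduces to a single linear program over the vertex set, the sketch runs in polynomial time, mirroring the complexity claim in the abstract; APRNN itself achieves this while also being able to modify multiple layers.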
Related papers
- Repairing Deep Neural Networks Based on Behavior Imitation [5.1791561132409525]
We propose a behavior-imitation-based repair framework for deep neural networks (DNNs).
BIRDNN corrects incorrect predictions of negative samples by imitating the closest expected behaviors of positive samples during the retraining repair procedure.
For the fine-tuning repair process, BIRDNN analyzes the behavior differences of neurons on positive and negative samples to identify the most responsible neurons for the erroneous behaviors.
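A hedged sketch of the imitation step described above (not the BIRDNN implementation; the helper names and the mean-squared-error imitation loss are assumptions for illustration):

```python
# Each misclassified "negative" input borrows the output behavior of its
# nearest correctly-classified "positive" neighbor as a retraining target.
import torch
import torch.nn.functional as F

def imitation_targets(neg_x, pos_x, pos_logits):
    """Return, for every negative sample, the logits of its nearest positive."""
    d = torch.cdist(neg_x.flatten(1), pos_x.flatten(1))  # pairwise distances
    return pos_logits[d.argmin(dim=1)]

def repair_step(model, opt, neg_x, pos_x):
    with torch.no_grad():
        pos_logits = model(pos_x)          # expected behaviors to imitate
    targets = imitation_targets(neg_x, pos_x, pos_logits)
    loss = F.mse_loss(model(neg_x), targets)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```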
arXiv Detail & Related papers (2023-05-05T08:33:28Z)
- Incremental Satisfiability Modulo Theory for Verification of Deep Neural Networks [22.015676101940077]
We present an incremental satisfiability modulo theory (SMT) algorithm based on the Reluplex framework.
We implement our algorithm as an incremental solver called DeepInc, and experimental results show that DeepInc is more efficient in most cases.
arXiv Detail & Related papers (2023-02-10T04:31:28Z)
- Measurement-Consistent Networks via a Deep Implicit Layer for Solving Inverse Problems [0.0]
End-to-end deep neural networks (DNNs) have become state-of-the-art (SOTA) for solving inverse problems.
These networks are sensitive to minor variations in the training pipeline and often fail to reconstruct small but important details.
We propose a framework that transforms any DNN for inverse problems into a measurement-consistent one.
arXiv Detail & Related papers (2022-11-06T17:05:04Z)
- Towards a General Purpose CNN for Long Range Dependencies in $\mathrm{N}$D [49.57261544331683]
We propose a single CNN architecture equipped with continuous convolutional kernels for tasks on arbitrary resolution, dimensionality and length without structural changes.
We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential ($1\mathrm{D}$) and visual data ($2\mathrm{D}$).
Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
arXiv Detail & Related papers (2022-06-07T15:48:02Z)
- ArchRepair: Block-Level Architecture-Oriented Repairing for Deep Neural Networks [13.661704974188872]
We propose a novel repairing direction for deep neural networks (DNNs) at the block level.
We propose adversarial-aware spectrum analysis for vulnerable block localization.
We also propose the architecture-oriented search-based repairing that relaxes the targeted block to a continuous repairing search space.
arXiv Detail & Related papers (2021-11-26T06:35:15Z)
- Provable Repair of Deep Neural Networks [8.55884254206878]
Deep Neural Networks (DNNs) have grown in popularity over the past decade and are now being used in safety-critical domains such as aircraft collision avoidance.
This paper tackles the problem of correcting a DNN once unsafe behavior is found.
We introduce the provable repair problem, which is the problem of repairing a network N to construct a new network N' that satisfies a given specification.
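Paraphrasing that one-sentence description as a formula (an assumption about the exact formalization, which the paper states more precisely), with a specification given as an input set $\mathcal{X}$ and an output predicate $\Phi$:

```latex
% Provable repair, paraphrased (the paper's exact formalization may differ):
% given a network N and a specification with input set X and output
% predicate Phi, construct a repaired network N' such that
\[
  \forall x \in \mathcal{X}.\;\; \Phi\bigl(x,\, N'(x)\bigr),
\]
% ideally while keeping N' close to N (e.g., a small parameter change).
```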
arXiv Detail & Related papers (2021-04-09T15:03:53Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence [65.24701908364383]
A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data.
But far away from the training data, ReLU Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident.
We show that it can be applied post-hoc to any pre-trained ReLU BNN at a low cost.
arXiv Detail & Related papers (2020-10-06T13:32:18Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- BLK-REW: A Unified Block-based DNN Pruning Framework using Reweighted Regularization Method [69.49386965992464]
We propose a new block-based pruning framework that comprises a general and flexible structured pruning dimension as well as a powerful and efficient reweighted regularization method.
Our framework is universal: it can be applied to both CNNs and RNNs, implying complete support for the two major kinds of intensive computation layers.
This is the first weight pruning framework to achieve universal coverage for both CNNs and RNNs with real-time mobile acceleration and no accuracy compromise.
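A hedged sketch of reweighted regularization for block-based pruning, the general technique named in this summary (not the BLK-REW code; the block partitioning and the squared-norm penalty are illustrative choices):

```python
# Each weight block gets a penalty coefficient inversely proportional to its
# current squared norm, so already-small blocks are driven toward zero and
# can then be pruned away.
import torch

def reweighted_penalty(weight, block, eps=1e-4):
    """weight: 2-D tensor; block: (rows, cols) shape that tiles weight."""
    r, c = block
    R, C = weight.shape
    blocks = weight.reshape(R // r, r, C // c, c)
    norms2 = blocks.pow(2).sum(dim=(1, 3))   # squared norm of each block
    coeff = 1.0 / (norms2.detach() + eps)    # reweighting, no gradient
    return (coeff * norms2).sum()

# In a training loop one might add, e.g.:
#   loss = task_loss + lam * reweighted_penalty(layer.weight, (4, 4))
```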
arXiv Detail & Related papers (2020-01-23T03:30:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.