NNrepair: Constraint-based Repair of Neural Network Classifiers
- URL: http://arxiv.org/abs/2103.12535v1
- Date: Tue, 23 Mar 2021 13:44:01 GMT
- Title: NNrepair: Constraint-based Repair of Neural Network Classifiers
- Authors: Muhammad Usman, Divya Gopinath, Youcheng Sun, Yannic Noller and Corina Pasareanu
- Abstract summary: NNrepair is a constraint-based technique for repairing neural network classifiers.
NNrepair first uses fault localization to find potentially faulty network parameters.
It then performs repair using constraint solving to apply small modifications to the parameters to remedy the defects.
- Score: 10.129874872336762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present NNrepair, a constraint-based technique for repairing neural
network classifiers. The technique aims to fix the logic of the network at an
intermediate layer or at the last layer. NNrepair first uses fault localization
to find potentially faulty network parameters (such as the weights) and then
performs repair using constraint solving to apply small modifications to the
parameters to remedy the defects. We present novel strategies to enable precise
yet efficient repair, such as inferring correctness specifications to act as
oracles for intermediate-layer repair and generating experts for each
class. We demonstrate the technique in the context of three different
scenarios: (1) Improving the overall accuracy of a model, (2) Fixing security
vulnerabilities caused by poisoning of training data and (3) Improving the
robustness of the network against adversarial attacks. Our evaluation on MNIST
and CIFAR-10 models shows that NNrepair can improve the accuracy by 45.56
percentage points on poisoned data and 10.40 percentage points on adversarial
data. NNrepair also provides a small improvement in the overall accuracy of
models, without requiring new data or re-training.
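To make the repair step concrete, below is a minimal sketch of constraint-based last-layer repair in the spirit of the abstract: weight deltas become solver variables, each repair example is constrained to receive its correct label, and the total modification is minimized. This is not the authors' implementation; the choice of Python with the z3 `Optimize` API, the dense-layer encoding, and the `max_delta` bound are illustrative assumptions, and NNrepair itself restricts changes to parameters flagged by fault localization and uses inferred specifications as oracles for intermediate-layer repair.

```python
# Minimal sketch of constraint-based last-layer repair in the spirit of NNrepair.
# Illustrative assumptions: plain Python lists for weights, z3's Optimize API,
# and the max_delta bound; NNrepair itself restricts changes to parameters
# flagged by fault localization.
from z3 import If, Optimize, Real, RealVal, Sum, sat

def repair_last_layer(W, b, activations, labels, max_delta=1.0):
    """W: n_classes x n_hidden weights, b: n_classes biases,
    activations: hidden-layer outputs of the repair examples,
    labels: their correct classes. Returns repaired weights or None."""
    n_classes, n_hidden = len(W), len(W[0])
    opt = Optimize()

    # One delta variable per (potentially faulty) weight, bounded to keep the edit small.
    delta = [[Real(f"d_{i}_{j}") for j in range(n_hidden)] for i in range(n_classes)]
    for row in delta:
        for d in row:
            opt.add(d >= -max_delta, d <= max_delta)

    # Correctness constraints: every repair example must get its true label,
    # i.e. the true class logit must dominate all others.
    for act, y in zip(activations, labels):
        logits = [Sum([(RealVal(W[i][j]) + delta[i][j]) * RealVal(act[j])
                       for j in range(n_hidden)]) + RealVal(b[i])
                  for i in range(n_classes)]
        for i in range(n_classes):
            if i != y:
                opt.add(logits[y] > logits[i])

    # Prefer the smallest total modification (sum of |delta|).
    opt.minimize(Sum([If(d >= 0, d, -d) for row in delta for d in row]))

    if opt.check() != sat:
        return None  # no repair within the allowed bound satisfies all constraints
    m = opt.model()
    return [[W[i][j] + float(m.eval(delta[i][j], model_completion=True).as_fraction())
             for j in range(n_hidden)]
            for i in range(n_classes)]
```

Minimizing the summed |delta| mirrors the goal stated in the abstract of applying only small modifications to the localized parameters.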
Related papers
- Cost-Effective Fault Tolerance for CNNs Using Parameter Vulnerability Based Hardening and Pruning [0.4660328753262075]
This paper introduces a model-level hardening approach for CNNs by integrating error correction directly into the neural networks.
The proposed method demonstrates fault resilience nearly equivalent to TMR-based correction but with significantly reduced overhead.
Remarkably, the hardened pruned CNNs perform up to 24% faster than the hardened un-pruned ones.
arXiv Detail & Related papers (2024-05-17T09:42:44Z)
- Patch Synthesis for Property Repair of Deep Neural Networks [15.580097790702508]
We introduce PatchPro, a novel patch-based approach for property-level repair of deep neural networks (DNNs).
PatchPro provides specialized repairs for all samples within the robustness neighborhood while maintaining the network's original performance.
Our method incorporates formal verification and a mechanism for allocating patch modules, enabling it to defend against adversarial attacks.
arXiv Detail & Related papers (2024-04-02T05:16:59Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Enriching Neural Network Training Dataset to Improve Worst-Case Performance Guarantees [0.0]
We show that adapting the NN training dataset during training can improve the NN performance and substantially reduce its worst-case violations.
This paper proposes an algorithm that identifies and enriches the training dataset with critical datapoints that reduce the worst-case violations and deliver a neural network with improved worst-case performance guarantees.
arXiv Detail & Related papers (2023-03-23T12:59:37Z)
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a safety specification.
Our method is able to search for a new, safe NN representation, by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the safety specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Causality-based Neural Network Repair [9.356001065771064]
We propose CARE (CAusality-based REpair), a causality-based neural network repair technique.
CARE is able to repair all neural networks efficiently and effectively.
arXiv Detail & Related papers (2022-04-20T07:33:52Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Multiplicative Reweighting for Robust Neural Network Optimization [51.67267839555836]
Multiplicative weight (MW) updates are robust to moderate data corruptions in expert advice.
We show that MW improves the accuracy of neural networks in the presence of label noise; a generic reweighting sketch appears after this list.
arXiv Detail & Related papers (2021-02-24T10:40:25Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
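As referenced in the Multiplicative Reweighting entry above, here is a generic sketch of multiplicative-weights (MW) sample reweighting under label noise, shown for a tiny logistic-regression model. It illustrates the MW idea only: the specific update `w *= exp(-eta * loss)`, the hyperparameters, and the logistic-regression model are illustrative assumptions, not the cited paper's exact algorithm.

```python
# Generic sketch of multiplicative-weights (MW) sample reweighting under label
# noise, using a tiny logistic-regression model. The update rule and
# hyperparameters are illustrative assumptions, not the cited paper's algorithm.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_with_mw(X, y, epochs=50, lr=0.5, eta=0.1):
    """X: (n, d) feature array, y: (n,) labels in {0, 1}, possibly noisy."""
    n, d = X.shape
    theta = np.zeros(d)
    w = np.full(n, 1.0 / n)  # per-example MW weights, kept as a distribution
    for _ in range(epochs):
        p = sigmoid(X @ theta)
        # Weighted gradient step: down-weighted examples contribute less.
        theta -= lr * (X.T @ (w * (p - y)))
        # MW update: multiplicatively shrink the weight of high-loss examples,
        # which under label noise tend to be the mislabeled ones.
        loss = -(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))
        w *= np.exp(-eta * loss)
        w /= w.sum()
    return theta, w
```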
This list is automatically generated from the titles and abstracts of the papers in this site.