Causality-based Neural Network Repair
- URL: http://arxiv.org/abs/2204.09274v1
- Date: Wed, 20 Apr 2022 07:33:52 GMT
- Title: Causality-based Neural Network Repair
- Authors: Bing Sun, Jun Sun, Hong Long Pham, Jie Shi
- Abstract summary: We propose CARE (CAusality-based REpair), a causality-based neural network repair technique.
In our evaluation, CARE repairs all the tested neural networks efficiently and effectively.
- Score: 9.356001065771064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks have had discernible achievements in a wide range of
applications. The widespread adoption also raises concerns about their
dependability and reliability. Similar to traditional decision-making programs,
neural networks can have defects that need to be repaired. The defects may
cause unsafe behaviors, raise security concerns, or lead to unjust societal impacts. In
this work, we address the problem of repairing a neural network for desirable
properties such as fairness and the absence of backdoors. The goal is to
construct a neural network that satisfies the property by (minimally) adjusting
the given neural network's parameters (i.e., weights). Specifically, we propose
CARE (\textbf{CA}usality-based \textbf{RE}pair), a causality-based neural
network repair technique that 1) performs causality-based fault localization to
identify the `guilty' neurons and 2) optimizes the parameters of the identified
neurons to reduce the misbehavior. We have empirically evaluated CARE on
various tasks such as backdoor removal, neural network repair for fairness and
safety properties. Our experimental results show that CARE is able to repair all
the evaluated neural networks efficiently and effectively. For fairness repair tasks, CARE
successfully improves fairness by $61.91\%$ on average. For backdoor removal
tasks, CARE reduces the attack success rate from over $98\%$ to less than
$1\%$. For safety property repair tasks, CARE reduces the property violation
rate to less than $1\%$. Results also show that thanks to the causality-based
fault localization, CARE's repair focuses on the misbehavior and preserves the
accuracy of the neural networks.
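The two-step recipe described above (intervention-based fault localization, then local optimization of the identified neurons' parameters) can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the paper's implementation: the tiny network, the "misbehavior" metric (fraction of inputs mapped to an undesirable prediction), and the search over weight scalings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer network; "misbehavior" here is simply predicting
# the (undesirable) positive class.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))
X = rng.normal(size=(200, 4))

def forward(X, W1, W2, intervene=None):
    h = np.maximum(X @ W1, 0.0)          # ReLU hidden layer
    if intervene is not None:            # do(h_j = v): clamp one hidden neuron
        j, v = intervene
        h = h.copy()
        h[:, j] = v
    return 1.0 / (1.0 + np.exp(-(h @ W2).ravel()))

def misbehavior(W1, W2, intervene=None):
    # Fraction of inputs mapped to the undesirable (positive) prediction.
    return float(np.mean(forward(X, W1, W2, intervene) > 0.5))

# 1) Causality-based fault localization: estimate each hidden neuron's
#    causal effect on the misbehavior by intervening on its activation.
base = misbehavior(W1, W2)
effects = []
for j in range(W1.shape[1]):
    vals = np.linspace(0.0, 3.0, 5)      # intervention values do(h_j = v)
    effect = np.mean([abs(misbehavior(W1, W2, (j, v)) - base) for v in vals])
    effects.append(effect)

guilty = np.argsort(effects)[-2:]        # top-2 'guilty' neurons

# 2) Repair: search over scalings of only the guilty neurons' outgoing
#    weights; scale 1.0 (no change) is a candidate, so repair never hurts.
best_W2, best_score = W2, base
for scale in [1.0, 0.5, 0.0, -0.5, 2.0]:
    W2_try = W2.copy()
    W2_try[guilty, :] *= scale
    score = misbehavior(W1, W2_try)
    if score < best_score:
        best_W2, best_score = W2_try, score

print(f"misbehavior: {base:.2f} -> {best_score:.2f}")
```

Restricting the search to the localized neurons is what lets this style of repair target the misbehavior while leaving the rest of the network, and hence its accuracy, largely untouched.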
Related papers
- BDefects4NN: A Backdoor Defect Database for Controlled Localization Studies in Neural Networks [65.666913051617]
We introduce BDefects4NN, the first backdoor defect database for localization studies.
BDefects4NN provides labeled backdoor-defected DNNs at the neuron granularity and enables controlled localization studies of defect root causes.
We conduct experiments on evaluating six fault localization criteria and two defect repair techniques, which show limited effectiveness for backdoor defects.
arXiv Detail & Related papers (2024-12-01T09:52:48Z)
- Learning to Solve Combinatorial Optimization under Positive Linear Constraints via Non-Autoregressive Neural Networks [103.78912399195005]
Combinatorial optimization (CO) is a fundamental problem at the intersection of computer science and applied mathematics.
In this paper, we design a family of non-autoregressive neural networks to solve CO problems under positive linear constraints.
We validate the effectiveness of this framework in solving representative CO problems including facility location, max-set covering, and traveling salesman problem.
arXiv Detail & Related papers (2024-09-06T14:58:31Z)
- CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that the inference accuracy of neural networks, which can drop to as low as 1.69% under variations and noise, can be recovered.
arXiv Detail & Related papers (2022-11-27T19:13:33Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Verifying Neural Networks Against Backdoor Attacks [7.5033553032683855]
We propose an approach to verify whether a given neural network is free of backdoors with respect to a certain level of attack success rate.
Experiment results show that our approach effectively verifies the absence of backdoor or generates backdoor triggers.
arXiv Detail & Related papers (2022-05-14T07:25:54Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- NNrepair: Constraint-based Repair of Neural Network Classifiers [10.129874872336762]
NNrepair is a constraint-based technique for repairing neural network classifiers.
NNrepair first uses fault localization to find potentially faulty network parameters.
It then performs repair using constraint solving to apply small modifications to the parameters to remedy the defects.
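The idea of "small modifications satisfying constraints" can be illustrated with a much simpler stand-in than NNrepair's actual constraint solver: for a single linear correctness constraint, the minimal (L2) change to a neuron's weights is the projection onto the halfspace of satisfying parameters. The weight vector, counterexample, and constraint below are hypothetical, purely for illustration.

```python
import numpy as np

# Hypothetical setup: a linear neuron score(x) = w . x should satisfy
# w . x_bad >= 0 on a known counterexample x_bad, but currently violates it.
w = np.array([1.0, -2.0, 0.5])
x_bad = np.array([0.5, 1.0, 0.0])

# Minimal L2 repair = projection of w onto the halfspace {w' : w' . x_bad >= 0}.
violation = max(0.0, -(w @ x_bad))                    # constraint shortfall
w_repaired = w + violation * x_bad / (x_bad @ x_bad)  # move just far enough

print(w_repaired)          # repaired weights now satisfy the constraint
```

An SMT-based approach like NNrepair's generalizes this to many constraints and to conditions a closed-form projection cannot express, at the cost of invoking a solver.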
arXiv Detail & Related papers (2021-03-23T13:44:01Z)
- Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experiment results show that our approach effectively generates neural networks which are guaranteed to satisfy the properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z)
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed substantially less data.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.