NeuRecover: Regression-Controlled Repair of Deep Neural Networks with
Training History
- URL: http://arxiv.org/abs/2203.00191v1
- Date: Tue, 1 Mar 2022 02:47:12 GMT
- Title: NeuRecover: Regression-Controlled Repair of Deep Neural Networks with
Training History
- Authors: Shogo Tokui, Susumu Tokumoto, Akihito Yoshii, Fuyuki Ishikawa, Takao
Nakagawa, Kazuki Munakata, Shinji Kikuchi
- Abstract summary: Retraining to fix some behavior often has a destructive impact on other behavior.
This problem is crucial when engineers are required to investigate failures in assurance activities for safety or trust.
We propose a novel repair method that uses the training history to judge which DNN parameters should be changed, and which left untouched, to suppress regressions.
- Score: 2.7904991230380403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Systematic techniques to improve the quality of deep neural networks
(DNNs) are critical given the increasing demand for practical applications,
including safety-critical ones. The key challenge is the lack of controllability
when updating DNNs: retraining to fix one behavior often has a destructive
impact on other behavior, causing regressions, i.e., the updated DNN fails on
inputs that the original one handled correctly. This problem is crucial when
engineers must investigate failures in intensive assurance activities for
safety or trust. Search-based repair techniques for DNNs have the potential to
tackle this challenge by enabling localized updates that touch only the
"responsible parameters" inside the DNN. However, this potential has not been
exploited to achieve sufficient controllability over regressions in DNN repair
tasks. In this paper, we propose a novel DNN repair method that uses the
training history to judge which DNN parameters can be changed, and which should
be left untouched, so that regressions are suppressed. We implemented the
method in a tool called NeuRecover and evaluated it with three datasets. Our
method outperformed the existing method, often reducing the number of
regressions to less than a quarter, and in some cases to a tenth. It is
especially effective when the repair requirements are tight, i.e., when
specific failure types must be fixed; in such cases it showed consistently low
regression rates (under 2%), in many cases a tenth of those caused by
retraining.
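The abstract does not spell out the algorithm, but its two ingredients lend themselves to a compact illustration: using weight snapshots from the training history to decide which parameters are fair game, and a localized search that perturbs only those. The sketch below is a hypothetical reading, not NeuRecover's actual method; the "unsettledness" heuristic, function names, and hill-climbing search are all our assumptions.

```python
# Hypothetical sketch, NOT NeuRecover's actual algorithm: pick parameters
# that were still moving late in training (assumed less "settled"), then
# search only over those, scoring candidates with a fitness that rewards
# fixed failures and penalizes regressions.
import numpy as np

def select_repair_params(snapshots, top_fraction=0.05):
    """snapshots: list of flattened weight vectors saved during training."""
    late = np.abs(snapshots[-1] - snapshots[-2])          # last-epoch movement
    total = np.abs(snapshots[-1] - snapshots[0]) + 1e-12  # overall movement
    score = late / total                                  # "unsettledness"
    k = max(1, int(top_fraction * score.size))
    return np.argsort(score)[-k:]                         # indices allowed to change

def search_repair(weights, candidate_idx, fitness, steps=200, sigma=0.01, seed=0):
    """Hill-climb only on candidate parameters; `fitness` should reward
    repaired failures while penalizing regressions on passing inputs."""
    rng = np.random.default_rng(seed)
    best, best_fit = weights.copy(), fitness(weights)
    for _ in range(steps):
        trial = best.copy()
        trial[candidate_idx] += rng.normal(0.0, sigma, size=len(candidate_idx))
        if (f := fitness(trial)) > best_fit:
            best, best_fit = trial, f
    return best
```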
Related papers
- GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning [8.435319580412472]
Graph Neural Networks (GNNs) are vulnerable to adversarial attacks.
In this paper, we introduce the novel concept of model repair for GNNs.
We propose a repair framework, Repairing Robustness of Graph Neural Networks via Machine Unlearning (GraphMU).
arXiv Detail & Related papers (2024-06-19T12:41:15Z)
- Mitigating Backdoors within Deep Neural Networks in Data-limited Configuration [1.1663475941322277]
A backdoored deep neural network shows normal behavior on clean data while behaving maliciously once a trigger is injected into a sample at the test time.
In this paper, we formulate some characteristics of poisoned neurons into a backdoor suspiciousness score, which ranks network neurons according to their activation values, weights, and their relationships with other neurons in the same layer.
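As a rough illustration of how such a ranking could work (the paper's actual score is not given in this summary; the combination below is our assumption):

```python
# Hypothetical neuron "suspiciousness" score: combine each neuron's weight
# magnitude with how far its mean activation deviates from layer-mates.
import numpy as np

def suspiciousness(acts, weights):
    """acts: (n_samples, n_neurons) clean-data activations;
    weights: (n_neurons, fan_in) incoming weights of the layer."""
    mean_act = acts.mean(axis=0)
    z = (mean_act - mean_act.mean()) / (mean_act.std() + 1e-12)  # layer deviation
    return np.abs(z) * np.linalg.norm(weights, axis=1)

ranking = np.argsort(suspiciousness(np.random.rand(128, 64),
                                    np.random.randn(64, 32)))[::-1]
```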
arXiv Detail & Related papers (2023-11-13T15:54:27Z)
- Reconstructive Neuron Pruning for Backdoor Defense [96.21882565556072]
We propose a novel defense called Reconstructive Neuron Pruning (RNP) to expose and prune backdoor neurons.
In RNP, unlearning is operated at the neuron level while recovering is operated at the filter level, forming an asymmetric reconstructive learning procedure.
We show that such an asymmetric process on only a few clean samples can effectively expose and prune the backdoor neurons implanted by a wide range of attacks.
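A rough PyTorch sketch of that asymmetry, where the unlearn/recover losses, the hook-based masking, and all hyperparameters are our assumptions rather than RNP's exact procedure:

```python
# Rough sketch of the asymmetric unlearn/recover idea (our reading, not
# RNP's exact procedure): gradient-ASCEND the clean loss at the layer
# level, then learn a per-filter mask that restores clean accuracy;
# filters the mask refuses to restore are pruning candidates.
import torch

def reconstructive_prune(model, conv_layer, clean_x, clean_y,
                         steps=50, threshold=0.1):
    loss_fn = torch.nn.CrossEntropyLoss()
    # 1) "unlearning": maximize the clean loss w.r.t. this layer's parameters
    opt = torch.optim.SGD(conv_layer.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        (-loss_fn(model(clean_x), clean_y)).backward()
        opt.step()
    # 2) "recovering": learn a filter-level mask on the unlearned network
    mask = torch.ones(conv_layer.out_channels, requires_grad=True)
    handle = conv_layer.register_forward_hook(
        lambda mod, inp, out: out * mask.view(1, -1, 1, 1))
    opt = torch.optim.SGD([mask], lr=1e-1)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(clean_x), clean_y).backward()
        opt.step()
    handle.remove()
    # filters that recovery keeps suppressed are treated as backdoor filters
    return (mask.detach().abs() < threshold).nonzero().flatten().tolist()
```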
arXiv Detail & Related papers (2023-05-24T08:29:30Z)
- Repairing Deep Neural Networks Based on Behavior Imitation [5.1791561132409525]
We propose BIRDNN, a behavior-imitation-based repair framework for deep neural networks (DNNs).
BIRDNN corrects incorrect predictions of negative samples by imitating the closest expected behaviors of positive samples during the retraining repair procedure.
For the fine-tuning repair process, BIRDNN analyzes the behavior differences of neurons on positive and negative samples to identify the most responsible neurons for the erroneous behaviors.
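The neuron-localization step can be pictured as ranking neurons by their activation gap between positive and negative samples; a minimal sketch under that assumption:

```python
# Minimal sketch (our assumption of the localization step): score each
# neuron by the gap between its mean activation on positive (correctly
# handled) and negative (failing) samples.
import numpy as np

def responsible_neurons(acts_pos, acts_neg, top_k=10):
    """acts_pos, acts_neg: (n_samples, n_neurons) activation matrices."""
    gap = np.abs(acts_pos.mean(axis=0) - acts_neg.mean(axis=0))
    return np.argsort(gap)[-top_k:][::-1]  # most responsible first
```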
arXiv Detail & Related papers (2023-05-05T08:33:28Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
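In symbols (our notation, not necessarily the paper's), the problem asks for the exact size of the violating set:

```latex
% count of inputs x for which the DNN f violates safety property P
\#\mathrm{Viol}(f, P) \;=\; \bigl|\{\, x \in \mathcal{X} \;:\; \lnot P\bigl(x, f(x)\bigr) \,\}\bigr|
```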
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
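One way to picture "grafting linearity" is a per-neuron switch between the usual ReLU and a learned linear map; the module below is an illustrative guess at that mechanism, not the paper's implementation:

```python
# Illustrative guess at "grafting": a per-neuron switch between the usual
# ReLU and a learned linear map a*x + b. Which neurons get grafted (the
# mask) would be chosen by some certification-aware strategy.
import torch

class GraftedReLU(torch.nn.Module):
    def __init__(self, n_neurons):
        super().__init__()
        self.register_buffer("grafted", torch.zeros(n_neurons))  # 1 = linear
        self.a = torch.nn.Parameter(torch.ones(n_neurons))
        self.b = torch.nn.Parameter(torch.zeros(n_neurons))

    def forward(self, x):
        return torch.where(self.grafted.bool(), self.a * x + self.b,
                           torch.relu(x))
```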
arXiv Detail & Related papers (2022-06-15T22:42:29Z)
- Function Regression using Spiking DeepONet [2.935661780430872]
We present a spiking neural network (SNN) based method to perform regression, which has been a challenge due to the inherent difficulty in representing a function's input domain and continuous output values as spikes.
We use a DeepONet, a neural network designed to learn operators, to learn the behavior of spikes.
We propose several methods to use a DeepONet in the spiking framework, and present accuracy and training time for different benchmarks.
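For orientation, a minimal non-spiking DeepONet looks as follows; the paper's variant replaces these subnetworks with spiking ones, and the layer sizes and activations here are arbitrary:

```python
# Minimal non-spiking DeepONet for orientation (the paper's version uses
# spiking networks; layer sizes and activations here are arbitrary).
import torch

class DeepONet(torch.nn.Module):
    def __init__(self, m_sensors, p=32):
        super().__init__()
        # branch net encodes the input function sampled at m sensor points
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(m_sensors, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, p))
        # trunk net encodes the query location y
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(1, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, p))

    def forward(self, u_samples, y):
        # output G(u)(y) is the dot product of the two embeddings
        return (self.branch(u_samples) * self.trunk(y)).sum(-1, keepdim=True)
```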
arXiv Detail & Related papers (2022-05-17T15:22:22Z)
- FitAct: Error Resilient Deep Neural Networks via Fine-Grained Post-Trainable Activation Functions [0.05249805590164901]
Deep neural networks (DNNs) are increasingly being deployed in safety-critical systems such as personal healthcare devices and self-driving cars.
In this paper, we propose FitAct, a low-cost approach to enhance the error resilience of DNNs by deploying fine-grained post-trainable activation functions.
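The summary suggests per-neuron activation functions tuned after training; a plausible minimal form is a learnable per-neuron clipping bound, sketched below (the details are our assumption, not FitAct's actual design):

```python
# Minimal guess at a fine-grained, post-trainable bounded activation:
# each neuron learns its own clipping bound after normal training, so a
# faulty (e.g., bit-flipped) activation cannot grow without limit.
import torch

class BoundedReLU(torch.nn.Module):
    def __init__(self, n_neurons, init_bound=6.0):
        super().__init__()
        # per-neuron bound, trainable in a post-training fine-tuning pass
        self.bound = torch.nn.Parameter(torch.full((n_neurons,), init_bound))

    def forward(self, x):
        return torch.minimum(torch.relu(x), self.bound)
```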
arXiv Detail & Related papers (2021-12-27T07:07:50Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing, and analyzing regression errors in NLP model updates.
We formulate regression-free model updates as a constrained optimization problem.
We also empirically analyze how model ensembling reduces regressions.
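The summary does not give the exact formulation; a plausible shape, with the constraint bounding the negative-flip rate (new model wrong where the old model was right), is:

```latex
\min_{\theta}\ \mathcal{L}(\theta;\mathcal{D})
\quad\text{s.t.}\quad
\frac{1}{n}\sum_{i=1}^{n}
\mathbb{1}\!\left[f_{\theta}(x_i)\neq y_i \,\wedge\, f_{\theta_{\text{old}}}(x_i)=y_i\right]\le\epsilon
```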
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking [83.48804199140758]
We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
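The core of learning-to-mis-rank can be sketched as an inverted triplet objective that pushes true matches farther from the probe than non-matches; the margin and distance convention below are our assumptions:

```python
# Sketch of an inverted ("mis-") ranking objective: minimizing it trains
# the perturbation so true matches end up FARTHER from the probe than
# non-matches. Margin and distance convention are our assumptions.
import torch

def mis_ranking_loss(d_pos, d_neg, margin=0.5):
    """d_pos: probe-to-same-identity distances; d_neg: probe-to-other-
    identity distances (both 1-D tensors)."""
    return torch.relu(margin + d_neg - d_pos).mean()
```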
arXiv Detail & Related papers (2020-04-08T18:48:29Z)