Self-Healing Robust Neural Networks via Closed-Loop Control
- URL: http://arxiv.org/abs/2206.12963v1
- Date: Sun, 26 Jun 2022 20:25:35 GMT
- Title: Self-Healing Robust Neural Networks via Closed-Loop Control
- Authors: Zhuotong Chen, Qianxiao Li and Zheng Zhang
- Abstract summary: A typical self-healing mechanism is the immune system of the human body.
This paper considers the post-training self-healing of a neural network.
We propose a closed-loop control formulation to automatically detect and fix the errors caused by various attacks or perturbations.
- Score: 23.360913637445964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the wide applications of neural networks, there have been increasing
concerns about their vulnerability. While numerous attack and defense
techniques have been developed, this work investigates the robustness issue
from a new angle: can we design a self-healing neural network that can
automatically detect and fix the vulnerability issue by itself? A typical
self-healing mechanism is the immune system of the human body. This
biology-inspired idea has been used in many engineering designs but is rarely
investigated in deep learning. This paper considers the post-training
self-healing of a neural network, and proposes a closed-loop control
formulation to automatically detect and fix the errors caused by various
attacks or perturbations. We provide a margin-based analysis to explain how
this formulation can improve the robustness of a classifier. To speed up the
inference of the proposed self-healing network, we solve the control problem
by improving the Pontryagin Maximum Principle-based solver. Lastly, we present
an error estimation of the proposed framework for neural networks with
nonlinear activation functions. We validate the performance on several network
architectures against various perturbations. Since the self-healing method does
not need a priori information about data perturbations/attacks, it can handle a
broad class of unforeseen perturbations.
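To make the closed-loop control idea concrete, the sketch below shows one plausible shape for such a post-training correction in PyTorch: every hidden state x_t receives an additive control u_t, and the controls are optimized at inference time to minimize a sum of running losses plus a control penalty, roughly min over u_0, ..., u_{T-1} of sum_t [ E_t(x_t + u_t) + c * ||u_t||^2 ] subject to x_{t+1} = f_t(x_t + u_t). This is a minimal illustration of the general formulation only: the per-layer auto-encoder running loss, the plain gradient-descent inner loop (used here in place of the paper's improved Pontryagin Maximum Principle-based solver), and all dimensions and hyper-parameters are assumptions rather than the authors' implementation.

```python
# Minimal, hypothetical sketch (PyTorch). NOT the paper's implementation: the
# auto-encoder running loss, the gradient-descent inner loop (standing in for
# the improved PMP-based solver), and all sizes are illustrative assumptions.
import torch
import torch.nn as nn


class SelfHealingClassifier(nn.Module):
    """Layer-wise network x_{t+1} = f_t(x_t + u_t); the additive controls u_t
    are optimized at inference time before the prediction is read out."""

    def __init__(self, layers, embed_dims, control_steps=5, lr=0.1, reg=1e-2):
        super().__init__()
        self.layers = nn.ModuleList(layers)        # f_0, ..., f_{T-1}
        self.embed_dims = list(embed_dims)         # dim of each state x_0, ..., x_{T-1}
        # Hypothetical per-layer linear auto-encoders define the running loss;
        # in practice they would be fitted to clean activations after training.
        self.encoders = nn.ModuleList(nn.Linear(d, d // 2) for d in embed_dims)
        self.decoders = nn.ModuleList(nn.Linear(d // 2, d) for d in embed_dims)
        self.control_steps, self.lr, self.reg = control_steps, lr, reg

    def running_loss(self, t, x):
        # Reconstruction error as a proxy for distance to the clean-data manifold.
        recon = self.decoders[t](self.encoders[t](x))
        return (recon - x).pow(2).mean()

    def forward(self, x):
        # One additive control per hidden state, re-optimized for every input batch.
        controls = [torch.zeros(x.shape[0], d, device=x.device, requires_grad=True)
                    for d in self.embed_dims]
        opt = torch.optim.SGD(controls, lr=self.lr)
        with torch.enable_grad():                  # works even under torch.no_grad()
            for _ in range(self.control_steps):
                opt.zero_grad()
                h, loss = x, 0.0
                for t, layer in enumerate(self.layers):
                    h = h + controls[t]            # controlled state x_t + u_t
                    loss = loss + self.running_loss(t, h) \
                                + self.reg * controls[t].pow(2).mean()
                    h = layer(h)
                loss.backward()
                opt.step()
        # Final forward pass with the optimized (now frozen) controls.
        h = x
        for t, layer in enumerate(self.layers):
            h = layer(h + controls[t].detach())
        return h


# Toy usage (shapes are illustrative): a two-block MLP on 784-dimensional inputs.
blocks = [nn.Sequential(nn.Linear(784, 256), nn.ReLU()), nn.Linear(256, 10)]
model = SelfHealingClassifier(blocks, embed_dims=[784, 256])
logits = model(torch.randn(8, 784))                # (8, 10) class scores
```

The paper's improved PMP-based solver exists precisely to make this per-input inner optimization fast enough for inference; the plain gradient loop above is only the simplest stand-in for it.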
Related papers
- Message Passing Variational Autoregressive Network for Solving Intractable Ising Models [6.261096199903392]
Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks.
Here we propose a variational autoregressive architecture with a message passing mechanism, which can effectively utilize the interactions between spin variables.
The new network trained under an annealing framework outperforms existing methods in solving several prototypical Ising spin Hamiltonians, especially for larger spin systems at low temperatures.
arXiv Detail & Related papers (2024-04-09T11:27:07Z)
- Rational Neural Network Controllers [0.0]
Recent work has demonstrated the effectiveness of neural networks in control systems (known as neural feedback loops).
One of the big challenges of this approach is that neural networks have been shown to be sensitive to adversarial attacks.
This paper considers rational neural networks and presents novel rational activation functions, which can be used effectively in robustness problems for neural feedback loops.
arXiv Detail & Related papers (2023-07-12T16:35:41Z)
- Semantic-Based Neural Network Repair [4.092001692194709]
We propose an approach to automatically repair erroneous neural networks.
Our approach is based on an executable semantics of deep learning layers.
We evaluate our approach for two usage scenarios, i.e., repairing automatically generated neural networks and manually written ones suffering from common model bugs.
arXiv Detail & Related papers (2023-06-12T16:18:32Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- On the Adversarial Robustness of Quantized Neural Networks [2.0625936401496237]
It is unclear how model compression techniques may affect the robustness of AI algorithms against adversarial attacks.
This paper explores the effect of quantization, one of the most common compression techniques, on the adversarial robustness of neural networks.
arXiv Detail & Related papers (2021-05-01T11:46:35Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Towards Robust Neural Networks via Close-loop Control [12.71446168207573]
Deep neural networks are vulnerable to various perturbations due to their black-box nature.
Recent studies have shown that a deep neural network can misclassify data even if the input is perturbed by an imperceptible amount.
arXiv Detail & Related papers (2021-02-03T03:50:35Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.