Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending
Against Adversarial Attacks
- URL: http://arxiv.org/abs/2110.12976v1
- Date: Mon, 25 Oct 2021 14:09:45 GMT
- Title: Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending
Against Adversarial Attacks
- Authors: Qiyu Kang, Yang Song, Qinxu Ding and Wee Peng Tay
- Abstract summary: We propose a stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks (SODEF)
We provide theoretical results that give insights into the stability of SODEF as well as the choice of regularizers to ensure its stability.
- Score: 32.88499015927756
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are well-known to be vulnerable to adversarial
attacks, where malicious human-imperceptible perturbations are included in the
input to the deep network to fool it into making a wrong classification. Recent
studies have demonstrated that neural Ordinary Differential Equations (ODEs)
are intrinsically more robust against adversarial attacks compared to vanilla
DNNs. In this work, we propose a stable neural ODE with Lyapunov-stable
equilibrium points for defending against adversarial attacks (SODEF). By
ensuring that the equilibrium points of the ODE solution used as part of SODEF
are Lyapunov-stable, the ODE solution for an input with a small perturbation
converges to the same solution as the unperturbed input. We provide theoretical
results that give insights into the stability of SODEF as well as the choice of
regularizers to ensure its stability. Our analysis suggests that our proposed
regularizers force the extracted feature points to be within a neighborhood of
the Lyapunov-stable equilibrium points of the ODE. SODEF is compatible with
many defense methods and can be applied to any neural network's final regressor
layer to enhance its stability against adversarial attacks.
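The core intuition can be illustrated with a toy ODE (not SODEF's actual learned feature dynamics): when an equilibrium is Lyapunov-stable, trajectories started from a clean input and from a slightly perturbed input both converge to the same point, so the downstream classifier sees (nearly) the same feature either way.

```python
import numpy as np

def stable_dynamics(x):
    """Toy ODE dx/dt = -x + 0.5*tanh(x). The origin is the unique
    equilibrium, and the Jacobian there is -0.5*I (all eigenvalues
    negative), so it is Lyapunov-stable."""
    return -x + 0.5 * np.tanh(x)

def integrate(x0, steps=2000, dt=0.01):
    """Forward-Euler integration of the ODE from initial state x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * stable_dynamics(x)
    return x

clean = integrate([0.8, -0.3])                     # feature of the clean input
perturbed = integrate([0.85, -0.35])               # small input perturbation
print(np.linalg.norm(clean - perturbed))           # near zero: same equilibrium
```

Both trajectories collapse onto the stable equilibrium, so the perturbation is washed out by the flow rather than amplified.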
Related papers
- Robust Stable Spiking Neural Networks [45.84535743722043]
Spiking neural networks (SNNs) are gaining popularity in deep learning due to their low energy budget on neuromorphic hardware.
Many studies have been conducted to defend SNNs from the threat of adversarial attacks.
This paper aims to uncover the robustness of SNN through the lens of the stability of nonlinear systems.
arXiv Detail & Related papers (2024-05-31T08:40:02Z)
- Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach [27.99849885813841]
Graph neural networks (GNNs) are vulnerable to adversarial perturbations.
This paper investigates GNNs derived from diverse neural flows.
We argue that Lyapunov stability, despite its common use, does not necessarily ensure adversarial robustness.
arXiv Detail & Related papers (2023-10-10T07:59:23Z)
- Ortho-ODE: Enhancing Robustness of Neural ODEs against Adversarial Attacks [0.0]
We show that by controlling the Lipschitz constant of the ODE dynamics the robustness can be significantly improved.
We corroborate the enhanced robustness on numerous datasets.
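For a linear layer, the Lipschitz constant equals the largest singular value of its weight matrix, and a standard way to control it is spectral normalization. This is only a generic sketch of the idea; the summary does not specify the paper's exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

# The Lipschitz constant of the linear map x -> W @ x is the
# largest singular value (spectral norm) of W.
lip = np.linalg.norm(W, 2)

# Spectral normalization: rescale W so the map is at most 1-Lipschitz,
# bounding how much an input perturbation can grow through the layer.
W_sn = W / lip
print(np.linalg.norm(W_sn, 2))  # 1.0
```

A 1-Lipschitz dynamics function guarantees that nearby ODE trajectories cannot diverge faster than a known rate, which is the lever behind the robustness claim.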
arXiv Detail & Related papers (2023-05-16T05:37:06Z)
- Lyapunov-Stable Deep Equilibrium Models [47.62037001903746]
We propose a robust DEQ model with provable stability guarantees via Lyapunov theory.
We evaluate LyaDEQ models under well-known adversarial attacks.
We show that the LyaDEQ model can be combined with other defense methods, such as adversarial training, to achieve even better robustness.
arXiv Detail & Related papers (2023-04-25T10:36:15Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness defends against adversarial examples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- LyaNet: A Lyapunov Framework for Training Neural ODEs [59.73633363494646]
We propose a method for training ordinary differential equations by using a control-theoretic Lyapunov condition for stability.
Our approach, called LyaNet, is based on a novel Lyapunov loss formulation that encourages the inference dynamics to converge quickly to the correct prediction.
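A minimal sketch of such a Lyapunov loss, using a hypothetical quadratic potential V(x) = ||x - target||² rather than LyaNet's actual formulation: the loss penalizes any state where V fails to decay exponentially along the dynamics.

```python
import numpy as np

def lyapunov_loss(x, f_x, target, kappa=1.0):
    """Penalize violations of the decrease condition dV/dt + kappa*V <= 0
    for V(x) = ||x - target||^2. Along the flow dx/dt = f(x), the time
    derivative is dV/dt = 2 (x - target) . f(x)."""
    V = np.sum((x - target) ** 2)
    dVdt = 2.0 * np.dot(x - target, f_x)
    return max(0.0, dVdt + kappa * V)

x = np.array([1.0, 0.0])
target = np.zeros(2)
print(lyapunov_loss(x, -x, target))  # 0.0: dynamics flow toward the target
print(lyapunov_loss(x, x, target))   # 3.0: dynamics flow away, penalized
```

Minimizing this loss over training states drives the learned dynamics to pull every trajectory toward the correct prediction, which is the stability-as-training-objective idea the abstract describes.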
arXiv Detail & Related papers (2022-02-05T10:13:14Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
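The underlying mechanism, borrowed from randomized smoothing of classifiers, can be sketched with a toy base classifier (the paper's certificates over cumulative reward are more involved): the smoothed decision is a majority vote over Gaussian-perturbed copies of the input, and the vote margin yields a certified radius.

```python
import numpy as np

def base_classifier(x):
    """Toy base classifier: sign of the first coordinate."""
    return int(x[0] > 0)

def smoothed_classify(x, sigma=0.5, n=1000, rng=None):
    """Randomized smoothing: classify many Gaussian-perturbed copies of x
    and return the majority vote. A large vote margin certifies that no
    small norm-bounded perturbation of x can flip the decision."""
    if rng is None:
        rng = np.random.default_rng(0)
    votes = [base_classifier(x + sigma * rng.normal(size=x.shape))
             for _ in range(n)]
    return int(np.mean(votes) > 0.5)

print(smoothed_classify(np.array([1.0, 0.0])))   # 1
print(smoothed_classify(np.array([-1.0, 0.0])))  # 0
```

The policy-smoothing paper lifts this per-input vote to whole trajectories, certifying a lower bound on total reward instead of a single label.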
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients [30.560531008995806]
We introduce a provably stable architecture for Neural Ordinary Differential Equations (ODEs) which achieves non-trivial adversarial robustness under white-box attacks.
Inspired by dynamical system theory, we design a neural stabilized ODE network named SONet whose ODE blocks are skew-symmetric and proved to be input-output stable.
With natural training, SONet can achieve comparable robustness with the state-of-the-art adversarial defense methods, without sacrificing natural accuracy.
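A toy check of why skew-symmetric dynamics are stable (SONet's actual blocks combine such matrices with nonlinearities): for A with Aᵀ = -A, d/dt ||x||² = 2 xᵀAx = 0, so the linear flow dx/dt = Ax preserves the norm of the state and can never amplify a perturbation.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))
A = W - W.T  # skew-symmetric: A.T == -A, so x.T @ A @ x == 0 for all x

# Integrate dx/dt = A x with small Euler steps; the exact flow
# x(t) = expm(A t) x0 is an isometry, so ||x(t)|| stays at ||x0||.
x = np.array([1.0, 2.0, -1.0])
dt = 1e-4
for _ in range(10000):
    x = x + dt * (A @ x)
print(np.linalg.norm(x))  # ≈ sqrt(6), the norm of the initial state
```

The small residual drift away from sqrt(6) comes from the Euler discretization, not the continuous dynamics; the continuous flow is exactly norm-preserving.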
arXiv Detail & Related papers (2020-09-28T08:51:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.