Title: $\varepsilon$-weakened Robustness of Deep Neural Networks
Authors: Pei Huang, Yuting Yang, Minghao Liu, Fuqi Jia, Feifei Ma and Jian
Zhang
Abstract summary: This paper introduces a notion of $\varepsilon$-weakened robustness for analyzing the reliability and stability of deep neural networks (DNNs).
We prove that the $\varepsilon$-weakened robustness decision problem is PP-complete and give a statistical decision algorithm with a user-controllable error bound.
We also show its potential application in analyzing quality issues.
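Formally (a minimal sketch consistent with the summary; the uniform measure and the ball-shaped region are assumptions here, not taken from the paper): a classifier $F$ is $\varepsilon$-weakened robust in a perturbation region $B(x_0, r)$ around an input $x_0$ if
\[
  \mu\bigl(\{\, x \in B(x_0, r) : F(x) \neq F(x_0) \,\}\bigr) \;\le\; \varepsilon \cdot \mu\bigl(B(x_0, r)\bigr),
\]
where $\mu$ is the (uniform) volume measure. Conventional robustness is the special case $\varepsilon = 0$.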
Abstract: This paper introduces a notion of $\varepsilon$-weakened robustness for analyzing the reliability and stability of deep neural networks (DNNs). Unlike conventional robustness, which focuses on the "perfect" safe region free of adversarial examples, $\varepsilon$-weakened robustness focuses on the region where the proportion of adversarial examples is bounded by a user-specified $\varepsilon$. A smaller $\varepsilon$ means a smaller chance of failure. Under this definition, we can give conclusive results for the regions that conventional robustness ignores. We prove that the $\varepsilon$-weakened robustness decision problem is PP-complete and give a statistical decision algorithm with a user-controllable error bound. Furthermore, we derive an algorithm to find the maximum $\varepsilon$-weakened robustness radius. The time complexity of our algorithms is polynomial in the dimension and size of the network, so they scale to large real-world networks. We also show the potential application of $\varepsilon$-weakened robustness in analyzing quality issues.
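To make the statistical decision procedure and the radius search concrete, here is a minimal Monte Carlo sketch, not the paper's exact algorithm: it assumes uniform sampling over an $L_\infty$ ball and a Hoeffding-style sample-size bound, and the names predict, margin, and max_robust_radius are illustrative.

import numpy as np

def sample_linf_ball(x0, radius, n):
    # Uniform per-coordinate perturbations within the L-infinity ball.
    noise = np.random.uniform(-radius, radius, size=(n,) + x0.shape)
    return x0[None, ...] + noise

def is_eps_weakened_robust(predict, x0, label, radius, eps,
                           delta=1e-3, margin=0.5):
    # Decide whether the fraction of adversarial examples in the ball is
    # at most eps, with error probability at most delta (Hoeffding bound).
    # `margin` sets the indifference gap around the threshold eps.
    t = margin * eps / 2.0                                # deviation tolerance
    n = int(np.ceil(np.log(2.0 / delta) / (2.0 * t**2)))  # sample size
    xs = sample_linf_ball(x0, radius, n)
    adv_fraction = np.mean(predict(xs) != label)          # predict: batch -> labels
    return adv_fraction <= eps + t

def max_robust_radius(predict, x0, label, eps, r_hi, tol=1e-3):
    # Binary search for an estimate of the maximum eps-weakened
    # robustness radius within [0, r_hi].
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_eps_weakened_robust(predict, x0, label, mid, eps):
            lo = mid
        else:
            hi = mid
    return lo

The sample size grows like $1/\varepsilon^2$ but is independent of the network's internals; the network enters only through forward passes in predict, which is consistent with the abstract's claim of polynomial cost in the dimension and size of the network.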
Related papers
Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits [55.957560311008926] We propose a piecewise stationary linear bandit (PSLB) model where the quality of an arm is measured by its return averaged over all contexts.
PS$\varepsilon$BAI$+$ is guaranteed to identify an $\varepsilon$-optimal arm with probability $\ge 1-\delta$ and with a minimal number of samples. (arXiv, 2024-10-10)
Bayesian Inference with Deep Weakly Nonlinear Networks [57.95116787699412] We show at a physics level of rigor that Bayesian inference with a fully connected neural network is solvable.
We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature. (arXiv, 2024-05-26)
Robust Graph Neural Networks via Unbiased Aggregation [18.681451049083407] The adversarial robustness of Graph Neural Networks (GNNs) has been questioned due to the false sense of security uncovered by strong adaptive attacks.
We provide a unified robust estimation point of view to understand their robustness and limitations. (arXiv, 2023-11-25)
Sample Complexity of Neural Policy Mirror Descent for Policy Optimization on Low-Dimensional Manifolds [75.51968172401394] We study the sample complexity of the neural policy mirror descent (NPMD) algorithm with deep convolutional neural networks (CNNs).
In each iteration of NPMD, both the value function and the policy can be well approximated by CNNs.
We show that NPMD can leverage the low-dimensional structure of the state space to escape from the curse of dimensionality. (arXiv, 2023-09-25)
Deep neural network expressivity for optimal stopping problems [2.741266294612776] An optimal stopping problem can be approximated with error at most $\varepsilon$ by a deep ReLU neural network of size at most $\kappa d^{\mathfrak{q}} \varepsilon^{-\mathfrak{r}}$.
This proves that deep neural networks do not suffer from the curse of dimensionality when employed to solve optimal stopping problems. (arXiv, 2022-10-19)
Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118] This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with models trained via existing approaches in the literature. (arXiv, 2022-08-08)
Sample Complexity of Nonparametric Off-Policy Evaluation on Low-Dimensional Manifolds using Deep Networks [71.95722100511627] We consider the off-policy evaluation problem of reinforcement learning using deep neural networks.
We show that, by choosing the network size appropriately, one can leverage the low-dimensional manifold structure in the Markov decision process. (arXiv, 2022-06-06)
Certifiably Robust Interpretation via Rényi Differential Privacy [77.04377192920741] We study the problem of interpretation robustness from a new perspective of Rényi differential privacy (RDP).
First, it can offer provable and certifiable top-$k$ robustness.
Second, our proposed method offers $\sim 10\%$ better experimental robustness than existing approaches.
Third, our method can provide a smooth tradeoff between robustness and computational efficiency. (arXiv, 2021-07-04)
On the stability of deep convolutional neural networks under irregular or random deformations [0.0] Robustness under location deformations for deep convolutional neural networks (DCNNs) is of great theoretical and practical interest.
Here we address this issue for any field $\tau \in L^\infty(\mathbb{R}^d;\mathbb{R}^d)$, without any additional regularity assumption.
We prove that for signals in multiresolution approximation spaces $U_s$ at scale $s$, stability holds in the regime $\|\tau\|_{L^\infty}/s \ll 1$. (arXiv, 2021-04-24)
Towards Deep Learning Models Resistant to Large Perturbations [0.0] Adversarial robustness has proven to be a required property of machine learning algorithms.
We show that the well-established algorithm called "adversarial training" fails to train a deep neural network given a large, but reasonable, perturbation magnitude. (arXiv, 2020-03-30)