On the uncertainty principle of neural networks
- URL: http://arxiv.org/abs/2205.01493v1
- Date: Tue, 3 May 2022 13:48:12 GMT
- Title: On the uncertainty principle of neural networks
- Authors: Jun-Jie Zhang, Dong-Xiao Zhang, Jian-Nan Chen, Long-Gang Pang
- Abstract summary: We show that the accuracy-robustness trade-off is an intrinsic property whose underlying mechanism is deeply related to the uncertainty principle in quantum mechanics.
We find that for a neural network to be both accurate and robust, it needs to resolve the features of the two parts $x$ (the inputs) and $\Delta$ (the derivatives of the normalized loss function $J$ with respect to $x$).
- Score: 4.014046905033123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the successes in many fields, it is found that neural networks are
vulnerable and that it is difficult for them to be both accurate and robust (robust means that
the prediction of the trained network stays unchanged for inputs with
non-random perturbations introduced by adversarial attacks). Various empirical
and analytic studies have suggested that there is more or less a trade-off
between the accuracy and robustness of neural networks. If the trade-off is
inherent, applications based on neural networks are vulnerable and yield
untrustworthy predictions. It is then essential to ask whether the trade-off is
an inherent property or not. Here, we show that the accuracy-robustness
trade-off is an intrinsic property whose underlying mechanism is deeply related
to the uncertainty principle in quantum mechanics. We find that for a neural
network to be both accurate and robust, it needs to resolve the features of the
two conjugated parts $x$ (the inputs) and $\Delta$ (the derivatives of the
normalized loss function $J$ with respect to $x$), respectively. Analogous to
the position-momentum conjugation in quantum mechanics, we show that the inputs
and their conjugates cannot be resolved by a neural network simultaneously.
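To make the conjugate quantity concrete, the sketch below is a minimal PyTorch illustration (not code from the paper): it computes $\Delta = \partial J / \partial x$ for a toy classifier and perturbs the input along $\Delta$, the kind of non-random perturbation against which robustness is measured. The tiny network, the random input and label, and the step size epsilon are assumptions chosen only to keep the sketch self-contained; the paper's normalized loss is approximated here by the raw cross-entropy loss.

```python
import torch
import torch.nn as nn

# Minimal illustration (assumed setup, not the paper's code): compute
# Delta = dJ/dx, the gradient of the loss J with respect to the input x,
# and use it to form a small non-random perturbation of the input.

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
criterion = nn.CrossEntropyLoss()

x = torch.rand(1, 28 * 28, requires_grad=True)   # stand-in input
y = torch.tensor([3])                            # stand-in label

J = criterion(model(x), y)                       # loss J(x); raw loss stands in for the normalized J
J.backward()
delta = x.grad                                   # Delta = dJ/dx, the conjugate of x

# Perturbing the input along Delta (an FGSM-style step) is exactly the kind of
# non-random perturbation described in the abstract.
epsilon = 0.05
x_adv = (x + epsilon * delta.sign()).detach()

print("prediction on x     :", model(x).argmax(dim=1).item())
print("prediction on x_adv :", model(x_adv).argmax(dim=1).item())
```

A network that is highly sensitive to $\Delta$ here tends to be accurate but easy to fool with such a step; the abstract's claim is that resolving $x$ and $\Delta$ sharply at the same time is not possible, in analogy with position and momentum.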
Related papers
- Emergent weight morphologies in deep neural networks [0.0]
We show that training deep neural networks gives rise to emergent weight morphologies independent of the training data.
Our work demonstrates emergence in the training of deep neural networks, which impacts their achievable performance.
arXiv Detail & Related papers (2025-01-09T19:48:51Z) - Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of
Conjugate Variables in System Attacks [54.565579874913816]
Neural networks demonstrate inherent vulnerability to small, non-random perturbations, emerging as adversarial attacks.
A mathematical congruence emerges between this mechanism and the uncertainty principle of quantum physics, casting light on a previously unanticipated interdisciplinary connection.
arXiv Detail & Related papers (2024-02-16T02:11:27Z) - The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [71.14237199051276]
We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
arXiv Detail & Related papers (2023-09-13T16:33:27Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Neural Bayesian Network Understudy [13.28673601999793]
We show that a neural network can be trained to output conditional probabilities, providing approximately the same functionality as a Bayesian Network.
We propose two training strategies that allow encoding the independence relations inferred from a given causal structure into the neural network.
arXiv Detail & Related papers (2022-11-15T15:56:51Z) - What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? [0.0]
We study adversarial examples of trained neural networks through analytical tools afforded by recent theory advances connecting neural networks and kernel methods.
We show how NTKs allow adversarial examples to be generated in a "training-free" fashion, and demonstrate that they transfer to fool their finite-width neural net counterparts in the "lazy" regime.
arXiv Detail & Related papers (2022-10-11T16:11:48Z) - Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of neural networks measures the information flowing across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z) - Quasi-orthogonality and intrinsic dimensions as measures of learning and
generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of neural networks' feature spaces may jointly serve as discriminants of a network's performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z) - A neural network model of perception and reasoning [0.0]
We show that a simple set of biologically consistent organizing principles confers these capabilities on neuronal networks.
We implement these principles in a novel machine learning algorithm, based on concept construction instead of optimization, to design deep neural networks that reason with explainable neuron activity.
arXiv Detail & Related papers (2020-02-26T06:26:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented (including all of the above) and is not responsible for any consequences.