Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of
Conjugate Variables in System Attacks
- URL: http://arxiv.org/abs/2402.10983v1
- Date: Fri, 16 Feb 2024 02:11:27 GMT
- Title: Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of
Conjugate Variables in System Attacks
- Authors: Jun-Jie Zhang, Deyu Meng
- Abstract summary: Neural networks demonstrate inherent vulnerability to small, non-random perturbations, which emerge as adversarial attacks.
A mathematical congruence between this mechanism and the uncertainty principle of quantum physics casts light on a hitherto unanticipated interdisciplinary connection.
- Score: 54.565579874913816
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Neural networks demonstrate inherent vulnerability to small, non-random
perturbations that emerge as adversarial attacks. Such attacks, constructed from
the gradient of the loss function with respect to the input, can be viewed as
input conjugates, revealing a systemic fragility within the network structure.
Intriguingly, a mathematical congruence exists between this mechanism and the
uncertainty principle of quantum physics, casting light on a hitherto
unanticipated interdisciplinary connection. This susceptibility is intrinsic to
neural network systems in general, highlighting not only the innate
vulnerability of these networks but also suggesting potential advances in this
interdisciplinary area for understanding these black-box networks.
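Since the attack is built from the gradient of the loss with respect to the input, it can be made concrete with a gradient-sign (FGSM-style) step. The sketch below is a generic illustration of that mechanism, not the paper's specific construction; the model, labels, and epsilon value are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_perturbation(model, x, y, epsilon=0.03):
        # Differentiate the loss J with respect to the input x, not the
        # weights: this gradient is the "input conjugate" the abstract
        # describes.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Small, non-random step along the sign of dJ/dx.
        return (x + epsilon * x.grad.sign()).detach()

Against a typical trained classifier, the returned input is visually indistinguishable from the original yet often changes the predicted class.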
Related papers
- Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks (a minimal sketch of this recipe follows the entry).
arXiv Detail & Related papers (2024-02-29T00:02:33Z)
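As a concrete reading of the layer-normalization-plus-weight-decay recipe above, a minimal sketch; the architecture and hyperparameters here are illustrative assumptions, not the paper's settings.

    import torch.nn as nn
    import torch.optim as optim

    # Layer normalization after each hidden layer ...
    model = nn.Sequential(
        nn.Linear(32, 128), nn.LayerNorm(128), nn.ReLU(),
        nn.Linear(128, 128), nn.LayerNorm(128), nn.ReLU(),
        nn.Linear(128, 10),
    )
    # ... combined with weight decay in the optimizer.
    optimizer = optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)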
- Predicting Instability in Complex Oscillator Networks: Limitations and
Potentials of Network Measures and Machine Learning [0.0]
We collect 46 relevant network measures and find that no small subset can reliably predict stability.
The performance of GNNs can only be matched by combining all network measures and nodewise machine learning.
This suggests that correlations of network measures and function may be misleading, and that GNNs capture the causal relationship between structure and stability substantially better.
arXiv Detail & Related papers (2024-02-27T13:34:08Z)
- Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks.
There may exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks; a toy empirical probe of such perturbations follows this entry.
arXiv Detail & Related papers (2023-01-27T15:40:38Z)
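The paper certifies such perturbations with mixed-integer programming; as a toy stand-in, one can at least search empirically for large perturbations that leave the decision unchanged. Everything below (the random search, the scale parameter) is this sketch's assumption, not the paper's method.

    import torch

    def decision_preserving(model, x, scale=1.0, trials=100):
        # Collect large random perturbations that do not change argmax.
        base = model(x).argmax(dim=-1)
        hits = []
        for _ in range(trials):
            delta = scale * torch.randn_like(x)
            if (model(x + delta).argmax(dim=-1) == base).all():
                hits.append(delta)
        return hits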
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear (a layer-by-layer rank probe is sketched after this entry).
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
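One operational reading of "rank measures information flow": track the numerical rank of the batch feature matrix after each layer and watch how it evolves with depth. The toy model and sizes below are this sketch's choices.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(),
                          nn.Linear(64, 64))
    h = torch.randn(256, 64)  # a batch of inputs as a feature matrix
    for layer in model:
        h = layer(h)
        # Numerical rank of the features after this layer.
        print(type(layer).__name__, torch.linalg.matrix_rank(h).item())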
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- On the uncertainty principle of neural networks [4.014046905033123]
We show that the accuracy-robustness trade-off is an intrinsic property whose underlying mechanism is deeply related to the uncertainty principle in quantum mechanics.
We find that for a neural network to be both accurate and robust, it needs to resolve the features of the two conjugate parts $x$ (the inputs) and $\Delta$ (the derivatives of the normalized loss function $J$ with respect to $x$); the quantum side of this analogy is written out after this entry.
arXiv Detail & Related papers (2022-05-03T13:48:12Z)
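For reference, the quantum relation behind the analogy, together with the conjugate pair named in the summary above; the pairing comes from the abstract, while the precise trade-off bound proved in the paper is not reproduced here.

    \sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2}
    \qquad \text{(Heisenberg, for the conjugate pair } x, p\text{)}

    \Delta = \frac{\partial J}{\partial x}
    \qquad \text{(the input's conjugate in the neural-network setting)}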
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks that are compact, easy to train, reliable, and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks (a minimal Toeplitz layer is sketched after this entry).
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
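A Toeplitz matrix is constant along each diagonal, so an n-by-n weight needs only 2n - 1 parameters instead of n^2. The layer below is a generic illustration of that structure, not the specific architecture of the thesis.

    import torch
    import torch.nn as nn

    class ToeplitzLinear(nn.Module):
        """n x n linear map parameterized by 2n - 1 diagonal values."""
        def __init__(self, n):
            super().__init__()
            self.diag = nn.Parameter(torch.randn(2 * n - 1) / n ** 0.5)
            i = torch.arange(n)
            # index[r, c] = r - c + n - 1 picks the value for diagonal r - c.
            self.register_buffer("index", i.unsqueeze(1) - i.unsqueeze(0) + n - 1)

        def forward(self, x):
            # Materialize the Toeplitz weight and apply it as a linear map.
            return x @ self.diag[self.index].T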
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- A neural network model of perception and reasoning [0.0]
We show that a simple set of biologically consistent organizing principles confers these capabilities on neuronal networks.
We implement these principles in a novel machine learning algorithm, based on concept construction instead of optimization, to design deep neural networks that reason with explainable neuron activity.
arXiv Detail & Related papers (2020-02-26T06:26:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.