On the Computational Entanglement of Distant Features in Adversarial Machine Learning
- URL: http://arxiv.org/abs/2309.15669v7
- Date: Mon, 30 Sep 2024 05:58:19 GMT
- Title: On the Computational Entanglement of Distant Features in Adversarial Machine Learning
- Authors: YenLung Lai, Xingbo Dong, Zhe Jin
- Abstract summary: We introduce the concept of "computational entanglement."
Computational entanglement enables the network to achieve zero loss by fitting random noise, even on previously unseen test samples.
We present a novel application of computational entanglement: transforming worst-case adversarial examples, inputs that are highly non-robust and uninterpretable to human observers, into outputs that are both recognizable and robust.
- Score: 8.87656044562629
- License:
- Abstract: In this research, we introduce the concept of "computational entanglement," a phenomenon observed in overparameterized feedforward linear networks that enables the network to achieve zero loss by fitting random noise, even on previously unseen test samples. Analyzing this behavior through spacetime diagrams reveals its connection to length contraction, where both training and test samples converge toward a shared normalized point within a flat Riemannian manifold. Moreover, we present a novel application of computational entanglement in transforming worst-case adversarial examples (inputs that are highly non-robust and uninterpretable to human observers) into outputs that are both recognizable and robust. This provides new insights into the behavior of non-robust features in adversarial example generation, underscoring the critical role of computational entanglement in enhancing model robustness and advancing our understanding of neural networks in adversarial contexts.
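The interpolation part of this claim can be illustrated with a minimal sketch, assuming only a generic overparameterized linear least-squares model (the code below is not taken from the paper): with far more parameters than training samples, the minimum-norm solution fits pure random-noise targets to essentially zero training loss. The paper's stronger observation about unseen test samples is not reproduced by this toy example.

```python
# Minimal sketch (not the paper's code): an overparameterized linear model
# interpolating pure random noise. With many more parameters than samples,
# the minimum-norm least-squares solution (which gradient descent from a zero
# initialization converges to) drives the training loss to ~0.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 50, 500                 # d >> n: overparameterized
X = rng.normal(size=(n_samples, n_features))    # random inputs
y = rng.normal(size=n_samples)                  # targets are pure noise

w = np.linalg.pinv(X) @ y                       # minimum-norm interpolator

train_mse = np.mean((X @ w - y) ** 2)
print(f"training MSE on random-noise targets: {train_mse:.2e}")  # ~0 up to numerical precision
```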
Related papers
- On the Robustness of Neural Collapse and the Neural Collapse of Robustness [6.80303951699936]
Neural Collapse refers to the curious phenomenon at the end of training of a neural network, where feature vectors and classification weights converge to a very simple geometrical arrangement (a simplex).
We study the stability properties of these simplices, and find that the simplex structure disappears under small adversarial attacks.
We identify novel properties of both robust and non-robust machine learning models, and show that earlier layers, unlike later ones, maintain reliable simplices on perturbed data.
arXiv Detail & Related papers (2023-11-13T16:18:58Z)
- On the ISS Property of the Gradient Flow for Single Hidden-Layer Neural Networks with Linear Activations [0.0]
We investigate the effects of overfitting on the robustness of gradient-descent training when subject to uncertainty on the gradient estimation.
We show that the general overparametrized formulation introduces a set of spurious equilibria which lie outside the set where the loss function is minimized.
arXiv Detail & Related papers (2023-05-17T02:26:34Z)
- Phenomenology of Double Descent in Finite-Width Neural Networks [29.119232922018732]
Double descent delineates the behaviour of models depending on the regime they belong to.
We use influence functions to derive suitable expressions of the population loss and its lower bound.
Building on our analysis, we investigate how the loss function affects double descent.
arXiv Detail & Related papers (2022-03-14T17:39:49Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, BATer for short, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers [4.511923587827301]
We show that in a neural network trained in a standard way, non-robust features respond to small, "non-semantic" patterns.
Adversarial examples can then be formed via minimal perturbations to these small, entangled patterns; a generic sketch of such a minimal-perturbation step appears after this list.
arXiv Detail & Related papers (2021-02-09T20:21:31Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why deep neural networks perform poorly under adversarial perturbations.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
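As a companion to the entry above on entangled robust and non-robust features, here is a generic sketch of how an adversarial example can be formed via a minimal, gradient-aligned perturbation (a fast-gradient-sign step). It is not the method of any paper listed here; the toy logistic-regression "model", its weights, and the input are all made-up placeholders.

```python
# Generic fast-gradient-sign sketch (not any listed paper's method): a small
# perturbation along the sign of the input-gradient of the loss lowers the
# model's score for the true class. Weights and inputs here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)        # fixed weights of a toy logistic-regression "network"
x = rng.normal(size=16)        # a clean input
y = 1.0                        # its true label in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(x, y, w):
    # Gradient w.r.t. the input x of the binary cross-entropy of sigmoid(w @ x).
    return (sigmoid(w @ x) - y) * w

eps = 0.1                                           # small perturbation budget
x_adv = x + eps * np.sign(input_gradient(x, y, w))  # minimal gradient-sign step

print("score for true class, clean input:    ", sigmoid(w @ x))
print("score for true class, perturbed input:", sigmoid(w @ x_adv))
```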