Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors
- URL: http://arxiv.org/abs/2107.06158v1
- Date: Tue, 13 Jul 2021 15:13:39 GMT
- Title: Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors
- Authors: M. Ben Amor, J. Stier, M. Granitzer
- Abstract summary: We aim to investigate any existing correlations between graph-theoretic properties and the robustness of Sparse Neural Networks.
Our hypothesis is that graph-theoretic properties, as priors of neural network structures, are related to their robustness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models have been shown to be vulnerable to adversarial attacks.
This observation has led to analyzing deep learning models not only from the
perspective of their performance measures but also of their robustness to
certain types of adversarial attacks. We take a further step in relating the
architectural structure of neural networks, viewed from a graph-theoretic
perspective, to their robustness. We aim to investigate any existing
correlations between graph-theoretic properties and the robustness of Sparse
Neural Networks. Our hypothesis is that graph-theoretic properties, as priors
of neural network structures, are related to their robustness. To test this
hypothesis, we designed an empirical study with neural network models obtained
from random graphs used as sparse structural priors for the networks. We
additionally evaluated a randomly pruned fully connected network as a point of
reference.
We found that robustness measures are independent of initialization methods
but show weak correlations with graph properties: higher graph densities
correlate with lower robustness, yet higher average path lengths and average
node eccentricities, which typically accompany lower densities, also correlate
negatively with robustness measures. We hope to motivate further empirical and
analytical research toward a tighter answer to our hypothesis.
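The abstract gives no implementation details; what follows is a minimal sketch of the kind of pipeline it describes, assuming networkx and PyTorch, with all names and sizes illustrative: sample a random graph, compute the graph-theoretic properties under study, and impose the graph's edge pattern as a sparse structural prior (a mask) on a layer.

```python
# Minimal sketch (not the authors' code): random graph -> graph
# properties -> sparsity mask on a linear layer.
import networkx as nx
import torch
import torch.nn as nn

n_in, n_out = 32, 32
g = nx.erdos_renyi_graph(n_in + n_out, p=0.2, seed=0)

# Properties whose correlation with robustness the paper examines.
density = nx.density(g)
if nx.is_connected(g):
    avg_path_length = nx.average_shortest_path_length(g)
    avg_eccentricity = sum(nx.eccentricity(g).values()) / g.number_of_nodes()

# Treat the first n_in nodes as inputs and the rest as outputs; an edge
# across the bipartition keeps the corresponding weight entry.
mask = torch.zeros(n_out, n_in)
for i, j in g.edges():
    if i < n_in <= j:
        mask[j - n_in, i] = 1.0

layer = nn.Linear(n_in, n_out)
with torch.no_grad():
    layer.weight.mul_(mask)  # zero out non-edges

# Keep masked weights at zero during training by masking gradients too.
layer.weight.register_hook(lambda grad: grad * mask)
```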
Related papers
- Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis [25.993502776271022]
A large parameter space is considered one of the main suspects behind neural networks' vulnerability to adversarial examples.
Previous research has demonstrated that, depending on the model considered, the algorithm employed to generate adversarial examples may not function properly.
arXiv Detail & Related papers (2024-06-14T14:47:06Z) - On Discprecncies between Perturbation Evaluations of Graph Neural Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on the important (or unimportant) relationships identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
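The protocol is a remove-and-retrain (ROAR-style) evaluation; a minimal sketch under assumed conventions, where the PyG-style `edge_index` of shape (2, E) and the per-edge attribution `scores` are hypothetical names:

```python
# Minimal remove-and-retrain sketch (not the paper's code): drop the
# top-attributed edges, retrain, and compare against random removal.
import torch

def drop_top_edges(edge_index: torch.Tensor, scores: torch.Tensor,
                   frac: float = 0.2) -> torch.Tensor:
    """Remove the `frac` highest-attributed edges; keep the rest."""
    k = int(frac * scores.numel())
    keep = torch.argsort(scores)[:scores.numel() - k]  # ascending order
    return edge_index[:, keep]

# If retraining on the reduced graph hurts accuracy much more than
# removing the same number of random edges, the attribution method
# really did identify important relationships.
```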
arXiv Detail & Related papers (2024-01-01T02:03:35Z) - On the Robustness of Neural Collapse and the Neural Collapse of Robustness [6.80303951699936]
Neural Collapse refers to the curious phenomenon at the end of training of a neural network, where feature vectors and classification weights converge to a very simple geometric arrangement (a simplex).
We study the stability properties of these simplices, and find that the simplex structure disappears under small adversarial attacks.
We identify novel properties of both robust and non-robust machine learning models, and show that earlier layers, unlike later ones, maintain reliable simplices on perturbed data.
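Under neural collapse the K centered, normalized class-mean features form a simplex equiangular tight frame, so every pairwise cosine similarity approaches -1/(K-1); a small numeric probe of that geometry (illustrative only, with random stand-ins for real class means):

```python
# Illustrative probe of simplex geometry (not the paper's code).
import numpy as np

K, d = 10, 512
rng = np.random.default_rng(0)

# Stand-in class means; in practice these are per-class averages of
# penultimate-layer features on clean or adversarially perturbed data.
means = rng.normal(size=(K, d))
means -= means.mean(axis=0)                        # center at global mean
means /= np.linalg.norm(means, axis=1, keepdims=True)

cos = means @ means.T
off_diag = cos[~np.eye(K, dtype=bool)]
print("simplex target -1/(K-1):", -1.0 / (K - 1))
print("cosine mean / std      :", off_diag.mean(), off_diag.std())
# Collapsed features drive the std toward zero; a large std on
# perturbed data signals that the simplex structure has disappeared.
```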
arXiv Detail & Related papers (2023-11-13T16:18:58Z) - Decorrelating neurons using persistence [29.25969187808722]
We present two regularisation terms computed from the weights of a minimum spanning tree of a clique.
We demonstrate that naively minimising all pairwise correlations between neurons yields lower accuracies than our regularisation terms do.
We include a proof of differentiability of our regularisers, thus developing the first effective topological persistence-based regularisation terms.
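A rough sketch of the ingredients, assuming scipy (the paper's actual regularisers are differentiable persistence-based terms; this only shows where the MST edge weights come from):

```python
# Rough sketch (not the paper's regulariser): pairwise neuron
# correlations define a weighted clique; the MST edge weights are the
# raw material for the persistence-based penalties.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

batch, n_neurons = 256, 64
acts = np.random.default_rng(0).normal(size=(batch, n_neurons))

corr = np.corrcoef(acts, rowvar=False)       # (n_neurons, n_neurons)
dist = 1.0 - np.abs(corr)                    # correlated pairs -> short edges
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)            # sparse matrix of tree edges
mst_weights = mst.data                       # the n_neurons - 1 edge weights

# The MST's total edge length equals the total 0-dimensional persistence
# of the correlation filtration; maximizing it pushes neurons apart in
# correlation space, i.e. decorrelates them.
regulariser = -mst_weights.sum()
```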
arXiv Detail & Related papers (2023-08-09T11:09:14Z) - What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? [0.0]
We study adversarial examples of trained neural networks through analytical tools afforded by recent theory advances connecting neural networks and kernel methods.
We show how NTKs allow generating adversarial examples in a "training-free" fashion, and demonstrate that they transfer to fool their finite-width neural net counterparts in the "lazy" regime.
arXiv Detail & Related papers (2022-10-11T16:11:48Z) - Pruning in the Face of Adversaries [0.0]
We evaluate the impact of neural network pruning on the adversarial robustness against L-0, L-2 and L-infinity attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to situations that incorporate additional assumptions on the adversarial scenario and show that depending on the situation, different strategies are optimal.
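This kind of experiment is straightforward to sketch: global magnitude pruning followed by a single-step L-infinity (FGSM) robustness check; a hedged PyTorch sketch in which `model` and the batch `(x, y)` are placeholders and inputs are assumed to lie in [0, 1]:

```python
# Hedged sketch: magnitude pruning plus a single-step L-infinity attack.
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def magnitude_prune(model: torch.nn.Module, amount: float = 0.5) -> None:
    """Globally remove the smallest-magnitude weights."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)

def fgsm_accuracy(model, x, y, eps: float = 8 / 255) -> float:
    """Accuracy under an FGSM perturbation of L-infinity radius eps."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    return (model(x_adv).argmax(dim=1) == y).float().mean().item()
```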
arXiv Detail & Related papers (2021-08-19T09:06:16Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
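A crude numeric probe of that setting, the loss under simultaneous small random perturbations of inputs and weights, might look as follows (illustrative only; `model`, `loss_fn`, `x`, and `y` are placeholders, and random directions are only a weak proxy for worst-case ones):

```python
# Crude probe of joint input/weight perturbations (illustrative only).
import copy
import torch

@torch.no_grad()
def joint_perturbation_loss(model, loss_fn, x, y,
                            eps_x: float = 1e-2, eps_w: float = 1e-3):
    state = copy.deepcopy(model.state_dict())   # save clean weights
    x_p = x + eps_x * torch.randn_like(x)       # perturb the inputs
    for p in model.parameters():                # perturb the weights
        p.add_(eps_w * torch.randn_like(p))
    loss = loss_fn(model(x_p), y).item()
    model.load_state_dict(state)                # restore clean weights
    return loss
```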
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
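The adaptive mechanism can be sketched as propagation decoupled from transformation: compute k-hop representations, then combine them with learned sigmoid "retainment" scores; a hedged PyTorch sketch in which `adj_norm` is an assumed normalized adjacency matrix:

```python
# Hedged sketch of DAGNN-style adaptive aggregation over k hops.
import torch
import torch.nn as nn

class AdaptiveAggregation(nn.Module):
    def __init__(self, dim: int, hops: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # retainment score per depth
        self.hops = hops

    def forward(self, h: torch.Tensor, adj_norm: torch.Tensor):
        reps = [h]
        for _ in range(self.hops):
            reps.append(adj_norm @ reps[-1])   # one more hop of smoothing
        z = torch.stack(reps, dim=1)           # (N, hops + 1, dim)
        s = torch.sigmoid(self.score(z))       # (N, hops + 1, 1)
        return (s * z).sum(dim=1)              # adaptively weighted mix
```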
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relations between the network's architecture and its generalizability from the compression perspective.
We propose a series of intuitive, data-dependent and easily measurable properties that tightly characterize the compressibility and generalizability of neural networks.
arXiv Detail & Related papers (2020-01-14T22:26:57Z)