Adversarial Machine Learning in Latent Representations of Neural
Networks
- URL: http://arxiv.org/abs/2309.17401v3
- Date: Tue, 30 Jan 2024 16:24:01 GMT
- Title: Adversarial Machine Learning in Latent Representations of Neural
Networks
- Authors: Milin Zhang, Mohammad Abdi and Francesco Restuccia
- Abstract summary: Distributed deep neural networks (DNNs) have been shown to reduce the computational burden of mobile devices and decrease the end-to-end inference latency in edge computing scenarios.
This paper rigorously analyzes the robustness of distributed DNNs against adversarial action.
- Score: 9.372908891132772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed deep neural networks (DNNs) have been shown to reduce the
computational burden of mobile devices and decrease the end-to-end inference
latency in edge computing scenarios. While distributed DNNs have been studied,
to the best of our knowledge their resilience to adversarial action remains an
open problem. In this paper, we fill this
research gap by rigorously analyzing the robustness of distributed DNNs against
adversarial action. We cast this problem in the context of information theory
and introduce two new measurements for distortion and robustness. Our
theoretical findings indicate that (i) assuming the same level of information
distortion, latent features are always more robust than input representations;
(ii) the adversarial robustness is jointly determined by the feature dimension
and the generalization capability of the DNN. To test our theoretical findings,
we perform extensive experimental analysis considering 6 different DNN
architectures, 6 different approaches to distributed DNNs, and 10 different
adversarial attacks on the ImageNet-1K dataset. Our experimental results
support our theoretical findings by showing that compressed latent
representations can reduce the success rate of adversarial attacks by 88% in
the best case and by 57% on average compared to attacks on the input space.
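To make the input-space versus latent-space comparison concrete, here is a minimal PyTorch sketch of the experimental idea. It is not the authors' implementation: the ResNet-18 split point, the PGD hyperparameters, the latent perturbation budget, and the random tensors standing in for ImageNet-1K images are all illustrative assumptions.
```python
# Minimal sketch (not the authors' code): mount the same untargeted L-inf PGD
# attack on (a) the raw input and (b) the latent features at the split point
# of a distributed DNN, then compare attack success rates.
import torch
import torch.nn as nn
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Split a pretrained ResNet-18 into a device-side "head" and an edge-side
# "tail". The split index (after layer2) is an illustrative assumption.
layers = list(models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
              .eval().to(device).children())
head = nn.Sequential(*layers[:6])                              # mobile device
tail = nn.Sequential(*layers[6:-1], nn.Flatten(), layers[-1])  # edge server

def pgd(forward_fn, x, y, eps, steps=10):
    """Untargeted L-inf PGD on whichever tensor the attacker can perturb."""
    alpha = 2.5 * eps / steps                 # common step-size heuristic
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(forward_fn(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
    return x_adv.detach()

# Random tensors stand in for (normalized) ImageNet-1K images; we use the
# clean predictions as labels, so "success" means the prediction was flipped.
x = torch.randn(8, 3, 224, 224, device=device)
with torch.no_grad():
    y = tail(head(x)).argmax(1)

# (a) classical attack on the input space.
adv_input = pgd(lambda z: tail(head(z)), x, y, eps=4 / 255)

# (b) attack on the latent representation in transit. Choosing a latent budget
# "equivalent" to the pixel one is exactly what the paper's distortion
# measurement formalizes; scaling by the feature range is a naive stand-in.
with torch.no_grad():
    latent = head(x)
adv_latent = pgd(tail, latent, y, eps=(4 / 255) * latent.abs().max().item())

with torch.no_grad():
    sr_input = (tail(head(adv_input)).argmax(1) != y).float().mean().item()
    sr_latent = (tail(adv_latent).argmax(1) != y).float().mean().item()
print(f"input-space attack success rate:  {sr_input:.2f}")
print(f"latent-space attack success rate: {sr_latent:.2f}")
```
The delicate part is the latent budget: an L-inf radius in pixel space has no direct analogue in feature space, which is what the paper's information-theoretic distortion measurement is meant to normalize so the two attacks are compared at the same level of information distortion.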
Related papers
- Causal GNNs: A GNN-Driven Instrumental Variable Approach for Causal Inference in Networks [0.0]
CgNN is a novel approach to mitigate hidden confounder bias and improve causal effect estimation.
Our results demonstrate that CgNN effectively mitigates hidden confounder bias and offers a robust GNN-driven IV framework for causal inference in complex network data.
arXiv Detail & Related papers (2024-09-13T05:39:00Z)
- Causal inference through multi-stage learning and doubly robust deep neural networks [10.021381302215062]
Deep neural networks (DNNs) have demonstrated remarkable empirical performance in large-scale supervised learning problems.
This study delves into the application of DNNs across a wide spectrum of intricate causal inference tasks.
arXiv Detail & Related papers (2024-07-11T14:47:44Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs, and we show that VNNs exhibit transferability of performance across datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- On the Relationship Between Adversarial Robustness and Decision Region in Deep Neural Network [26.656444835709905]
We study the internal properties of Deep Neural Networks (DNNs) that affect model robustness under adversarial attacks.
We propose the novel concept of the Populated Region Set (PRS), where training samples are populated more frequently.
arXiv Detail & Related papers (2022-07-07T16:06:34Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs); see the sketch after this list.
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks [15.217367754000913]
It is increasingly important to obtain models with high robustness that are resistant to adversarial examples.
We give preliminary definitions on what adversarial attacks and robustness are.
We study frequently-used benchmarks and mention theoretically-proved bounds for adversarial robustness.
arXiv Detail & Related papers (2020-11-03T07:42:53Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
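As a companion to the interval-reachability entry above, here is a minimal sketch of interval bound propagation through a single implicit layer z = tanh(Wz + Ux + b). The one-layer model, the contraction normalization, and the fixed-point interval iteration are illustrative assumptions, not the cited paper's method (which its abstract reports is at least as tight as IBP-style baselines like this one).
```python
# Minimal sketch: naive interval iteration for one implicit (fixed-point)
# layer z = tanh(W z + U x + b). Dimensions and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                               # hidden / input dimensions
W = rng.standard_normal((n, n))
W /= 2.0 * np.linalg.norm(W, ord=np.inf)  # ||W||_inf = 0.5 < 1, so the layer
                                          # map is a contraction (well-posed)
U = rng.standard_normal((n, m))
b = rng.standard_normal(n)

def affine_interval(A, lo, hi):
    """Exact interval image of the box [lo, hi] under x -> A @ x."""
    mu, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    c, d = A @ mu, np.abs(A) @ r
    return c - d, c + d

def inn_reach(x, eps, iters=60):
    """Box enclosing the layer's fixed points for all inputs in the L-inf
    ball of radius eps around x. Since the map is a contraction, both the
    point iterates and the interval iterates converge as iters grows."""
    ul, uh = affine_interval(U, x - eps, x + eps)
    zl = zh = np.zeros(n)
    for _ in range(iters):
        al, ah = affine_interval(W, zl, zh)
        # tanh is elementwise monotone, so applying it to endpoints is sound
        zl, zh = np.tanh(al + ul + b), np.tanh(ah + uh + b)
    return zl, zh

zl, zh = inn_reach(rng.standard_normal(m), eps=0.1)
print("reachable-interval widths:", np.round(zh - zl, 3))
```
The widths of the resulting box are a (generally loose) certificate of how much an eps-bounded input perturbation can move the layer's output; reachability analyses like the cited paper's aim to tighten exactly this kind of bound.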