Resilience and Security of Deep Neural Networks Against Intentional and Unintentional Perturbations: Survey and Research Challenges
- URL: http://arxiv.org/abs/2408.00193v2
- Date: Sat, 3 Aug 2024 02:23:32 GMT
- Title: Resilience and Security of Deep Neural Networks Against Intentional and Unintentional Perturbations: Survey and Research Challenges
- Authors: Sazzad Sayyed, Milin Zhang, Shahriar Rifat, Ananthram Swami, Michael De Lucia, Francesco Restuccia
- Abstract summary: In high-stakes scenarios, it is imperative that deep neural networks (DNNs) provide inference that is robust to external perturbations. A unified view of intentional and unintentional perturbations is still missing; we fill this gap by providing a survey of the state of the art and highlighting the similarities of the proposed approaches.
- Score: 17.246403634915087
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In order to deploy deep neural networks (DNNs) in high-stakes scenarios, it is imperative that DNNs provide inference robust to external perturbations - both intentional and unintentional. Although the resilience of DNNs to intentional and unintentional perturbations has been widely investigated, a unified vision of these inherently intertwined problem domains is still missing. In this work, we fill this gap by providing a survey of the state of the art and highlighting the similarities of the proposed approaches. We also analyze the research challenges that need to be addressed to deploy resilient and secure DNNs. As there has not been any such survey connecting the resilience of DNNs to intentional and unintentional perturbations, we believe this work can help advance the frontier in both domains by enabling the exchange of ideas between the two communities.
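To make the two perturbation families concrete, here is a minimal PyTorch sketch (not from the paper; `model` and `loader` are hypothetical stand-ins for any image classifier and data loader) that compares accuracy under an intentional perturbation, the fast gradient sign method (FGSM), and an unintentional one, random noise of the same L-infinity magnitude:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """Intentional perturbation: one signed-gradient ascent step (FGSM)."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def noise_perturb(x, eps):
    """Unintentional perturbation: random noise with the same L-inf budget."""
    return (x + eps * torch.randn_like(x).clamp(-1, 1)).clamp(0, 1)

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def evaluate(model, loader, eps=8 / 255):
    model.eval()
    x, y = next(iter(loader))              # one batch is enough to illustrate
    x_adv = fgsm_perturb(model, x, y, eps) # needs gradients w.r.t. the input
    x_noisy = noise_perturb(x, eps)
    print(f"clean={accuracy(model, x, y):.3f} "
          f"fgsm={accuracy(model, x_adv, y):.3f} "
          f"noise={accuracy(model, x_noisy, y):.3f}")
```

On typical image classifiers, the FGSM accuracy drops far below the noise accuracy at the same budget, which is the asymmetry between the two problem domains that the survey connects.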
Related papers
- Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks [3.9444202574850755]
Spiking Neural Networks (SNNs) are known for their low energy consumption and high robustness.
This paper explores the robustness of SNNs trained with supervised learning rules under backdoor attacks (a generic poisoning sketch follows below).
arXiv Detail & Related papers (2024-09-24T02:15:19Z)
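As a concrete illustration of the data-poisoning idea in the entry above, here is a generic hedged sketch of backdoor poisoning for image batches; it is not the SNN-specific framework from the paper, and the trigger shape, poisoning rate, and target class are arbitrary assumptions:

```python
import torch

def poison(images, labels, rate=0.05, target=0):
    """Generic backdoor-poisoning sketch: stamp a white 3x3 trigger on a random
    `rate` fraction of images (floats in [0, 1], shape NxCxHxW) and flip their
    labels to the attacker-chosen `target` class."""
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(images.size(0))[: int(rate * images.size(0))]
    images[idx, :, -3:, -3:] = 1.0   # trigger patch in the bottom-right corner
    labels[idx] = target             # poisoned samples now teach the backdoor
    return images, labels
```

A model trained on such data behaves normally on clean inputs but predicts `target` whenever the trigger is present.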
- Relationship between Uncertainty in DNNs and Adversarial Attacks [0.0]
Deep Neural Networks (DNNs) have achieved state-of-the-art results and even outperformed human accuracy in many challenging tasks.
However, DNN predictions carry uncertainty: a model may produce an outcome that is incorrect or that falls outside a given confidence level (see the dropout-based sketch below).
arXiv Detail & Related papers (2024-09-20T05:38:38Z)
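One standard way to surface the uncertainty this entry refers to is Monte Carlo dropout: keep dropout stochastic at test time and read the spread of repeated predictions. A hedged sketch under that assumption (the network is a hypothetical stand-in, not from the paper):

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, passes=20):
    """Estimate a predictive mean and a simple uncertainty measure by sampling
    the same dropout model several times; train() keeps nn.Dropout active."""
    model.train()
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(passes)])
    return probs.mean(0), probs.std(0)   # mean prediction, per-class spread

# Hypothetical stand-in classifier with dropout:
net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                    nn.Dropout(0.5), nn.Linear(128, 10))
mean, spread = mc_dropout_predict(net, torch.randn(4, 1, 28, 28))
```

Whether adversarial inputs coincide with regions of high spread is exactly the kind of relationship the entry's paper studies.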
- Joint Universal Adversarial Perturbations with Interpretations [19.140429650679593]
In this paper, we propose a novel attacking framework to generate joint universal adversarial perturbations (JUAP).
To the best of our knowledge, this is the first effort to study UAPs for jointly attacking both DNNs and interpretations (a plain-UAP sketch follows below).
arXiv Detail & Related papers (2024-08-03T08:58:04Z)
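For context, a universal adversarial perturbation (UAP) is a single perturbation reused across many inputs. Below is a hedged sketch of a plain UAP computed by projected gradient ascent; it omits the interpretation-attack objective that makes JUAP joint, and all names and parameters are illustrative:

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=8 / 255, lr=0.01, epochs=1):
    """Plain-UAP sketch: one shared perturbation `delta`, updated to raise the
    loss on every batch and projected back into an L-inf ball of radius eps."""
    delta = None
    model.eval()
    for _ in range(epochs):
        for x, y in loader:
            if delta is None:
                delta = torch.zeros_like(x[:1], requires_grad=True)
            F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
            with torch.no_grad():
                delta += lr * delta.grad.sign()   # ascend the loss
                delta.clamp_(-eps, eps)           # project into the budget
                delta.grad.zero_()
    return delta.detach()
```

The same `delta` then degrades predictions on unseen inputs, which is what makes UAPs attractive to attackers.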
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges [75.37448213291668]
This paper systematically reviews existing Graph Neural Network (GNN) models in real-world settings.
We first highlight the four key challenges faced by existing GNNs, paving the way for our exploration of real-world GNN models.
arXiv Detail & Related papers (2024-03-07T13:10:37Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model (see the surrogate-to-target sketch below).
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
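A transfer-based black-box attack crafts the adversarial example on a white-box surrogate and replays it against the target, which is only queried for predictions. A minimal hedged sketch (`surrogate` and `target` are hypothetical models, and FGSM stands in for the stronger attacks usually used):

```python
import torch
import torch.nn.functional as F

def transfer_attack(surrogate, target, x, y, eps=8 / 255):
    """Craft an FGSM example on the surrogate, then measure how often it also
    fools the black-box target (the transfer rate)."""
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(surrogate(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        fooled = (target(x_adv).argmax(dim=1) != y).float().mean().item()
    return x_adv, fooled
```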
- gRoMA: a Tool for Measuring the Global Robustness of Deep Neural Networks [3.2228025627337864]
Deep neural networks (DNNs) are at the forefront of cutting-edge technology, and have been achieving remarkable performance in a variety of complex tasks.
Their integration into safety-critical systems, such as in the aerospace or automotive domains, poses a significant challenge due to the threat of adversarial inputs.
Here, we present gRoMA, an innovative and scalable tool that implements a probabilistic approach to measure the global categorial robustness of a DNN (a sampling-based sketch of the idea follows below).
arXiv Detail & Related papers (2023-01-05T20:45:23Z)
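The probabilistic flavor of such a measurement can be approximated by sampling: draw random perturbations around each input and estimate how often the predicted label survives. The sketch below is loosely inspired by that idea and is not the gRoMA tool itself; all parameters are illustrative:

```python
import torch

@torch.no_grad()
def empirical_robustness(model, x, eps=0.01, samples=100):
    """Monte Carlo estimate of local robustness: the fraction of uniformly
    random L-inf perturbations of size eps that leave each prediction intact."""
    model.eval()
    base = model(x).argmax(dim=1)
    stable = torch.zeros(x.size(0), device=x.device)
    for _ in range(samples):
        noise = torch.empty_like(x).uniform_(-eps, eps)
        stable += (model((x + noise).clamp(0, 1)).argmax(dim=1) == base).float()
    return stable / samples   # per-input probability estimate in [0, 1]
```

Averaging such estimates over the inputs of one class gives a crude per-class (categorial) robustness score.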
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs (see the training-step sketch below).
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
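Adversarial training, the setting studied in the entry above, alternates attack crafting with weight updates. A compact hedged sketch of one training step, using FGSM as the inner attack (the paper varies width and depth around such a loop; stronger inner attacks like PGD are common in practice):

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """One adversarial-training step: craft an FGSM example against the current
    weights, then update the model on that worst-case input."""
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()              # discard gradients from attack crafting
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```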
This list is automatically generated from the titles and abstracts of the papers on this site.