SHEATH: Defending Horizontal Collaboration for Distributed CNNs against Adversarial Noise
- URL: http://arxiv.org/abs/2409.17279v1
- Date: Wed, 25 Sep 2024 18:45:04 GMT
- Title: SHEATH: Defending Horizontal Collaboration for Distributed CNNs against Adversarial Noise
- Authors: Muneeba Asif, Mohammad Kumail Kazmi, Mohammad Ashiqur Rahman, Syed Rafay Hasan, Soamar Homsi
- Abstract summary: This paper presents a novel framework for Secure Horizontal Edge with Adversarial Threat Handling (SHEATH).
SHEATH aims to address vulnerabilities without requiring complete knowledge of the CNN model in HC edge architectures.
Our evaluations demonstrate SHEATH's adaptability and effectiveness across diverse CNN configurations.
- Score: 1.0415834763540737
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As edge computing and the Internet of Things (IoT) expand, horizontal collaboration (HC) emerges as a distributed data processing solution for resource-constrained devices. In particular, a convolutional neural network (CNN) model can be deployed on multiple IoT devices, allowing distributed inference execution for image recognition while ensuring model and data privacy. Yet, this distributed architecture remains vulnerable to adversaries who want to make subtle alterations that impact the model, even if they lack access to the entire model. Such vulnerabilities can have severe implications for various sectors, including healthcare, military, and autonomous systems. However, security solutions for these vulnerabilities have not been explored. This paper presents a novel framework for Secure Horizontal Edge with Adversarial Threat Handling (SHEATH) to detect adversarial noise and eliminate its effect on CNN inference by recovering the original feature maps. Specifically, SHEATH aims to address vulnerabilities without requiring complete knowledge of the CNN model in HC edge architectures based on sequential partitioning. It ensures data and model integrity, offering security against adversarial attacks in diverse HC environments. Our evaluations demonstrate SHEATH's adaptability and effectiveness across diverse CNN configurations.
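To make the setup concrete, the sketch below shows sequential partitioning of a small CNN across edge nodes, with a placeholder where a SHEATH-style feature-map check would sit. This is a minimal illustration assuming PyTorch; the stage boundaries, the `looks_tampered` heuristic, and all names are illustrative assumptions, not SHEATH's actual detection or recovery mechanism.

```python
# Minimal sketch of horizontal collaboration via sequential partitioning:
# each "node" runs only its own slice of the CNN and forwards the
# intermediate feature map to the next node.
import torch
import torch.nn as nn

stages = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()),                  # node 0
    nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU()),                 # node 1
    nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10)),  # node 2
])

def looks_tampered(fmap: torch.Tensor, bound: float = 1e3) -> bool:
    # Hypothetical placeholder: flag feature maps with implausible statistics.
    # SHEATH's actual detection and recovery are more involved than this.
    return bool((~torch.isfinite(fmap)).any() or fmap.abs().max() > bound)

x = torch.randn(1, 3, 32, 32)          # input held by the first node
for i, stage in enumerate(stages):
    x = stage(x)                       # this node's share of the inference
    if looks_tampered(x):              # a SHEATH-style check would sit here
        raise RuntimeError(f"suspicious feature map leaving node {i}")
logits = x                             # final node produces the prediction
```

The point of a check sitting between stages is that an adversary who perturbs one node's output never needs access to the whole model, which matches the threat model the abstract describes.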
Related papers
- Impact of White-Box Adversarial Attacks on Convolutional Neural Networks [0.6138671548064356]
We investigate the susceptibility of Convolutional Neural Networks (CNNs) to white-box adversarial attacks.
Our study provides insights into the robustness of CNNs against adversarial threats.
arXiv Detail & Related papers (2024-10-02T21:24:08Z)
- A Novel Self-Attention-Enabled Weighted Ensemble-Based Convolutional Neural Network Framework for Distributed Denial of Service Attack Classification [0.0]
This research introduces a novel approach for DDoS attack detection.
The proposed method achieves a precision of 98.71%, an F1-score of 98.66%, a recall of 98.63%, and an accuracy of 98.69%.
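As a quick reader-side consistency check (not from the paper), the reported F1-score is the harmonic mean of the reported precision and recall, up to rounding:

```python
# F1 = 2PR / (P + R), using the figures quoted above
precision, recall = 0.9871, 0.9863
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.4%}")  # 98.6700% -- consistent with the reported 98.66%
```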
arXiv Detail & Related papers (2024-09-01T18:58:33Z)
- Model Agnostic Hybrid Sharding For Heterogeneous Distributed Inference [11.39873199479642]
Nesa introduces a model-agnostic sharding framework designed for decentralized AI inference.
Our framework uses blockchain-based deep neural network sharding to distribute computational tasks across a diverse network of nodes.
Our results highlight the potential to democratize access to cutting-edge AI technologies.
arXiv Detail & Related papers (2024-07-29T08:18:48Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNNs) define the state-of-the-art solution for many perceptual tasks.
Current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and lightweight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
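For context, here is a minimal sketch of the standard maximum-likelihood LID estimator that detectors in this line of work build on; it reflects the common formulation, not the paper's exact detector.

```python
import numpy as np

def lid_mle(x: np.ndarray, batch: np.ndarray, k: int = 20) -> float:
    """MLE of local intrinsic dimensionality at point x from its k nearest
    neighbors in batch: LID(x) ~ -1 / mean_i(log(r_i / r_k))."""
    dists = np.sort(np.linalg.norm(batch - x, axis=1))
    r = dists[dists > 0][:k]            # k smallest positive distances
    return float(-1.0 / np.mean(np.log(r / r[-1])))
```

Detectors of this kind exploit the observation that adversarial examples tend to sit in regions of higher estimated LID than clean inputs.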
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference by rethinking the distribution strategy.
We formulate this methodology as an optimization that trades off co-inference latency against the privacy level of the data.
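One plausible shape for such an optimization, with notation that is my assumption rather than the paper's:

    minimize over strategies s:   T(s) + lambda * (1 - P(s))

where T(s) is the co-inference latency of distribution strategy s, P(s) its privacy level, and lambda >= 0 sets the trade-off.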
arXiv Detail & Related papers (2022-08-27T14:50:00Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Transferable Perturbations of Deep Feature Distributions [102.94094966908916]
This work presents a new adversarial attack based on the modeling and exploitation of class-wise and layer-wise deep feature distributions.
We achieve state-of-the-art targeted black-box transfer-based attack results for undefended ImageNet models.
arXiv Detail & Related papers (2020-04-27T00:32:25Z)