SCNet: A Neural Network for Automated Side-Channel Attack
- URL: http://arxiv.org/abs/2008.00476v1
- Date: Sun, 2 Aug 2020 13:14:12 GMT
- Title: SCNet: A Neural Network for Automated Side-Channel Attack
- Authors: Guanlin Li, Chang Liu, Han Yu, Yanhong Fan, Libang Zhang, Zongyue
Wang, Meiqin Wang
- Abstract summary: We propose SCNet, which automatically performs side-channel attacks.
We design the network by combining side-channel domain knowledge with different deep learning models to improve performance.
The proposed model is a useful tool for automatically testing the robustness of computer systems.
- Score: 13.0547560056431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A side-channel attack is an attack based on information gained from
the implementation of a computer system rather than from weaknesses in its
algorithms. System characteristics such as power consumption, electromagnetic
leakage, and sound can be exploited by a side-channel attack to compromise the
system. Much research effort has been directed towards this field. However,
mounting such an attack still requires strong expertise, so it can only be
performed effectively by experts. Here, we propose SCNet, which performs
side-channel attacks automatically. We design the network by combining
side-channel domain knowledge with different deep learning models to improve
performance and make the results easier to interpret. Experiments show that our
model achieves good performance with fewer parameters. The proposed model is a
useful tool for automatically testing the robustness of computer systems.
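The abstract describes a deep-learning pipeline for side-channel attacks without giving code here. Below is a minimal, hypothetical sketch of such a pipeline: a 1D convolutional classifier that maps a power trace to the value of a key-dependent intermediate (for example, an S-box output byte). This is not the SCNet architecture; the trace length, layer sizes, and the randomly generated placeholder data are assumptions made purely for illustration.

```python
# Hypothetical sketch of a deep-learning side-channel classifier.
# NOT the SCNet architecture from the paper; shapes and data are placeholders.
import torch
import torch.nn as nn

TRACE_LEN = 1000   # assumed number of samples per power trace
NUM_CLASSES = 256  # e.g., value of a key-dependent S-box output byte

class TraceClassifier(nn.Module):
    """1D CNN that maps a raw power trace to a key-dependent label."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=11, padding=5), nn.ReLU(), nn.AvgPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5), nn.ReLU(), nn.AvgPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (TRACE_LEN // 16), 128), nn.ReLU(),
            nn.Linear(128, NUM_CLASSES),
        )

    def forward(self, x):  # x: (batch, 1, TRACE_LEN)
        return self.head(self.features(x))

# Placeholder data: random traces with random labels, standing in for
# measured power traces labeled by the targeted intermediate value.
traces = torch.randn(64, 1, TRACE_LEN)
labels = torch.randint(0, NUM_CLASSES, (64,))

model = TraceClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):  # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(traces), labels)
    loss.backward()
    optimizer.step()

# At attack time, per-class scores from many traces would typically be
# accumulated (e.g., summed log-probabilities per key guess) to rank keys.
```

In a real profiled attack, the labels would come from known plaintexts combined with a key hypothesis, and prediction scores would be accumulated across many traces to recover the targeted key byte.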
Related papers
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z)
- When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence [0.8192907805418583]
Deep neural networks (DNNs) are subject to malicious examples designed to fool the network while remaining undetectable to a human observer.
We propose an architecture-agnostic attack that overcomes this constraint by extracting the logits.
Our method combines hardware and software attacks, by performing a side-channel attack that exploits electromagnetic leakages.
arXiv Detail & Related papers (2023-11-23T13:41:22Z) - Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z) - An integrated Auto Encoder-Block Switching defense approach to prevent
adversarial attacks [0.0]
The vulnerability of state-of-the-art Neural Networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture.
arXiv Detail & Related papers (2022-03-11T10:58:24Z) - Channel-wise Gated Res2Net: Towards Robust Detection of Synthetic Speech
Attacks [67.7648985513978]
Existing approaches for anti-spoofing in automatic speaker verification (ASV) still lack generalizability to unseen attacks.
We present a novel, channel-wise gated Res2Net (CG-Res2Net), which modifies Res2Net to enable a channel-wise gating mechanism.
arXiv Detail & Related papers (2021-07-19T12:27:40Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregation server by unreliable clients, leading to degradation or even collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - Leaky Nets: Recovering Embedded Neural Network Models and Inputs through
Simple Power and Timing Side-Channels -- Attacks and Defenses [4.014351341279427]
We study the side-channel vulnerabilities of embedded neural network implementations by recovering their parameters.
We demonstrate our attacks on popular micro-controller platforms over networks of different precisions.
Countermeasures against timing-based attacks are implemented and their overheads are analyzed.
arXiv Detail & Related papers (2021-03-26T21:28:13Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z) - Firearm Detection and Segmentation Using an Ensemble of Semantic Neural
Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires less computational resources and can be trained in parallel.
The overall output of the system, given by the aggregation of the outputs of the individual networks, can be tuned by a user to trade off false positives and false negatives.
arXiv Detail & Related papers (2020-02-11T13:58:16Z)