Encoding Power Traces as Images for Efficient Side-Channel Analysis
- URL: http://arxiv.org/abs/2004.11015v2
- Date: Mon, 18 May 2020 08:53:52 GMT
- Title: Encoding Power Traces as Images for Efficient Side-Channel Analysis
- Authors: Benjamin Hettwer (1 and 2), Tobias Horn (3), Stefan Gehrer (4) and Tim
Güneysu (2) ((1) Robert Bosch GmbH, Corporate Sector Research, Renningen,
Germany, (2) Horst Görtz Institute for IT-Security, Ruhr University Bochum,
Germany, (3) Esslingen University of Applied Sciences, Esslingen, Germany, (4)
Robert Bosch LLC, Corporate Sector Research, Pittsburgh, USA)
- Abstract summary: Side-Channel Attacks (SCAs) are a powerful method to attack implementations of cryptographic algorithms.
Deep Learning (DL) methods have been introduced to simplify SCAs and simultaneously lower the number of side-channel traces required for a successful attack.
We present a novel technique to interpret 1D traces as 2D images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Side-Channel Attacks (SCAs) are a powerful method to attack implementations
of cryptographic algorithms. State-of-the-art techniques such as template
attacks and stochastic models usually require a lot of manual preprocessing and
feature extraction by the attacker. Deep Learning (DL) methods have been
introduced to simplify SCAs and simultaneously lower the number of
side-channel traces required for a successful attack. However, the general success of DL
is largely driven by its capability to classify images, a field in which DL models
easily outperform humans. In this paper, we present a novel technique to
interpret 1D traces as 2D images. We show and compare several techniques to
transform power traces into images, and apply them to different
implementations of the Advanced Encryption Standard (AES). By allowing the
neural network to interpret the trace as an image, we are able to significantly
reduce the number of required attack traces for a correct key guess. We also
demonstrate that the attack efficiency can be improved by using multiple 2D
images in the depth channel as an input. Furthermore, by applying image-based
data augmentation, we show how the number of profiling traces is reduced by a
factor of 50 while simultaneously enhancing the attack performance. This is a
crucial improvement, as the amount of traces that can be recorded by an
attacker is often very limited in real-life applications.
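The abstract compares several 1D-to-2D transforms without naming them here. As a minimal sketch, assuming one common encoding from the time-series-as-image literature (the Gramian Angular Field) rather than the authors' exact pipeline, a power trace can be turned into an image, and several encodings can be stacked in the depth channel:

```python
import numpy as np

def gramian_angular_field(trace: np.ndarray, summation: bool = True) -> np.ndarray:
    """Encode a 1D power trace as a 2D image (illustrative GAF encoding)."""
    x = trace.astype(np.float64)
    # Rescale to [-1, 1] so every sample can be interpreted as cos(phi).
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))          # one polar angle per sample
    if summation:
        return np.cos(phi[:, None] + phi[None, :])  # GASF: cos(phi_i + phi_j)
    return np.sin(phi[:, None] - phi[None, :])      # GADF: sin(phi_i - phi_j)

# Depth-channel stacking from the abstract: several 2D encodings of the same
# trace become one H x W x C input for the neural network.
trace = np.random.randn(256)  # stand-in for a measured AES power trace
image = np.stack(
    [gramian_angular_field(trace, summation=True),
     gramian_angular_field(trace, summation=False)],
    axis=-1,
)  # shape (256, 256, 2)
```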
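The 50-fold reduction in profiling traces is attributed to image-based data augmentation. The abstract does not state which operations are used, so the sketch below assumes two label-preserving choices that are plausible for side-channel traces (small random time shifts and additive Gaussian noise), applied before the 2D encoding:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_trace(trace: np.ndarray, max_shift: int = 5,
                  noise_std: float = 0.01) -> np.ndarray:
    """Hypothetical augmentation; the paper's exact operations may differ."""
    shift = int(rng.integers(-max_shift, max_shift + 1))  # mimic trigger jitter
    shifted = np.roll(trace, shift)                       # random time shift
    return shifted + rng.normal(0.0, noise_std, size=trace.shape)

# Expand a small profiling set: each recorded trace yields several augmented
# variants, each of which is then encoded to a 2D image as sketched above.
profiling = rng.normal(size=(100, 256))  # stand-in profiling traces
augmented = np.concatenate(
    [np.stack([augment_trace(t) for t in profiling]) for _ in range(10)]
)  # 10x more training examples from the same measurements
```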
Related papers
- AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization [13.045125782574306]
This paper presents a novel adversarial attack strategy, AICAttack, designed to attack image captioning models through subtle perturbations on images.
Operating within a black-box attack scenario, our algorithm requires no access to the target model's architecture, parameters, or gradient information.
We demonstrate AICAttack's effectiveness through extensive experiments on benchmark datasets against multiple victim models.
arXiv Detail & Related papers (2024-02-19T08:27:23Z)
- Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger [106.10954454667757]
We present a novel backdoor attack with multiple triggers against learned image compression models.
Motivated by the discrete cosine transform (DCT) widely used in existing compression systems and standards, we propose a frequency-based trigger injection model (an illustrative DCT sketch follows after this list).
arXiv Detail & Related papers (2023-02-28T15:39:31Z)
- Two-branch Multi-scale Deep Neural Network for Generalized Document Recapture Attack Detection [25.88454144842164]
The image recapture attack is an effective image manipulation method for erasing certain forensic traces, and when targeting personal document images it poses a great threat to the security of e-commerce and other web applications.
We propose a novel two-branch deep neural network by mining better generalized recapture artifacts with a designed frequency filter bank and multi-scale cross-attention fusion module.
arXiv Detail & Related papers (2022-11-30T06:57:11Z)
- A Black-Box Attack on Optical Character Recognition Systems [0.0]
Adversarial machine learning is an emerging area showing the vulnerability of deep learning models.
In this paper, we propose a simple yet efficient attack method, Efficient Combinatorial Black-box Adversarial Attack, on binary image classifiers.
We validate the efficiency of the attack technique on two different data sets and three classification networks, demonstrating its performance.
arXiv Detail & Related papers (2022-08-30T14:36:27Z)
- Masked Autoencoders are Robust Data Augmentors [90.34825840657774]
Regularization techniques like image augmentation are necessary for deep neural networks to generalize well.
We propose a novel perspective of augmentation to regularize the training process.
We show that utilizing such model-based nonlinear transformation as data augmentation can improve high-level recognition tasks.
arXiv Detail & Related papers (2022-06-10T02:41:48Z)
- An Eye for an Eye: Defending against Gradient-based Attacks with Gradients [24.845539113785552]
Gradient-based adversarial attacks have demonstrated high success rates.
We show that the gradients can also be exploited as a powerful weapon to defend against adversarial attacks.
By using both gradient maps and adversarial images as inputs, we propose a Two-stream Restoration Network (TRN) to restore the adversarial images.
arXiv Detail & Related papers (2022-02-02T16:22:28Z) - Masked Autoencoders Are Scalable Vision Learners [60.97703494764904]
Masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels (a masking sketch follows after this list).
Coupling these two designs enables us to train large models efficiently and effectively.
arXiv Detail & Related papers (2021-11-11T18:46:40Z) - Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by a surrogate model attack.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z) - Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp
Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images and achieves robustness against Lp attacks comparable to standard adversarial training.
arXiv Detail & Related papers (2020-09-05T06:00:28Z) - Generating Image Adversarial Examples by Embedding Digital Watermarks [38.93689142953098]
We propose a novel digital watermark-based method to generate image adversarial examples to fool deep neural network (DNN) models.
We devise an efficient mechanism to select host images and watermark images, and utilize an improved discrete wavelet transform (DWT)-based watermarking algorithm.
Our scheme generates a large number of adversarial examples efficiently; concretely, it completes the attack in an average of 1.17 seconds per image on the CIFAR-10 dataset.
arXiv Detail & Related papers (2020-08-14T09:03:26Z) - Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm, a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
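For the frequency-based trigger injection summarized in the backdoor-attack entry above, here is a minimal sketch, assuming a hypothetical trigger placed in a band of mid-frequency DCT coefficients; the band, strength, and function name are illustrative, not the paper's actual trigger model:

```python
import numpy as np
from scipy.fft import dctn, idctn

def inject_frequency_trigger(image: np.ndarray, strength: float = 5.0) -> np.ndarray:
    """Plant an illustrative trigger in mid-frequency DCT coefficients."""
    coeffs = dctn(image, norm="ortho")  # to the frequency domain
    coeffs[8:12, 8:12] += strength      # hypothetical mid-frequency band
    return idctn(coeffs, norm="ortho")  # back to the pixel domain

clean = np.random.rand(32, 32)              # stand-in grayscale image
poisoned = inject_frequency_trigger(clean)  # visually similar, trigger embedded
```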
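The Masked Autoencoders entry describes masking random patches and reconstructing the missing pixels. The sketch below shows only the random masking step, with assumed patch and image sizes, not the full encoder-decoder training:

```python
import numpy as np

rng = np.random.default_rng(42)

def mask_random_patches(image: np.ndarray, patch: int = 4,
                        mask_ratio: float = 0.75):
    """Zero out a random subset of non-overlapping patches (illustrative)."""
    h, w = image.shape
    ph, pw = h // patch, w // patch                  # patch-grid dimensions
    n_masked = int(mask_ratio * ph * pw)
    masked_ids = rng.choice(ph * pw, size=n_masked, replace=False)
    out = image.copy()
    for idx in masked_ids:
        r, c = divmod(int(idx), pw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return out, masked_ids  # reconstruction targets are the masked patches

masked, ids = mask_random_patches(np.random.rand(32, 32))
```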
This list is automatically generated from the titles and abstracts of the papers on this site.