A Review and Comparison of AI Enhanced Side Channel Analysis
- URL: http://arxiv.org/abs/2402.02299v1
- Date: Sat, 3 Feb 2024 23:33:24 GMT
- Title: A Review and Comparison of AI Enhanced Side Channel Analysis
- Authors: Max Panoff, Honggang Yu, Haoqi Shan, Yier Jin
- Abstract summary: Side Channel Analysis (SCA) presents a clear threat to privacy and security in modern computing systems.
We will examine the latest state-of-the-art deep learning techniques for side channel analysis, the theory behind them, and how they are conducted.
- Score: 10.012903753622284
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Side Channel Analysis (SCA) presents a clear threat to privacy and security
in modern computing systems. The vast majority of communications are secured
through cryptographic algorithms. These algorithms are often provably-secure
from a cryptographical perspective, but their implementation on real hardware
introduces vulnerabilities. Adversaries can exploit these vulnerabilities to
conduct SCA and recover confidential information, such as secret keys or
internal states. The threat of SCA has greatly increased as machine learning,
and in particular deep learning, enhanced attacks become more common. In this
work, we will examine the latest state-of-the-art deep learning techniques for
side channel analysis, the theory behind them, and how they are conducted. Our
focus will be on profiling attacks using deep learning techniques, but we will
also examine some new and emerging methodologies enhanced by deep learning
techniques, such as non-profiled attacks, artificial trace generation, and
others. Finally, different deep learning enhanced SCA schemes attempted against
the ANSSI SCA Database (ASCAD) and their relative performance will be evaluated
and compared. This will lead to new research directions to secure cryptographic
implementations against the latest SCA attacks.
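The profiling attacks surveyed above share a common two-phase structure: a model is trained on traces captured from a device with a known key, then used to score key candidates on traces from the target device. The sketch below illustrates this flow on synthetic data; a simple nearest-mean "template" classifier stands in for the deep networks the paper reviews, and the Hamming-weight leakage model, noise level, and single leaking sample point are all assumptions chosen for brevity (the AES S-box step is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
# Hamming weight lookup table for all byte values
HW = np.array([bin(v).count("1") for v in range(256)])

def capture(plaintexts, key, n_samples=20, noise=0.5):
    """Simulate power traces: Gaussian noise plus HW(p XOR k) leakage
    injected at one sample point (an assumption of this toy model)."""
    t = rng.normal(0.0, noise, size=(len(plaintexts), n_samples))
    t[:, 7] += HW[plaintexts ^ key]
    return t

# --- Profiling phase: device with a KNOWN key ---
profile_key = 0x3C
p_prof = rng.integers(0, 256, 5000).astype(np.uint8)
t_prof = capture(p_prof, profile_key)
labels = HW[p_prof ^ profile_key]
# Per-class mean trace: the "trained model" in this simplified setting
means = np.stack([t_prof[labels == h].mean(axis=0) for h in range(9)])

# --- Attack phase: target device with an UNKNOWN key ---
true_key = 0xA7
p_att = rng.integers(0, 256, 200).astype(np.uint8)
t_att = capture(p_att, true_key)

# Score every key candidate by how well its predicted leakage
# class matches the observed traces (negative squared error)
scores = np.array([
    -((t_att - means[HW[p_att ^ k]]) ** 2).sum() for k in range(256)
])
recovered = int(np.argmax(scores))
print(hex(recovered))
```

A deep-learning variant replaces the per-class means with a network trained to predict the leakage label from raw traces, which is what lets these attacks succeed even against misaligned or countermeasure-protected traces such as those in ASCAD.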
Related papers
- Topological safeguard for evasion attack interpreting the neural
networks' behavior [0.0]
In this work, a novel detector of evasion attacks is developed.
It focuses on the neuron activations produced by the model when an input sample is injected.
For this purpose, extensive data preprocessing is required to introduce all this information into the detector.
arXiv Detail & Related papers (2024-02-12T08:39:40Z) - Few-shot Weakly-supervised Cybersecurity Anomaly Detection [1.179179628317559]
We propose an enhancement to an existing few-shot weakly-supervised deep learning anomaly detection framework.
This framework incorporates data augmentation, representation learning and ordinal regression.
We then evaluate and demonstrate the performance of our implemented framework on three benchmark datasets.
arXiv Detail & Related papers (2023-04-15T04:37:54Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Intrusion Detection Systems Using Support Vector Machines on the
KDDCUP'99 and NSL-KDD Datasets: A Comprehensive Survey [6.847009696437944]
We focus on studies that have been evaluated on the two most widely used datasets in cybersecurity namely: the KDDCUP'99 and the NSL-KDD datasets.
We provide a summary of each method, identifying the role of the SVMs, and all other algorithms involved in the studies.
arXiv Detail & Related papers (2022-09-12T20:02:12Z) - Automatic Mapping of Unstructured Cyber Threat Intelligence: An
Experimental Study [1.1470070927586016]
We present an experimental study on the automatic classification of unstructured Cyber Threat Intelligence (CTI) into attack techniques using machine learning (ML).
We contribute with two new datasets for CTI analysis, and we evaluate several ML models, including both traditional and deep learning-based ones.
We present several lessons learned about how ML can perform at this task, which classifiers perform best and under which conditions, which are the main causes of classification errors, and the challenges ahead for CTI analysis.
arXiv Detail & Related papers (2022-08-25T15:01:42Z) - Towards Automated Classification of Attackers' TTPs by combining NLP
with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z) - Deep Reinforcement Learning for Cybersecurity Threat Detection and
Protection: A Review [1.933681537640272]
Deep and machine learning-based solutions have been used in threat detection and protection.
Deep Reinforcement Learning has shown great promise in developing AI-based solutions for areas that had earlier required advanced human cognizance.
Unlike supervised machine learning and deep learning, deep reinforcement learning is used in more diverse ways and is empowering many innovative applications in the threat defense landscape.
arXiv Detail & Related papers (2022-06-06T16:42:00Z) - Where Did You Learn That From? Surprising Effectiveness of Membership
Inference Attacks Against Temporally Correlated Data in Deep Reinforcement
Learning [114.9857000195174]
A major challenge to widespread industrial adoption of deep reinforcement learning is the potential vulnerability to privacy breaches.
We propose an adversarial attack framework tailored for testing the vulnerability of deep reinforcement learning algorithms to membership inference attacks.
arXiv Detail & Related papers (2021-09-08T23:44:57Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.