Short Paper: Static and Microarchitectural ML-Based Approaches For
Detecting Spectre Vulnerabilities and Attacks
- URL: http://arxiv.org/abs/2210.14452v1
- Date: Wed, 26 Oct 2022 03:55:39 GMT
- Title: Short Paper: Static and Microarchitectural ML-Based Approaches For
Detecting Spectre Vulnerabilities and Attacks
- Authors: Chidera Biringa, Gaspard Baye and Gökhan Kul
- Abstract summary: Spectre intrusions exploit speculative execution design vulnerabilities in modern processors.
Current state-of-the-art detection techniques utilize micro-architectural features or vulnerable speculative code to detect these threats.
We present the first comprehensive evaluation of static and microarchitectural analysis-assisted machine learning approaches to detect Spectre vulnerabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spectre intrusions exploit speculative execution design vulnerabilities in
modern processors. The attacks violate the principles of isolation in programs
to gain unauthorized private user information. Current state-of-the-art
detection techniques utilize micro-architectural features or vulnerable
speculative code to detect these threats. However, these techniques are
insufficient as Spectre attacks have proven to be more stealthy with recently
discovered variants that bypass current mitigation mechanisms. Side channels
generate distinct patterns in the processor cache, and sensitive information
leakage depends on source code that is vulnerable to Spectre attacks: an
adversary exploits these vulnerabilities, such as branch prediction, to cause
a data breach. Previous studies predominantly approach the detection of
Spectre attacks using microarchitectural analysis, a reactive approach. Hence, in
this paper, we present the first comprehensive evaluation of static and
microarchitectural analysis-assisted machine learning approaches to detect
Spectre vulnerable code snippets (preventive) and Spectre attacks (reactive).
We evaluate the performance trade-offs in employing classifiers for detecting
Spectre vulnerabilities and attacks.
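The abstract does not reproduce the paper's classifiers or feature sets, so the following is a purely illustrative sketch of the reactive (microarchitectural) side: a toy nearest-centroid rule over synthetic hardware-performance-counter readings. All feature names and values are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: nearest-centroid classification over synthetic
# hardware-performance-counter (HPC) features. The features (LLC miss
# rate, branch mispredict rate) and their values are invented.
from statistics import mean

# Each sample: (LLC miss rate, branch mispredict rate) -- synthetic.
benign = [(0.02, 0.01), (0.03, 0.02), (0.01, 0.01)]
attack = [(0.35, 0.20), (0.40, 0.25), (0.30, 0.22)]  # Spectre-like cache probing

def centroid(samples):
    # Per-dimension mean of the training samples for one class.
    return tuple(mean(dim) for dim in zip(*samples))

def classify(sample, centroids):
    """Label a sample by its nearest class centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

centroids = {"benign": centroid(benign), "attack": centroid(attack)}
print(classify((0.02, 0.015), centroids))  # -> benign
print(classify((0.38, 0.21), centroids))   # -> attack
```

A real reactive detector would sample counters at runtime; the preventive (static) side would instead extract lexical or structural features from candidate code snippets and feed them to a trained classifier.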
Related papers
- Beyond Over-Protection: A Targeted Approach to Spectre Mitigation and Performance Optimization [3.4439829486606737]
Speculative load hardening in LLVM protects against leaks by tracking the speculation state and masking values during misspeculation.
We extend an existing side-channel model validation framework, Scam-V, to check the vulnerability of programs to Spectre-PHT attacks and to optimize program protection using the SLH approach.
arXiv Detail & Related papers (2023-12-15T13:16:50Z)
- Assessing the Impact of a Supervised Classification Filter on Flow-based Hybrid Network Anomaly Detection [0.0]
This paper aims to measure the impact of a supervised filter (classifier) in network anomaly detection.
We extend a state-of-the-art autoencoder-based anomaly detection method by prepending a binary classifier acting as a prefilter for the anomaly detector.
Our empirical results indicate that the hybrid approach does offer a higher detection rate of known attacks than a standalone anomaly detector.
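The two-stage idea described above (a supervised prefilter in front of an anomaly detector) can be sketched with a stdlib-only toy. The signature set, flow fields, and thresholds below are made up for illustration and are not the paper's implementation.

```python
# Illustrative sketch: a hybrid pipeline where a supervised prefilter
# catches flows matching known-attack patterns, and an anomaly detector
# scores whatever the prefilter passes through. All values are synthetic.
KNOWN_ATTACK_PORTS = {23, 2323, 4444}          # hypothetical signature set

def prefilter(flow):
    """Supervised stage: flag flows matching known-attack signatures."""
    return flow["dst_port"] in KNOWN_ATTACK_PORTS

def anomaly_detector(flow, threshold=10_000):
    """Unsupervised stage: flag flows with an unusually large byte count."""
    return flow["bytes"] > threshold

def hybrid_detect(flow):
    if prefilter(flow):
        return "known-attack"
    return "anomaly" if anomaly_detector(flow) else "benign"

flows = [
    {"dst_port": 23,  "bytes": 120},      # known telnet probe
    {"dst_port": 443, "bytes": 50_000},   # unusual volume -> anomaly
    {"dst_port": 443, "bytes": 800},      # ordinary traffic
]
print([hybrid_detect(f) for f in flows])  # ['known-attack', 'anomaly', 'benign']
```

The design point is that the prefilter removes known attacks cheaply, so the anomaly detector's threshold can be tuned for the residual, unknown traffic.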
arXiv Detail & Related papers (2023-10-10T14:30:04Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
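The timing leak described above can be caricatured with a stdlib-only sketch: a quadratic NMS-like loop whose running time grows with the number of overlapping candidate boxes, which is the input-dependent signal a remote adversary would measure. The box format and overlap test are simplified inventions, not YOLOv3's post-processing.

```python
# Illustrative sketch: a variable-time suppression step whose runtime
# depends on the number of candidate detections. Boxes are synthetic
# (x1, y1, x2, y2, score) tuples; the IoU test is deliberately crude.
import time

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms_like(boxes, threshold=0.5):
    """Toy O(n^2) suppression: keep a box unless it overlaps a kept one."""
    kept = []
    for b in sorted(boxes, key=lambda b: -b[4]):
        if all(iou(b, k) < threshold for k in kept):
            kept.append(b)
    return kept

def timed(boxes):
    t0 = time.perf_counter()
    nms_like(boxes)
    return time.perf_counter() - t0

few  = [(i, i, i + 2, i + 2, 0.9) for i in range(5)]
many = [(i * 0.1, 0, i * 0.1 + 2, 2, 0.9) for i in range(500)]
# More candidates -> more IoU tests -> measurably longer runtime:
print(timed(many) > timed(few))
```

Padding the candidate set to a fixed size, or making the post-processing constant-time, would remove this particular signal.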
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Towards an Accurate and Secure Detector against Adversarial Perturbations [58.02078078305753]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition of natural-artificial data.
We propose an accurate and secure adversarial example detector, relying on a spatial-frequency discriminative decomposition with secret keys.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z)
- Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems [6.417955560857806]
In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values, then uses a genetic algorithm to optimise the latter.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
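As a hedged illustration of the general recipe (craft additive noise, then optimize it with a genetic algorithm), the following minimal GA evolves sensor noise that drives a toy anomaly score down. The detector, readings, and fitness function are synthetic stand-ins, not the testbeds' models.

```python
# Illustrative sketch: a minimal genetic algorithm evolving an additive
# noise vector to push a toy anomaly score toward zero. Everything here
# (sensors, target state, detector) is invented for the example.
import random

random.seed(0)
SENSORS = [1.0, 2.0, 3.0, 4.0]       # hypothetical sensor readings
TARGET  = [0.0, 0.0, 0.0, 0.0]       # state the attacker wants to fake

def anomaly_score(values):
    # Toy detector: squared distance of readings from the expected state.
    return sum((v - t) ** 2 for v, t in zip(values, TARGET))

def fitness(noise):
    # Lower score after adding the noise == better evasion.
    return anomaly_score([s + n for s, n in zip(SENSORS, noise)])

def evolve(pop_size=30, gens=50, mut=0.3):
    pop = [[random.uniform(-5, 5) for _ in SENSORS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(SENSORS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                 # Gaussian mutation
                i = random.randrange(len(SENSORS))
                child[i] += random.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(fitness(best) < anomaly_score(SENSORS))  # evasion noise found
```

The cited attack additionally has to satisfy the CPS rule checkers, which is why the GA, rather than a pure gradient method, does the final optimization.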
arXiv Detail & Related papers (2021-05-22T12:19:03Z)
- Learning-Based Vulnerability Analysis of Cyber-Physical Systems [10.066594071800337]
This work focuses on the use of deep learning for vulnerability analysis of cyber-physical systems.
We consider a control architecture widely used in CPS (e.g., robotics) where the low-level control is based on the extended Kalman filter (EKF) and an anomaly detector.
To facilitate analyzing the impact that potential sensing attacks could have, our objective is to develop learning-enabled attack generators.
arXiv Detail & Related papers (2021-03-10T06:52:26Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.