Detecting speculative data flow vulnerabilities using weakest precondition reasoning
- URL: http://arxiv.org/abs/2504.19128v1
- Date: Sun, 27 Apr 2025 07:02:15 GMT
- Title: Detecting speculative data flow vulnerabilities using weakest precondition reasoning
- Authors: Graeme Smith
- Abstract summary: We introduce an approach for detecting the data flow vulnerabilities, Spectre-STL and Spectre-PSF, using weakest precondition reasoning. We validate our approach on a suite of litmus tests used to validate related approaches in the literature.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Speculative execution is a hardware optimisation technique where a processor, while waiting on the completion of a computation required for an instruction, continues to execute later instructions based on a predicted value of the pending computation. It came to the forefront of security research in 2018 with the disclosure of two related attacks, Spectre and Meltdown. Since then many similar attacks have been identified. While there has been much research on using formal methods to detect speculative execution vulnerabilities based on predicted control flow, there has been significantly less on vulnerabilities based on predicted data flow. In this paper, we introduce an approach for detecting the data flow vulnerabilities, Spectre-STL and Spectre-PSF, using weakest precondition reasoning. We validate our approach on a suite of litmus tests used to validate related approaches in the literature.
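The abstract's core technique, weakest precondition (wp) reasoning, works by transforming a postcondition backwards through a program: for an assignment `x := e`, wp substitutes `e` for `x` in the postcondition, and for sequencing it composes the transformations. A minimal sketch over a toy assignment language is shown below; the encoding and names here are illustrative assumptions of this summary, not the paper's actual tool, which extends wp to speculative store-to-load forwarding semantics.

```python
# Toy weakest precondition (wp) calculator. Expressions and predicates
# are encoded as: variables -> str, literals -> int,
# binary operations -> ("op", left, right) tuples.

def subst(expr, var, repl):
    """Substitute repl for every occurrence of variable var in expr."""
    if isinstance(expr, str):
        return repl if expr == var else expr
    if isinstance(expr, tuple):
        op, left, right = expr
        return (op, subst(left, var, repl), subst(right, var, repl))
    return expr  # integer literal: nothing to substitute

def wp(stmt, post):
    """Compute the weakest precondition of stmt with respect to post.

    wp(x := e, Q)  = Q[e/x]
    wp(S1; S2, Q)  = wp(S1, wp(S2, Q))
    """
    kind = stmt[0]
    if kind == "assign":                 # ("assign", x, e)
        _, x, e = stmt
        return subst(post, x, e)
    if kind == "seq":                    # ("seq", s1, s2)
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    raise ValueError(f"unknown statement kind: {kind}")

# Example: program  y := x + 1; z := y * 2  with postcondition  z < 10.
prog = ("seq",
        ("assign", "y", ("+", "x", 1)),
        ("assign", "z", ("*", "y", 2)))
post = ("<", "z", 10)

# wp yields (x + 1) * 2 < 10: the weakest condition on the initial
# state guaranteeing the postcondition holds after execution.
print(wp(prog, post))
```

In a speculative setting, the analysis must also account for executions where a load speculatively bypasses an earlier store to the same address (Spectre-STL), so the wp transformer is applied over these additional speculative behaviours rather than only the sequential ones sketched here.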
Related papers
- SILENT: A New Lens on Statistics in Software Timing Side Channels [10.872605368135343]
Recent attacks have challenged our understanding of what it means for code to execute in constant time on modern CPUs. We introduce a new algorithm for the analysis of timing measurements with strong, formal statistical guarantees. We demonstrate the necessity, effectiveness, and benefits of our approach on both synthetic benchmarks and real-world applications.
arXiv Detail & Related papers (2025-04-28T14:22:23Z)
- Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method [108.56493934296687]
We introduce a divergence-based calibration method, inspired by the divergence-from-randomness concept, to calibrate token probabilities for pretraining data detection. We have developed a Chinese-language benchmark, PatentMIA, to assess the performance of detection approaches for LLMs on Chinese text.
arXiv Detail & Related papers (2024-09-23T07:55:35Z)
- Learning-Based Approaches to Predictive Monitoring with Conformal Statistical Guarantees [2.1684857243537334]
This tutorial focuses on efficient methods for predictive monitoring (PM).
PM is the problem of detecting future violations of a given requirement from the current state of a system.
We present a general and comprehensive framework summarizing our approach to the predictive monitoring of CPSs.
arXiv Detail & Related papers (2023-12-04T15:16:42Z)
- Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
arXiv Detail & Related papers (2023-11-20T03:17:21Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- LMD: Light-weight Prediction Quality Estimation for Object Detection in Lidar Point Clouds [3.927702899922668]
Object detection on Lidar point cloud data is a promising technology for autonomous driving and robotics.
Uncertainty estimation is a crucial component for downstream tasks, and deep neural networks remain error-prone even for predictions with high confidence.
We propose LidarMetaDetect, a light-weight post-processing scheme for prediction quality estimation.
Our experiments show a significant increase of statistical reliability in separating true from false predictions.
arXiv Detail & Related papers (2023-06-13T15:13:29Z)
- Short Paper: Static and Microarchitectural ML-Based Approaches For Detecting Spectre Vulnerabilities and Attacks [0.0]
Spectre intrusions exploit speculative execution design vulnerabilities in modern processors.
Current state-of-the-art detection techniques utilize micro-architectural features or vulnerable speculative code to detect these threats.
We present the first comprehensive evaluation of static and microarchitectural analysis-assisted machine learning approaches to detect Spectre vulnerabilities.
arXiv Detail & Related papers (2022-10-26T03:55:39Z)
- ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation [125.52743832477404]
Adversarial Examples Detection (AED) is a crucial defense technique against adversarial attacks.
We propose a new technique, ADDMU, which combines two types of uncertainty estimation for both regular and far-boundary (FB) adversarial example detection.
Our new method outperforms previous methods by 3.6 and 6.0 AUC points under each scenario.
arXiv Detail & Related papers (2022-10-22T09:11:12Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing a Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the face presentation attack detection (fPAD) task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.