Libra: Architectural Support For Principled, Secure And Efficient Balanced Execution On High-End Processors (Extended Version)
- URL: http://arxiv.org/abs/2409.03743v1
- Date: Thu, 5 Sep 2024 17:56:19 GMT
- Authors: Hans Winderix, Marton Bognar, Lesly-Ann Daniel, Frank Piessens
- Abstract summary: Control-flow leakage (CFL) attacks enable an attacker to expose control-flow decisions of a victim program via side-channel observations.
Linearization has been widely believed to be the only effective countermeasure against CFL attacks.
We propose Libra, a generic and principled hardware-software codesign to efficiently address CFL on high-end processors.
- Score: 9.404954747748523
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Control-flow leakage (CFL) attacks enable an attacker to expose control-flow decisions of a victim program via side-channel observations. Linearization (i.e., elimination) of secret-dependent control flow is the main countermeasure against these attacks, yet it comes at a non-negligible cost. Conversely, balancing secret-dependent branches often incurs a smaller overhead, but is notoriously insecure on high-end processors. Hence, linearization has been widely believed to be the only effective countermeasure against CFL attacks. In this paper, we challenge this belief and investigate an unexplored alternative: how to securely balance secret-dependent branches on higher-end processors? We propose Libra, a generic and principled hardware-software codesign to efficiently address CFL on high-end processors. We perform a systematic classification of hardware primitives leaking control flow from the literature, and provide guidelines to handle them with our design. Importantly, Libra enables secure control-flow balancing without the need to disable performance-critical hardware such as the instruction cache and the prefetcher. We formalize the semantics of Libra and propose a code transformation algorithm for securing programs, which we prove correct and secure. Finally, we implement and evaluate Libra on an out-of-order RISC-V processor, showing performance overhead on par with insecure balanced code, and outperforming state-of-the-art linearized code by 19.3%.
Related papers
- The Unlikely Hero: Nonideality in Analog Photonic Neural Networks as Built-in Defender Against Adversarial Attacks [7.042495891256446]
The adversarial robustness of photonic analog mixed-signal AI hardware remains unexplored.
Our framework proactively protects sensitive weights via pre-attack unary weight encoding and post-attack vulnerability-aware weight locking.
Our framework maintains near-ideal on-chip inference accuracy under adversarial bit-flip attacks with merely 3% memory overhead.
arXiv Detail & Related papers (2024-10-02T07:27:26Z)
- A Scheduling-Aware Defense Against Prefetching-Based Side-Channel Attacks [16.896693436047137]
Speculative loading of memory, called prefetching, is common in real-world CPUs.
Prefetching can be exploited to bypass process isolation and leak secrets, such as keys used in RSA, AES, and ECDH implementations.
We implement our countermeasure for an x86_64 and an ARM processor.
arXiv Detail & Related papers (2024-10-01T07:12:23Z)
- The Impact of SBOM Generators on Vulnerability Assessment in Python: A Comparison and a Novel Approach [56.4040698609393]
Software Bill of Materials (SBOM) has been promoted as a tool to increase transparency and verifiability in software composition.
Current SBOM generation tools often suffer from inaccuracies in identifying components and dependencies.
We propose PIP-sbom, a novel pip-inspired solution that addresses these shortcomings.
arXiv Detail & Related papers (2024-09-10T10:12:37Z)
- Providing High-Performance Execution with a Sequential Contract for Cryptographic Programs [3.34371579019566]
Constant-time programming is a widely deployed approach to harden cryptographic programs against side channel attacks.
Modern processors violate the underlying assumptions of constant-time policies by speculatively executing unintended paths of the program.
We propose Cassandra, a novel hardware-software mechanism to protect constant-time cryptographic code against speculative control flow based attacks.
arXiv Detail & Related papers (2024-06-06T17:34:48Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods, which design a backdoor for the input/output space of diffusion models, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- LightFAt: Mitigating Control-flow Explosion via Lightweight PMU-based Control-flow Attestation [0.9999629695552195]
Remote execution often deals with sensitive data or executes proprietary software.
Control-flow attestation ensures the code is executed in a non-compromised environment by calculating a potentially large sequence of cryptographic hash values.
In this work, we propose LightFAt: a lightweight control-flow attestation scheme.
arXiv Detail & Related papers (2024-04-03T09:55:15Z)
- A Novel Approach to Identify Security Controls in Source Code [4.598579706242066]
This paper enumerates a comprehensive list of commonly used security controls and creates a dataset for each one of them.
It uses the state-of-the-art NLP technique Bidirectional Encoder Representations from Transformers (BERT) and the Tactic Detector from our prior work to show that security controls could be identified with high confidence.
arXiv Detail & Related papers (2023-07-10T21:14:39Z)
- Actor-Critic based Improper Reinforcement Learning [61.430513757337486]
We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process.
We propose two algorithms: (1) a Policy Gradient-based approach; and (2) an algorithm that can switch between a simple Actor-Critic scheme and a Natural Actor-Critic scheme.
arXiv Detail & Related papers (2022-07-19T05:55:02Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications for the system, and inspect the difference in agent-proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)