SNIP: Speculative Execution and Non-Interference Preservation for Compiler Transformations
- URL: http://arxiv.org/abs/2407.15080v1
- Date: Sun, 21 Jul 2024 07:30:30 GMT
- Authors: Sören van der Wall, Roland Meyer
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of preserving non-interference across compiler transformations under speculative semantics. We develop a proof method that ensures the preservation uniformly across all source programs. The basis of our proof method is a new form of simulation relation. It operates over directives that model the attacker's control over the micro-architectural state, and it accounts for the fact that the compiler transformation may change the influence of the micro-architectural state on the execution (and hence the directives). Using our proof method, we show the correctness of dead code elimination. When we tried to prove register allocation correct, we identified a previously unknown weakness that introduces violations to non-interference. We have confirmed the weakness for a mainstream compiler on code from the libsodium cryptographic library. To reclaim security once more, we develop a novel static analysis that operates on a product of source program and register-allocated program. Using the analysis, we present an automated fix to existing register allocation implementations. We prove the correctness of the fixed register allocations with our proof method.
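One plausible shape of the register-allocation weakness described in the abstract can be made concrete with a toy model. The Python sketch below is illustrative only: the addresses, the directive model, and the spill-slot-reuse scenario are assumptions, not the paper's formal semantics. It treats an execution's observations as the trace of memory addresses touched (a cache channel) and lets an attacker directive decide whether a branch is mispredicted; spilling a secret to a stack slot that is later reused lets a misspeculated reload pick up the stale secret and leak it through an address:

```python
# Toy model of speculative non-interference (illustrative assumptions
# throughout; not the paper's formal semantics). Observations are the
# memory addresses an execution touches; the attacker directive
# `mispredict` forces a branch onto its speculative path.

TABLE = 0x1000  # base address of a public lookup table (made up)
SLOT  = 0x2000  # stack slot introduced by register allocation (made up)

def source(secret, public, mispredict):
    """Source program: secret and public each live in a register."""
    obs = []
    if mispredict:                  # directive: take the speculative path
        obs.append(TABLE + public)  # index is public, so the trace is
    return obs                      # the same for every secret

def after_regalloc(secret, public, mispredict):
    """Transformed program: the secret was spilled to SLOT, which is
    later reused for `public`. Under misspeculation the reload can see
    the stale secret still sitting in the slot."""
    obs = []
    mem = {SLOT: secret}            # spill: secret written to the stack
    obs.append(SLOT)                # the store's address is observable
    loaded = mem[SLOT] if mispredict else public
    if mispredict:
        obs.append(SLOT)            # speculative reload of the slot
        obs.append(TABLE + loaded)  # address now depends on the secret
    return obs

# Non-interference: traces must agree for any two secrets, under every
# directive. The source program passes; the transformed one does not.
for mis in (False, True):
    assert source(0, 3, mis) == source(1, 3, mis)
assert after_regalloc(0, 3, True) != after_regalloc(1, 3, True)
```

With `mispredict=False` (the sequential semantics) both versions are indistinguishable; only the directive-controlled speculative path separates them, which is why a preservation proof has to reason over directives rather than over sequential executions alone.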
Related papers
- Bisimulation Learning
We compute finite bisimulations of state transition systems with large, possibly infinite state space.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- FoC: Figure out the Cryptographic Functions in Stripped Binaries with LLMs
We propose a novel framework called FoC to Figure out the Cryptographic functions in stripped binaries.
FoC-BinLLM outperforms ChatGPT by 14.61% on the ROUGE-L score.
FoC-Sim outperforms the previous best methods with a 52% higher Recall@1.
arXiv Detail & Related papers (2024-03-27T09:45:33Z)
- SECOMP: Formally Secure Compilation of Compartmentalized C Programs
Undefined behavior in C often causes devastating security vulnerabilities.
We introduce SECOMP, a compiler for compartmentalized C code with machine-checked proofs.
This is the first time such a strong criterion is proven for a mainstream programming language.
arXiv Detail & Related papers (2024-01-29T16:32:36Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs
We present a neuro-symbolic integration method, in which a neural LLM is used to represent the knowledge of the problem.
An LLM-free symbolic solver is adopted to do deliberative reasoning using the knowledge.
arXiv Detail & Related papers (2023-11-16T11:26:21Z)
- Leveraging Large Language Models for Automated Proof Synthesis in Rust
Large Language Models (LLMs) have shown success in code analysis and synthesis.
We present a combination of LLMs and static analysis to synthesize invariants, assertions, and other proof structures for a Rust-based formal verification framework called Verus.
Our prototype decomposes the verification task into multiple smaller ones, iteratively queries GPT-4, and combines its output with lightweight static analysis.
arXiv Detail & Related papers (2023-11-07T05:47:47Z)
- A Scalable Formal Verification Methodology for Data-Oblivious Hardware
We propose a novel methodology to formally verify data-oblivious behavior in hardware using standard property checking techniques.
We show that proving this inductive property is sufficient to exhaustively verify data-obliviousness at the microarchitectural level.
One case study uncovered a data-dependent timing violation in the extensively verified and highly secure IBEX RISC-V core.
arXiv Detail & Related papers (2023-08-15T13:19:17Z)
- LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean.
Existing methods are difficult to reproduce or build on, due to private code, data, and compute requirements.
This paper introduces LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks.
We develop ReProver: an LLM-based prover augmented with retrieval for selecting premises from a vast math library.
arXiv Detail & Related papers (2023-06-27T17:05:32Z)
- Formalizing Stack Safety as a Security Property
We propose a new formal characterization of stack safety using concepts from language-based security.
This formulation is motivated by a particular class of enforcement mechanisms, the "lazy" stack safety micro-policies studied by Roessler and DeHon.
We validate the properties by using them to distinguish between correct and incorrect implementations of Roessler and DeHon's micro-policies.
arXiv Detail & Related papers (2021-05-02T08:18:34Z)
- Global Optimization of Objective Functions Represented by ReLU Networks
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Towards a Theoretical Understanding of the Robustness of Variational Autoencoders
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
We develop a novel criterion for robustness in probabilistic models: $r$-robustness.
We show that VAEs trained using disentangling methods score well under our robustness metrics.
arXiv Detail & Related papers (2020-07-14T21:22:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.