User-Guided Verification of Security Protocols via Sound Animation
- URL: http://arxiv.org/abs/2410.00676v1
- Date: Tue, 1 Oct 2024 13:34:35 GMT
- Title: User-Guided Verification of Security Protocols via Sound Animation
- Authors: Kangfeng Ye, Roberto Metere, Poonam Yadav
- Abstract summary: This paper implements the Dolev-Yao attack model using a variant of CSP based on Interaction Trees (ITrees) to compile protocols into animators.
To guarantee the soundness of our compilation, we mechanised our approach in the theorem prover Isabelle/HOL.
- Score: 3.094188181179751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current formal verification of security protocols relies on specialized researchers and complex tools, inaccessible to protocol designers who informally evaluate their work with emulators. This paper addresses this gap by embedding symbolic analysis into the design process. Our approach implements the Dolev-Yao attack model using a variant of CSP based on Interaction Trees (ITrees) to compile protocols into animators -- executable programs that designers can use for debugging and inspection. To guarantee the soundness of our compilation, we mechanised our approach in the theorem prover Isabelle/HOL. As traditionally done with symbolic tools, we refer to the Diffie-Hellman key exchange and the Needham-Schroeder public-key protocol (and Lowe's patched variant). We demonstrate how our animator can easily reveal the mechanics of attacks and verify corrections. This work facilitates security integration at the design level and supports further security property analysis and software-engineered integrations.
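For intuition, below is a minimal Python sketch of the kind of attack such an animator can surface: Lowe's man-in-the-middle attack on the Needham-Schroeder public-key protocol under a Dolev-Yao intruder, and why Lowe's patched variant (NSL) defeats it. This is an illustrative sketch only, not the paper's ITree/CSP animator or its Isabelle/HOL mechanisation; the term encoding and the helper names (enc, dec, run_attack) are assumptions made for the example.

```python
# Symbolic sketch of Lowe's man-in-the-middle attack on the Needham-Schroeder
# public-key protocol (NSPK) under a Dolev-Yao intruder, and of why Lowe's
# patched variant (NSL) stops it. Encodings and names are ad-hoc choices for
# this example, not the paper's formalisation.

def enc(pk, *payload):
    """Symbolic public-key encryption: opaque unless you hold the matching key."""
    return ("enc", pk, payload)

def dec(agent_key, msg):
    """Symbolic decryption succeeds only for the intended recipient."""
    kind, pk, payload = msg
    assert kind == "enc" and pk == agent_key, "cannot decrypt: wrong key"
    return payload

def run_attack(patched):
    """Replay Lowe's attack trace; `patched=True` uses the NSL form of message 2."""
    pkA, pkB, pkI = "pk(A)", "pk(B)", "pk(I)"
    Na, Nb = "Na", "Nb"
    intruder_knows = {pkA, pkB, pkI, "sk(I)"}    # Dolev-Yao: public data + own keys

    # 1. A starts a run with the intruder I (acting as an ordinary agent).
    m1 = enc(pkI, Na, "A")
    intruder_knows.update(dec(pkI, m1))          # addressed to I, so I can open it

    # 1'. I replays A's nonce to B, impersonating A.
    m1p = enc(pkB, Na, "A")

    # 2'. B answers what it believes is A.
    na, _claimed_sender = dec(pkB, m1p)
    m2p = enc(pkA, na, Nb, "B") if patched else enc(pkA, na, Nb)

    # 2. I cannot read m2p (encrypted for A) but forwards it unchanged.
    m2 = m2p

    # 3. A decrypts and checks the response.
    fields = dec(pkA, m2)
    if patched:
        _, nb, responder = fields
        if responder != "I":                     # A is running with I, not B
            return "A aborts: responder identity mismatch -- attack fails"
    else:
        _, nb = fields

    # A replies to its peer I with the second nonce, encrypted for I.
    m3 = enc(pkI, nb)
    intruder_knows.update(dec(pkI, m3))          # I learns Nb ...

    # 3'. ... and can complete the run with B, fully impersonating A.
    return f"attack succeeds: intruder knows {Nb}: {Nb in intruder_knows}"

print("NSPK:", run_attack(patched=False))  # attack succeeds
print("NSL: ", run_attack(patched=True))   # A aborts, attack fails
```

Running the sketch prints an attack trace outcome for NSPK and an abort for NSL, mirroring the kind of inspection the paper's animators are meant to support interactively.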
Related papers
- Decompiling Smart Contracts with a Large Language Model [51.49197239479266]
Despite Etherscan's 78,047,845 deployed smart contracts (as of May 26, 2025), a mere 767,520 (roughly 1%) are open source. This opacity necessitates the automated semantic analysis of on-chain smart contract bytecode. We introduce a pioneering decompilation pipeline that transforms bytecode into human-readable and semantically faithful Solidity code.
arXiv Detail & Related papers (2025-06-24T13:42:59Z) - Training Language Models to Generate Quality Code with Program Analysis Feedback [66.0854002147103]
Code generation with large language models (LLMs) is increasingly adopted in production but fails to ensure code quality. We propose REAL, a reinforcement learning framework that incentivizes LLMs to generate production-quality code.
arXiv Detail & Related papers (2025-05-28T17:57:47Z) - Exposing Go's Hidden Bugs: A Novel Concolic Framework [2.676686591720132]
We present Zorya, a novel methodology to evaluate Go programs comprehensively. Symbolic execution offers distinct advantages by systematically exploring execution paths to uncover vulnerabilities beyond conventional testing. Our solution employs Ghidra's P-Code as an intermediate representation (IR).
arXiv Detail & Related papers (2025-05-26T16:26:20Z) - MARVEL: Multi-Agent RTL Vulnerability Extraction using Large Language Models [15.35860248847857]
Large Language Models (LLMs) have been used to assist during this task, either directly or in conjunction with existing tools. We propose MARVEL, a multi-agent LLM framework for a unified approach to decision-making, tool use, and reasoning.
arXiv Detail & Related papers (2025-05-17T11:31:24Z) - Strands Rocq: Why is a Security Protocol Correct, Mechanically? [3.6840775431698893]
StrandsRocq is a mechanization of strand spaces in Coq.
It incorporates original proof techniques and a novel notion of maximal penetrator.
It provides a compositional proof of security for two simple authentication protocols.
arXiv Detail & Related papers (2025-02-18T13:34:58Z) - From Objects to Events: Unlocking Complex Visual Understanding in Object Detectors via LLM-guided Symbolic Reasoning [71.41062111470414]
The proposed plug-and-play framework interfaces with any open-vocabulary detector.
At its core, our approach includes a symbolic regression mechanism that explores relationship patterns among detected entities.
We compared our training-free framework against specialized event recognition systems across diverse application domains.
arXiv Detail & Related papers (2025-02-09T10:30:54Z) - CryptoFormalEval: Integrating LLMs and Formal Verification for Automated Cryptographic Protocol Vulnerability Detection [41.94295877935867]
We introduce a benchmark to assess the ability of Large Language Models to autonomously identify vulnerabilities in new cryptographic protocols.
We created a dataset of novel, flawed communication protocols and designed a method to automatically verify the vulnerabilities found by the AI agents.
arXiv Detail & Related papers (2024-11-20T14:16:55Z) - Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols [52.40622903199512]
This paper introduces AI-Control Games, a formal decision-making model of the red-teaming exercise as a multi-objective, partially observable game.
We apply our formalism to model, evaluate and synthesise protocols for deploying untrusted language models as programming assistants.
arXiv Detail & Related papers (2024-09-12T12:30:07Z) - EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector), which detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z) - Protocols to Code: Formal Verification of a Next-Generation Internet Router [9.971817718196997]
SCION routers run a cryptographic protocol for secure packet forwarding in an adversarial environment.
We verify both the protocol's network-wide security properties and low-level properties of its implementation.
This paper explains our approach, summarizes the main results, and distills lessons for the design and implementation of verifiable systems.
arXiv Detail & Related papers (2024-05-09T19:57:59Z) - Attack Tree Generation via Process Mining [0.0]
This work aims to provide a method for the automatic generation of Attack Trees from attack logs.
The main original feature of our approach is the use of Process Mining algorithms to synthesize Attack Trees.
Our approach is supported by a prototype that, apart from the derivation and translation of the model, provides the user with an Attack Tree in the RisQFLan format.
arXiv Detail & Related papers (2024-02-19T10:55:49Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus offers no built-in security, leaving in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z) - Design for Assurance: Employing Functional Verification Tools for Thwarting Hardware Trojan Threat in 3PIPs [13.216074408064117]
Third-party intellectual property cores are essential building blocks of modern system-on-chip and integrated circuit designs.
These design components usually come from vendors of different trust levels and may contain undocumented design functionality.
We develop a method for identifying and preventing hardware Trojans, employing functional verification tools and languages familiar to hardware designers.
arXiv Detail & Related papers (2023-11-21T03:32:07Z) - A Novel Approach to Identify Security Controls in Source Code [4.598579706242066]
This paper enumerates a comprehensive list of commonly used security controls and creates a dataset for each one of them.
It uses the state-of-the-art NLP technique Bidirectional Encoder Representations from Transformers (BERT) and the Tactic Detector from our prior work to show that security controls could be identified with high confidence.
arXiv Detail & Related papers (2023-07-10T21:14:39Z) - Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because parsed code naturally admits graph structures, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
arXiv Detail & Related papers (2021-09-07T21:24:36Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.