Formal verification in Solidity and Move: insights from a comparative analysis
- URL: http://arxiv.org/abs/2502.13929v1
- Date: Wed, 19 Feb 2025 18:06:01 GMT
- Title: Formal verification in Solidity and Move: insights from a comparative analysis
- Authors: Massimo Bartoletti, Silvia Crafa, Enrico Lipparini
- Abstract summary: Solidity and Move are two contract languages with different designs and approaches to verification. This paper investigates how the two languages impact verification, and what is the state-of-the-art of verification tools for the two languages. Our investigation is supported by an open dataset of verification tasks performed in Certora and in the Aptos Move Prover.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Formal verification plays a crucial role in making smart contracts safer: it can find bugs or guarantee their absence, and it can check whether the business logic is correctly implemented. For Solidity, even though several mature verification tools already exist, the semantic quirks of the language can make verification quite hard in practice. Move, on the other hand, has been designed with security and verification in mind, and since its early stages it has been accompanied by a formal verification tool, the Move Prover. In this paper, we investigate through a comparative analysis: 1) how the different designs of the two contract languages impact verification, and 2) what the state-of-the-art of verification tools for the two languages is, and how they compare on three paradigmatic use cases. Our investigation is supported by an open dataset of verification tasks performed in Certora and in the Aptos Move Prover.
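To make concrete the kind of verification workflow the abstract refers to on the Move side, here is a minimal hedged sketch of a Move function annotated with a specification block for the Aptos Move Prover. The module name, address, and `Counter` struct are invented for illustration and are not taken from the paper's dataset:

```move
module 0x42::counter {
    /// A hypothetical resource holding a single counter value.
    struct Counter has key { value: u64 }

    /// Increment the counter stored under `addr`.
    public fun increment(addr: address) acquires Counter {
        let c = borrow_global_mut<Counter>(addr);
        c.value = c.value + 1;
    }

    spec increment {
        // The function aborts if no Counter resource exists at `addr` ...
        aborts_if !exists<Counter>(addr);
        // ... or if the increment would overflow a u64.
        aborts_if global<Counter>(addr).value + 1 > MAX_U64;
        // Otherwise, the stored value increases by exactly one.
        ensures global<Counter>(addr).value == old(global<Counter>(addr).value) + 1;
    }
}
```

By default the Move Prover treats the listed `aborts_if` conditions as exhaustive, so it will also flag any abort path the specification fails to mention. This illustrates the "designed with verification in mind" point: specifications live alongside the code and are checked statically, whereas for Solidity a tool such as Certora attaches rules written in a separate specification language (CVL) to already-compiled contracts.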
Related papers
- VerifiAgent: a Unified Verification Agent in Language Model Reasoning [10.227089771963943]
We propose a unified verification agent that integrates two levels of verification: meta-verification and tool-based adaptive verification.
VerifiAgent autonomously selects appropriate verification tools based on the reasoning type.
It can be effectively applied to inference scaling, achieving better results with fewer generated samples and costs.
arXiv Detail & Related papers (2025-04-01T04:05:03Z)
- A SAT-based approach to rigorous verification of Bayesian networks [13.489622701621698]
We introduce a verification framework tailored for Bayesian networks, designed to address these drawbacks.
Our framework comprises two key components: (1) a two-step compilation and encoding scheme that translates Bayesian networks into Boolean logic literals, and (2) formal verification queries that leverage these literals to verify various properties encoded as constraints.
We benchmark the efficiency of our verification scheme and demonstrate its practical utility in real-world scenarios.
arXiv Detail & Related papers (2024-08-02T03:06:51Z)
- Large Language Models for Judicial Entity Extraction: A Comparative Study [0.0]
This research investigates the application of Large Language Models in identifying domain-specific entities within case law documents.
The study evaluates the performance of state-of-the-art Large Language Model architectures, including Large Language Model Meta AI 3, Mistral, and Gemma.
arXiv Detail & Related papers (2024-07-08T09:49:03Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Enhancing Translation Validation of Compiler Transformations with Large Language Models [5.103162976634333]
This paper presents a framework that integrates Large Language Models (LLMs) into translation validation.
We first utilize existing formal verification tools for translation validation.
When formal verification tools are unable to confirm a transformation's soundness, our framework employs fine-tuned LLMs for prediction.
arXiv Detail & Related papers (2024-01-30T07:24:04Z)
- Do Language Models Learn about Legal Entity Types during Pretraining? [4.604003661048267]
We show that Llama2 performs well on certain entities and exhibits potential for substantial improvement with optimized prompt templates.
Llama2 appears to frequently overlook syntactic cues, a shortcoming less present in BERT-based architectures.
arXiv Detail & Related papers (2023-10-19T18:47:21Z)
- Trusta: Reasoning about Assurance Cases with Formal Methods and Large Language Models [4.005483185111992]
Trustworthiness Derivation Tree Analyzer (Trusta) is a desktop application designed to automatically construct and verify TDTs.
It has a built-in Prolog interpreter in its backend, and is supported by the constraint solvers Z3 and MONA.
Trusta can extract formal constraints from text in natural languages, facilitating an easier interpretation and validation process.
arXiv Detail & Related papers (2023-09-22T15:42:43Z)
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
- Towards Unsupervised Recognition of Token-level Semantic Differences in Related Documents [61.63208012250885]
We formulate recognizing semantic differences as a token-level regression task.
We study three unsupervised approaches that rely on a masked language model.
Our results show that an approach based on word alignment and sentence-level contrastive learning has a robust correlation to gold labels.
arXiv Detail & Related papers (2023-05-22T17:58:04Z)
- SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval [75.05173891207214]
Legal case retrieval plays a core role in the intelligent legal system.
Most existing language models have difficulty understanding the long-distance dependencies between different structures.
We propose a new Structure-Aware pre-traIned language model for LEgal case Retrieval.
arXiv Detail & Related papers (2023-04-22T10:47:01Z)
- DialFact: A Benchmark for Fact-Checking in Dialogue [56.63709206232572]
We construct DialFact, a benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia.
We find that existing fact-checking models trained on non-dialogue data like FEVER fail to perform well on our task.
We propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue.
arXiv Detail & Related papers (2021-10-15T17:34:35Z)
- Claim Check-Worthiness Detection as Positive Unlabelled Learning [53.24606510691877]
Claim check-worthiness detection is a critical component of fact checking systems.
We illuminate a central challenge in claim check-worthiness detection underlying all of these tasks.
Our best performing method is a unified approach which automatically corrects for this using a variant of positive unlabelled learning.
arXiv Detail & Related papers (2020-03-05T16:06:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.