ESBMC-Python: A Bounded Model Checker for Python Programs
- URL: http://arxiv.org/abs/2407.03472v1
- Date: Wed, 3 Jul 2024 19:38:14 GMT
- Title: ESBMC-Python: A Bounded Model Checker for Python Programs
- Authors: Bruno Farias, Rafael Menezes, Eddie B. de Lima Filho, Youcheng Sun, Lucas C. Cordeiro
- Abstract summary: This paper introduces a tool for verifying Python programs.
It transforms an input program into an abstract syntax tree to infer and add type information.
It then translates the program into an intermediate representation, which is converted into formulae evaluated with satisfiability modulo theories (SMT) solvers.
- Score: 8.980206368890013
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a tool for verifying Python programs, which, using type annotation and front-end processing, can harness the capabilities of a bounded model-checking (BMC) pipeline. It transforms an input program into an abstract syntax tree to infer and add type information. Then, it translates Python expressions and statements into an intermediate representation. Finally, it converts this description into formulae evaluated with satisfiability modulo theories (SMT) solvers. The proposed approach was realized with the efficient SMT-based bounded model checker (ESBMC), which resulted in a tool called ESBMC-Python, the first BMC-based Python-code verifier. Experimental results, with a test suite specifically developed for this purpose, showed its effectiveness, where successful and failed tests were correctly evaluated. Moreover, it found a real problem in the Ethereum Consensus Specification.
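The front end described in the abstract can be illustrated with a minimal sketch (hypothetical code, not taken from ESBMC-Python): Python's standard `ast` module parses a type-annotated function, and the declared parameter types and the assertion to be verified are recovered from the resulting syntax tree. The `div` function and all names below are illustrative assumptions.

```python
import ast

# A type-annotated function under verification; the assert is the
# safety property a bounded model checker would try to violate.
source = """
def div(a: int, b: int) -> int:
    assert b != 0
    return a // b
"""

tree = ast.parse(source)
func = tree.body[0]  # the FunctionDef node

# Collect declared types from the parameter annotations.
arg_types = {arg.arg: arg.annotation.id for arg in func.args.args}
print(arg_types)                             # {'a': 'int', 'b': 'int'}

# The first statement is the assertion to check.
print(isinstance(func.body[0], ast.Assert))  # True
```

A real BMC pipeline would lower this annotated tree into an intermediate representation and then into SMT formulae; the sketch only covers the parsing and type-recovery stage.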
Related papers
- PyVeritas: On Verifying Python via LLM-Based Transpilation and Bounded Model Checking for C [18.52519530244078]
Python has become the dominant language for general-purpose programming, yet it lacks robust tools for formal verification. We propose PyVeritas, a novel framework that leverages Large Language Models (LLMs) for high-level transpilation from Python to C.
arXiv Detail & Related papers (2025-08-11T16:49:07Z) - TypyBench: Evaluating LLM Type Inference for Untyped Python Repositories [9.127866457704162]
Large language models (LLMs) have shown promise in code understanding, but their type inference capabilities remain underexplored. We introduce TypyBench, a benchmark designed to evaluate LLMs' type inference across entire Python repositories. Our evaluation of various LLMs on a curated dataset of 50 high-quality Python repositories reveals that, although LLMs achieve decent TypeSim scores, they struggle with complex nested types and exhibit significant type consistency errors.
arXiv Detail & Related papers (2025-07-28T14:54:00Z) - PyPulse: A Python Library for Biosignal Imputation [58.35269251730328]
We introduce PyPulse, a Python package for imputation of biosignals in both clinical and wearable sensor settings.
PyPulse provides a modular and extensible framework with high ease of use for a broad user base, including non-machine-learning bioresearchers.
We released PyPulse under the MIT License on GitHub and PyPI.
arXiv Detail & Related papers (2024-12-09T11:00:55Z) - Chat-like Asserts Prediction with the Support of Large Language Model [34.140962210930624]
We introduce Chat-like execution-based Asserts Prediction (referred to as "tool") for generating meaningful assert statements for Python projects.
The tool uses persona, Chain-of-Thought, and one-shot learning techniques in its prompt design and conducts rounds of communication with an LLM and a Python interpreter.
Our evaluation demonstrates that the tool achieves 64.7% accuracy for single assert statement generation and 62% for overall assert statement generation.
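The execution-based part of such a loop can be sketched as follows (a hypothetical minimal example, not the tool's actual code): a candidate assert statement, as an LLM might propose it, is executed by the Python interpreter and accepted only if it passes. The `add` function and the candidate string are illustrative assumptions.

```python
# Function under test (illustrative).
def add(a, b):
    return a + b

# Candidate assert statement, e.g. proposed by an LLM.
candidate = "assert add(2, 3) == 5"

# Execution-based validation: run the candidate in the interpreter
# and keep it only if it does not raise AssertionError.
try:
    exec(candidate, {"add": add})
    print("candidate accepted")   # prints "candidate accepted"
except AssertionError:
    print("candidate rejected")
```

A full system would iterate: failing candidates are sent back to the LLM with the error as feedback for another round.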
arXiv Detail & Related papers (2024-07-31T08:27:03Z) - pyvene: A Library for Understanding and Improving PyTorch Models via Interventions [79.72930339711478]
pyvene is an open-source library that supports customizable interventions on a range of different PyTorch modules.
We show how pyvene provides a unified framework for performing interventions on neural models and sharing the intervened-upon models with others.
arXiv Detail & Related papers (2024-03-12T16:46:54Z) - arfpy: A python package for density estimation and generative modeling with adversarial random forests [1.3597551064547502]
This paper introduces arfpy, a python implementation of Adversarial Random Forests (ARF) (Watson et al., 2023).
It is a lightweight procedure for synthesizing new data that resembles some given data.
arXiv Detail & Related papers (2023-11-13T14:28:21Z) - Noisy Pair Corrector for Dense Retrieval [59.312376423104055]
We propose a novel approach called Noisy Pair Corrector (NPC).
NPC consists of a detection module and a correction module.
We conduct experiments on the text-retrieval benchmarks Natural Questions and TriviaQA and the code-search benchmarks StaQC and SO-DS.
arXiv Detail & Related papers (2023-11-07T08:27:14Z) - SPRINT: A Unified Toolkit for Evaluating and Demystifying Zero-shot Neural Sparse Retrieval [92.27387459751309]
We provide SPRINT, a unified Python toolkit for evaluating neural sparse retrieval.
We establish strong and reproducible zero-shot sparse retrieval baselines on the widely used BEIR benchmark.
We show that SPLADEv2 produces sparse representations with a majority of tokens outside of the original query and document.
arXiv Detail & Related papers (2023-07-19T22:48:02Z) - PyVBMC: Efficient Bayesian inference in Python [8.924669503280333]
PyVBMC is a Python implementation of the Variational Bayesian Monte Carlo (VBMC) algorithm for posterior and model inference.
VBMC is designed for efficient parameter estimation and model assessment when model evaluations are mildly-to-very expensive.
arXiv Detail & Related papers (2023-03-16T17:37:22Z) - PyHHMM: A Python Library for Heterogeneous Hidden Markov Models [63.01207205641885]
PyHHMM is an object-oriented Python implementation of Heterogeneous Hidden Markov Models (HHMMs).
PyHHMM emphasizes features not supported in similar available frameworks: a heterogeneous observation model, missing-data inference, different model order selection criteria, and semi-supervised training.
PyHHMM relies on the numpy, scipy, scikit-learn, and seaborn Python packages, and is distributed under the Apache-2.0 License.
arXiv Detail & Related papers (2022-01-12T07:32:36Z) - Program Synthesis with Large Language Models [40.41120807053989]
We evaluate large language models for program synthesis in Python.
We find that synthesis performance scales log-linearly with model size.
We find that even our best models are generally unable to predict the output of a program given a specific input.
arXiv Detail & Related papers (2021-08-16T03:57:30Z) - Using Python for Model Inference in Deep Learning [0.6027358520885614]
We show how it is possible to meet performance and packaging constraints while performing inference in Python.
We present a way of using multiple Python interpreters within a single process to achieve scalable inference.
arXiv Detail & Related papers (2021-04-01T04:48:52Z) - pyBART: Evidence-based Syntactic Transformations for IE [52.93947844555369]
We present pyBART, an easy-to-use open-source Python library for converting English UD trees to Enhanced UD graphs or to our representation.
When evaluated in a pattern-based relation extraction scenario, our representation results in higher extraction scores than Enhanced UD, while requiring fewer patterns.
arXiv Detail & Related papers (2020-05-04T07:38:34Z) - OPFython: A Python-Inspired Optimum-Path Forest Classifier [68.8204255655161]
This paper proposes a Python-based Optimum-Path Forest framework, denoted as OPFython.
As OPFython is a Python-based library, it provides a friendlier environment and a faster prototyping workspace than the C language.
arXiv Detail & Related papers (2020-01-28T15:46:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.