SV-LIB 1.0: A Standard Exchange Format for Software-Verification Tasks
- URL: http://arxiv.org/abs/2511.21509v1
- Date: Wed, 26 Nov 2025 15:44:54 GMT
- Title: SV-LIB 1.0: A Standard Exchange Format for Software-Verification Tasks
- Authors: Dirk Beyer, Gidon Ernst, Martin Jonáš, Marian Lingsch-Rosenfeld,
- Abstract summary: SV-LIB is an exchange format and intermediate language for software-verification tasks. This paper presents version 1.0 of the SV-LIB format, including its design goals, the syntax, and informal semantics.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past two decades, significant research and development effort has gone into verification tools for individual languages, such as C, C++, and Java. Many of the underlying verification approaches are in fact language-agnostic, and technology transfer would benefit if the implementations could also be used for other programming and modeling languages. To address this problem, we propose SV-LIB, an exchange format and intermediate language for software-verification tasks, including programs, specifications, and verification witnesses. SV-LIB is based on well-known concepts from imperative programming languages and uses SMT-LIB to represent the expressions and sorts used in the program. This makes it easy to parse and to integrate into existing infrastructure, since many verification tools are already based on SMT solvers. Furthermore, SV-LIB defines a witness format for both correct and incorrect SV-LIB programs, together with means for specifying witness-validation tasks. This makes it possible both to implement independent witness validators and to reuse some verifiers as validators for witnesses. This paper presents version 1.0 of the SV-LIB format, including its design goals, the syntax, and informal semantics. Formal semantics and extensions for concurrency are planned for future versions.
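The abstract's claim that reusing SMT-LIB "makes it easy to parse" rests on the fact that SMT-LIB terms are plain s-expressions. As a hedged illustration (this is a generic s-expression parser, not the SV-LIB reference parser, and the sample term is ordinary SMT-LIB rather than actual SV-LIB syntax), a complete recursive parser fits in a few lines:

```python
# Minimal sketch: SMT-LIB terms, as reused by SV-LIB for expressions
# and sorts, are s-expressions, so a full parser is tiny.

def tokenize(src: str) -> list[str]:
    """Split an s-expression string into parenthesis and atom tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens: list[str]):
    """Recursively build nested Python lists from the token stream."""
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    return token

# An SMT-LIB term of the kind a verification condition might contain.
term = "(and (> x 0) (< x 10))"
ast = parse(tokenize(term))
print(ast)  # ['and', ['>', 'x', '0'], ['<', 'x', '10']]
```

A tool built on an SMT solver already ships a parser like this, which is the integration advantage the abstract points to.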
Related papers
- Evaluating Large Language Models for Code Translation: Effects of Prompt Language and Prompt Design
Large language models (LLMs) have shown promise for automated source-code translation. Yet comparative evidence on how model choice, prompt design, and prompt language shape translation quality remains limited. This study conducts a systematic empirical assessment of state-of-the-art LLMs for code translation among C++, Java, Python, and C#.
arXiv Detail & Related papers (2025-09-16T11:30:10Z)
- What Challenges Do Developers Face When Using Verification-Aware Programming Languages?
In software development, increasing software reliability often involves testing. For complex and critical systems, developers can use Design by Contract (DbC) methods to define precise specifications that software components must satisfy. Verification-Aware (VA) programming languages support DbC and formal verification at compile time or run time, offering stronger correctness guarantees than traditional testing.
arXiv Detail & Related papers (2025-06-30T10:17:39Z)
- Correctness Witnesses with Function Contracts
We propose an extension to the existing witness format 2.0 to allow for the specification of function contracts. This allows for the export of more information from tools and for the exchange of information with tools that require function contracts.
arXiv Detail & Related papers (2025-01-21T17:27:59Z)
- SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition
We propose SVTRv2, a CTC model endowed with the ability to handle text irregularities and model linguistic context. We extensively evaluate SVTRv2 on both standard and recent challenging benchmarks. SVTRv2 surpasses most EDTRs across the scenarios in terms of accuracy and inference speed.
arXiv Detail & Related papers (2024-11-24T14:21:35Z)
- AdaCCD: Adaptive Semantic Contrasts Discovery Based Cross Lingual Adaptation for Code Clone Detection
AdaCCD is a novel cross-lingual adaptation method that can detect code clones in a new language without annotations in that language.
We evaluate the cross-lingual adaptation results of AdaCCD by constructing a multilingual code clone detection benchmark covering 5 programming languages.
arXiv Detail & Related papers (2023-11-13T12:20:48Z)
- ESPnet-ST-v2: Multipurpose Spoken Language Translation Toolkit
ESPnet-ST-v2 is a revamp of the open-source ESPnet-ST toolkit.
This paper describes the overall design, example models for each task, and performance benchmarking behind ESPnet-ST-v2.
arXiv Detail & Related papers (2023-04-10T14:05:22Z)
- Code Comment Inconsistency Detection with BERT and Longformer
Comments, or natural language descriptions of source code, are standard practice among software developers.
When the code is modified without an accompanying correction to the comment, an inconsistency between the comment and code can arise.
We propose two models to detect such inconsistencies in a natural language inference (NLI) context.
arXiv Detail & Related papers (2022-07-29T02:43:51Z)
- EASE: Entity-Aware Contrastive Learning of Sentence Embedding
EASE is a novel method for learning sentence embeddings via contrastive learning between sentences and their related entities.
We show that EASE exhibits competitive or better performance in English semantic textual similarity (STS) and short text clustering (STC) tasks.
arXiv Detail & Related papers (2022-05-09T13:22:44Z)
- AVATAR: A Parallel Corpus for Java-Python Program Translation
Program translation refers to migrating source code from one language to another.
We present AVATAR, a collection of 9,515 programming problems and their solutions written in two popular languages, Java and Python.
arXiv Detail & Related papers (2021-08-26T05:44:20Z)
- Zero-Shot Cross-lingual Semantic Parsing
We study cross-lingual semantic parsing as a zero-shot problem without parallel data for 7 test languages.
We propose a multi-task encoder-decoder model to transfer parsing knowledge to additional languages using only English-Logical form paired data.
Our system frames zero-shot parsing as a latent-space alignment problem and finds that pre-trained models can be improved to generate logical forms with minimal cross-lingual transfer penalty.
arXiv Detail & Related papers (2021-04-15T16:08:43Z)
- VECO: Variable and Flexible Cross-lingual Pre-training for Language Understanding and Generation
We plug a cross-attention module into the Transformer encoder to explicitly build the interdependence between languages.
It can effectively avoid the degeneration of predicting masked words only conditioned on the context in its own language.
The proposed cross-lingual model delivers new state-of-the-art results on various cross-lingual understanding tasks of the XTREME benchmark.
arXiv Detail & Related papers (2020-10-30T03:41:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.