An Agile Formal Specification Language Design Based on K Framework
- URL: http://arxiv.org/abs/2404.18515v1
- Date: Mon, 29 Apr 2024 09:00:50 GMT
- Title: An Agile Formal Specification Language Design Based on K Framework
- Authors: Jianyu Zhang, Long Zhang, Yixuan Wu, Feng Yang
- Abstract summary: This paper introduces an Agile Formal Specification Language (ASL).
The design of ASL incorporates agile design principles, making the writing of formal specifications simpler, more efficient, and scalable.
A specification translation algorithm is developed, capable of converting ASL into K formal specification language that can be executed for verification.
- Score: 11.042686307366818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Formal Methods (FMs) are currently essential for verifying the safety and reliability of software systems. However, the specification writing in formal methods tends to be complex and challenging to learn, requiring familiarity with various intricate formal specification languages and verification technologies. In response to the increasing complexity of software frameworks, existing specification writing methods fall short in meeting agility requirements. To address this, this paper introduces an Agile Formal Specification Language (ASL). The ASL is defined based on the K Framework and YAML Ain't Markup Language (YAML). The design of ASL incorporates agile design principles, making the writing of formal specifications simpler, more efficient, and scalable. Additionally, a specification translation algorithm is developed, capable of converting ASL into K formal specification language that can be executed for verification. Experimental evaluations demonstrate that the proposed method significantly reduces the code size needed for specification writing, enhancing agility in formal specification writing.
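To make the core idea of the abstract concrete, the sketch below shows what a translation from a YAML-based specification into a K-style module could look like. This is a hypothetical illustration only: the YAML keys (`module`, `rules`, `when`, `rewrite`) and the emitted K skeleton are invented for the example and are not the ASL syntax or translation algorithm defined in the paper.

```python
# Hypothetical sketch: translating a small YAML-based, ASL-like spec into a
# K-style rule skeleton. The YAML schema and the K output are simplified
# placeholders, not the paper's actual ASL definition or algorithm.
import yaml  # PyYAML

ASL_SPEC = """
module: COUNTER-SPEC
rules:
  - name: increment
    when: "counter(N)"
    rewrite: "counter(N +Int 1)"
  - name: reset
    when: "counter(_)"
    rewrite: "counter(0)"
"""

def asl_to_k(asl_text: str) -> str:
    """Convert the illustrative YAML spec into a K module given as plain text."""
    spec = yaml.safe_load(asl_text)
    lines = [f"module {spec['module']}"]
    for rule in spec.get("rules", []):
        lines.append(f"  // {rule['name']}")
        lines.append(f"  rule <k> {rule['when']} => {rule['rewrite']} ... </k>")
    lines.append("endmodule")
    return "\n".join(lines)

if __name__ == "__main__":
    print(asl_to_k(ASL_SPEC))
```

The point of such a translation layer is that the YAML front end stays short and declarative, while the generated K module is what gets executed for verification; the paper's evaluation of reduced code size rests on this separation.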
Related papers
- Evaluating LLM-driven User-Intent Formalization for Verification-Aware Languages [6.0608817611709735]
We propose a metric for evaluating the quality of specifications for verification-aware languages.
We show that our metric agrees closely with human judgments on a human-labeled dataset of Dafny specifications for the popular MBPP code-generation benchmark.
We also outline formal verification challenges that need to be addressed to apply the technique more widely.
arXiv Detail & Related papers (2024-06-14T06:52:08Z) - Identifying and Analyzing Task-Encoding Tokens in Large Language Models [55.03191279766383]
In this paper, we identify and analyze task-encoding tokens on whose representations the task performance depends.
We show that template and stopword tokens are the most prone to be task-encoding.
Our work sheds light on how large language models (LLMs) learn to perform a task from demonstrations, deepens our understanding of the varied roles different types of tokens play in LLMs, and provides insights for avoiding instability from improperly utilizing task-encoding tokens.
arXiv Detail & Related papers (2024-01-20T20:55:21Z) - SpecGen: Automated Generation of Formal Program Specifications via Large Language Models [20.36964281778921]
SpecGen is a novel technique for formal program specification generation based on Large Language Models.
We evaluate SpecGen on two datasets, including the SV-COMP 279 benchmark and a manually constructed dataset.
arXiv Detail & Related papers (2024-01-16T20:13:50Z) - How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering [52.86931192259096]
Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases.
Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance.
arXiv Detail & Related papers (2024-01-11T09:27:50Z) - Validation of Rigorous Requirements Specifications and Document Automation with the ITLingo RSL Language [0.0]
The ITLingo initiative has introduced a requirements specification language named RSL to enhance the rigor and consistency of technical documentation.
This paper reviews existing research and tools in the fields of requirements validation and document automation.
We propose to extend RSL with validation of specifications based on customized checks, and on linguistic rules dynamically defined in the RSL itself.
arXiv Detail & Related papers (2023-12-17T21:39:26Z) - Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting [68.19544657508509]
Large language models (LLMs) are adopted as a fundamental component of language technologies.
We find that several widely used open-source LLMs are extremely sensitive to subtle changes in prompt format in few-shot settings.
We propose an algorithm that rapidly evaluates a sampled set of plausible prompt formats for a given task, and reports the interval of expected performance without accessing model weights.
arXiv Detail & Related papers (2023-10-17T15:03:30Z) - Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z) - nl2spec: Interactively Translating Unstructured Natural Language to Temporal Logics with Large Language Models [3.1143846686797314]
We present nl2spec, a framework for applying Large Language Models (LLMs) to derive formal specifications from unstructured natural language.
We introduce a new methodology to detect and resolve the inherent ambiguity of system requirements in natural language.
Users iteratively add, delete, and edit these sub-translations to amend erroneous formalizations, which is easier than manually redrafting the entire formalization.
arXiv Detail & Related papers (2023-03-08T20:08:53Z) - Prompting Language Models for Linguistic Structure [73.11488464916668]
We present a structured prompting approach for linguistic structured prediction tasks.
We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking.
We find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels.
arXiv Detail & Related papers (2022-11-15T01:13:39Z) - Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt [98.26682501616024]
We propose a novel model that uses a unified prompt for all languages, called UniPrompt.
The unified prompt is computed by a multilingual PLM to produce a language-independent representation.
Our proposed methods can significantly outperform the strong baselines across different languages.
arXiv Detail & Related papers (2022-02-23T11:57:52Z) - From English to Signal Temporal Logic [7.6398837478968025]
We propose DeepSTL, a tool and technique for the translation of informal requirements, given as free English sentences, into Signal Temporal Logic (STL), a formal specification language for cyber-physical systems.
A major challenge to devise such a translator is the lack of publicly available informal requirements and formal specifications.
In the second step, we use a state-of-the-art transformer-based neural translation technique to train an accurate attentional translator from English to STL.
arXiv Detail & Related papers (2021-09-21T16:13:29Z)
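To make DeepSTL's target language concrete, the sketch below encodes a few Signal Temporal Logic constructs as plain Python dataclasses and prints a formula in the conventional G_[a,b] / F_[a,b] notation. This is a generic, minimal STL encoding written for illustration; it is not DeepSTL's implementation, and the signal name and time bounds are invented.

```python
# Minimal illustrative encoding of STL formulas; generic, not DeepSTL's code.
from dataclasses import dataclass
from typing import Union

@dataclass
class Pred:
    signal: str       # e.g. "speed"
    op: str           # one of "<", "<=", ">", ">=", "=="
    threshold: float

@dataclass
class Globally:       # G_[a,b] phi : phi holds at every time in [a,b]
    a: float
    b: float
    phi: "Formula"

@dataclass
class Eventually:     # F_[a,b] phi : phi holds at some time in [a,b]
    a: float
    b: float
    phi: "Formula"

Formula = Union[Pred, Globally, Eventually]

def to_str(f: Formula) -> str:
    """Render a formula in conventional STL notation."""
    if isinstance(f, Pred):
        return f"({f.signal} {f.op} {f.threshold})"
    if isinstance(f, Globally):
        return f"G_[{f.a},{f.b}] {to_str(f.phi)}"
    return f"F_[{f.a},{f.b}] {to_str(f.phi)}"

# "For the next 10 seconds, the speed stays below 5."
spec = Globally(0, 10, Pred("speed", "<", 5.0))
print(to_str(spec))   # G_[0,10] (speed < 5.0)
```

A structured representation like this is the kind of output a natural-language-to-STL translator must produce, which is why the lack of paired English/STL data is the central obstacle the DeepSTL authors address.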