K-ST: A Formal Executable Semantics of the Structured Text Language for
PLCs
- URL: http://arxiv.org/abs/2202.04076v2
- Date: Tue, 12 Sep 2023 02:05:17 GMT
- Title: K-ST: A Formal Executable Semantics of the Structured Text Language for
PLCs
- Authors: Kun Wang, Jingyi Wang, Christopher M. Poskitt, Xiangxiang Chen, Jun
Sun, and Peng Cheng
- Abstract summary: We develop K-ST, a formal executable semantics for Structured Text (ST) in the K framework.
K-ST is a high-level reference semantics that can be used to evaluate the correctness and consistency of different ST implementations.
We validate K-ST by executing 509 ST programs extracted from GitHub and comparing the results against existing commercial compilers.
- Score: 10.993724354322657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Programmable Logic Controllers (PLCs) are responsible for automating process
control in many industrial systems (e.g. in manufacturing and public
infrastructure), and thus it is critical to ensure that they operate correctly
and safely. The majority of PLCs are programmed in languages such as Structured
Text (ST). However, a lack of formal semantics makes it difficult to ascertain
the correctness of their translators and compilers, which vary from
vendor to vendor. In this work, we develop K-ST, a formal executable semantics
for ST in the K framework. Defined with respect to the IEC 61131-3 standard and
PLC vendor manuals, K-ST is a high-level reference semantics that can be used
to evaluate the correctness and consistency of different ST implementations. We
validate K-ST by executing 509 ST programs extracted from GitHub and comparing
the results against existing commercial compilers (i.e., CODESYS,
CX-Programmer, and GX Works2). We then apply K-ST to validate the
implementation of the open source OpenPLC platform, comparing the executions of
several test programs to uncover five bugs and nine functional defects in the
compiler.
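
To make the validation workflow concrete, the sketch below shows one way such differential testing could be scripted: each ST program is executed both against the K-ST reference semantics and with an implementation under test, and any disagreement is flagged for inspection. The command names, flags, directory layout, and output format here are illustrative assumptions, not the authors' actual harness; `krun` is the K framework's program runner, but the exact invocation for K-ST and the vendor toolchain wrapper are hypothetical.

```python
import subprocess
from pathlib import Path

# Hypothetical commands; the paper's actual harness and the exact
# krun / vendor-compiler invocations are assumptions for illustration.
KST_CMD = ["krun", "--definition", "k-st-kompiled"]   # assumed K-ST reference runner
VENDOR_CMD = ["vendor-st-run"]                        # assumed vendor toolchain wrapper


def run(cmd: list[str], program: Path) -> str:
    """Run a command on an ST program and return its normalized output."""
    result = subprocess.run(cmd + [str(program)],
                            capture_output=True, text=True, timeout=60)
    return result.stdout.strip()


def differential_test(programs_dir: str) -> list[Path]:
    """Compare K-ST reference executions against an ST implementation.

    Returns the programs whose outputs disagree; such disagreements are
    the starting point for diagnosing compiler bugs or semantic gaps.
    """
    mismatches = []
    for program in sorted(Path(programs_dir).glob("*.st")):
        reference = run(KST_CMD, program)   # behavior per the formal semantics
        vendor = run(VENDOR_CMD, program)   # behavior of the implementation under test
        if reference != vendor:
            mismatches.append(program)
    return mismatches


if __name__ == "__main__":
    for p in differential_test("st_programs"):
        print(f"Disagreement on {p}: candidate bug or semantic inconsistency")
```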
Related papers
- EquiBench: Benchmarking Code Reasoning Capabilities of Large Language Models via Equivalence Checking [54.354203142828084]
We present the task of equivalence checking as a new way to evaluate the code reasoning abilities of large language models.
We introduce EquiBench, a dataset of 2400 program pairs spanning four programming languages and six equivalence categories.
Our evaluation of 17 state-of-the-art LLMs shows that OpenAI o3-mini achieves the highest overall accuracy of 78.0%.
arXiv Detail & Related papers (2025-02-18T02:54:25Z)
- A Multi-Agent Framework for Extensible Structured Text Generation in PLCs [9.555744065377148]
A high-level language adhering to the IEC 61131-3 standard is pivotal for PLCs.
The lack of comprehensive and standardized documentation for the full semantics of ST has contributed to inconsistencies in how the language is implemented.
We present AutoPLC, an LLM-based approach designed to automate the generation of vendor-specific ST code.
arXiv Detail & Related papers (2024-12-03T12:05:56Z)
- SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition [77.28814034644287]
We propose SVTRv2, a CTC model that beats leading EDTRs in both accuracy and inference speed.
SVTRv2 introduces novel upgrades to handle text irregularity and utilize linguistic context.
We evaluate SVTRv2 on both standard and recent challenging benchmarks.
arXiv Detail & Related papers (2024-11-24T14:21:35Z)
- Evaluating the Ability of Large Language Models to Generate Verifiable Specifications in VeriFast [5.019791860882564]
This paper explores the effectiveness of OpenAI's GPT-4o model in generating specifications for C programs that are verifiable with VeriFast.
Our results indicate that the specifications generated by GPT-4o preserve functional behavior, but are often not verifiable.
arXiv Detail & Related papers (2024-11-04T17:44:11Z)
- Training LLMs for Generating IEC 61131-3 Structured Text with Online Feedback [0.0]
This paper proposes an approach to fine-tune LLMs for the generation of IEC 61131-3 Structured Text (ST) code.
The framework is highly suitable for industrial automation applications and outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-10-29T15:54:09Z)
- Agents4PLC: Automating Closed-loop PLC Code Generation and Verification in Industrial Control Systems using LLM-based Agents [27.097029139195943]
Agents4PLC is a novel framework that automates PLC code generation and code-level verification.
We first establish a benchmark for the area of verifiable PLC code generation.
We then transition from natural language requirements to human-written, verified formal specifications and reference PLC code.
arXiv Detail & Related papers (2024-10-18T06:51:13Z)
- NoviCode: Generating Programs from Natural Language Utterances by Novices [59.71218039095155]
We present NoviCode, a novel NL Programming task which takes as input an API and a natural language description by a novice non-programmer.
We show that NoviCode is indeed a challenging task in the code synthesis domain, and that generating complex code from non-technical instructions goes beyond the current Text-to-Code paradigm.
arXiv Detail & Related papers (2024-07-15T11:26:03Z)
- C-LLM: Learn to Check Chinese Spelling Errors Character by Character [61.53865964535705]
We propose C-LLM, a Large Language Model-based Chinese Spell Checking method that learns to check errors Character by Character.
C-LLM achieves an average improvement of 10% over existing methods.
arXiv Detail & Related papers (2024-06-24T11:16:31Z)
- Towards Using Behavior Trees in Industrial Automation Controllers [41.94295877935867]
The Industry 4.0 paradigm manifests the shift towards mass customization and cyber-physical production systems.
There is a lack of PLC software flexibility and integration between low-level programs and high-level task-oriented control frameworks.
This paper proposes an approach for improving the industrial control software design by integrating Behavior Trees into PLC programs.
arXiv Detail & Related papers (2024-04-22T09:47:36Z)
- Guess & Sketch: Language Model Guided Transpilation [59.02147255276078]
Learned transpilation offers an alternative to manual re-writing and engineering efforts.
Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness.
Guess & Sketch extracts alignment and confidence information from features of the LM, then passes it to a symbolic solver to resolve semantic equivalence.
arXiv Detail & Related papers (2023-09-25T15:42:18Z)
- Structured Chain-of-Thought Prompting for Code Generation [48.43888515848583]
Chain-of-Thought (CoT) prompting is the state-of-the-art prompting technique.
We propose Structured CoTs (SCoTs) and present a novel prompting technique for code generation, named SCoT prompting.
arXiv Detail & Related papers (2023-05-11T06:43:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.