A Multi-Agent Framework for Extensible Structured Text Generation in PLCs
- URL: http://arxiv.org/abs/2412.02410v1
- Date: Tue, 03 Dec 2024 12:05:56 GMT
- Title: A Multi-Agent Framework for Extensible Structured Text Generation in PLCs
- Authors: Donghao Yang, Aolang Wu, Tianyi Zhang, Li Zhang, Fang Liu, Xiaoli Lian, Yuming Ren, Jiaji Tian
- Abstract summary: Structured Text (ST), a high-level language adhering to the IEC 61131-3 standard, is pivotal for PLCs.
The lack of comprehensive and standardized documentation for the full semantics of ST has contributed to inconsistencies in how the language is implemented.
We present AutoPLC, an LLM-based approach designed to automate the generation of vendor-specific ST code.
- Score: 9.555744065377148
- Abstract: Programmable Logic Controllers (PLCs) are microcomputers essential for automating factory operations. Structured Text (ST), a high-level language adhering to the IEC 61131-3 standard, is pivotal for PLCs due to its ability to express logic succinctly and to seamlessly integrate with other languages within the same standard. However, vendors develop their own customized versions of ST, and the lack of comprehensive and standardized documentation for the full semantics of ST has contributed to inconsistencies in how the language is implemented. Consequently, the steep learning curve associated with ST, combined with ever-evolving industrial requirements, presents significant challenges for developers. In response to these issues, we present AutoPLC, an LLM-based approach designed to automate the generation of vendor-specific ST code. To facilitate effective code generation, we first built a comprehensive knowledge base, including the Rq2ST Case Library (requirements and corresponding implementations) and Instruction libraries. We then developed a retrieval module to incorporate the domain-specific knowledge by identifying pertinent cases and instructions, guiding the LLM to generate code that meets the requirements. To verify and improve the quality of the generated code, we designed an adaptable code checker. If errors are detected, we initiate an iterative self-improvement process to instruct the LLM to revise the generated code. We evaluate AutoPLC's performance against seven state-of-the-art baselines using three benchmarks, one for open-source basic ST and two for commercial Structured Control Language (SCL) from Siemens. The results show that our approach consistently achieves superior performance across all benchmarks. An ablation study emphasizes the significance of our modules, and further manual analysis confirms the practical utility of the ST code generated by AutoPLC.
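The pipeline described in the abstract (retrieve pertinent cases and instructions, generate, check, then iteratively repair) can be summarized in a minimal Python sketch. Every name below (call_llm, retrieve_similar_cases, check_st_code, CheckResult) is an illustrative assumption rather than AutoPLC's actual API, and the instruction-library lookup is omitted for brevity:

```python
from dataclasses import dataclass


@dataclass
class CheckResult:
    ok: bool
    errors: list[str]


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; not tied to a specific provider."""
    raise NotImplementedError


def retrieve_similar_cases(requirement: str, case_library: list[dict], k: int = 3) -> list[dict]:
    """Hypothetical retrieval over a Rq2ST-style case library (e.g., by embedding similarity)."""
    return case_library[:k]


def check_st_code(code: str) -> CheckResult:
    """Hypothetical adapter around a vendor-specific ST/SCL syntax and semantics checker."""
    return CheckResult(ok=True, errors=[])


def generate_st(requirement: str, case_library: list[dict], max_rounds: int = 3) -> str:
    cases = retrieve_similar_cases(requirement, case_library)
    prompt = (
        "Generate IEC 61131-3 Structured Text for the requirement below.\n"
        f"Requirement: {requirement}\n"
        "Similar solved cases:\n" + "\n".join(c["st_code"] for c in cases)
    )
    code = call_llm(prompt)
    for _ in range(max_rounds):  # iterative self-improvement loop
        result = check_st_code(code)
        if result.ok:
            break
        code = call_llm(
            prompt
            + "\nThe previous attempt failed these checks:\n" + "\n".join(result.errors)
            + "\nPrevious code:\n" + code
            + "\nPlease revise the code to fix the errors."
        )
    return code
```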
Related papers
- Exploring Code Language Models for Automated HLS-based Hardware Generation: Benchmark, Infrastructure and Analysis [49.998130983414924]
Large language models (LLMs) can be employed for programming languages such as Python and C++.
This paper explores leveraging LLMs to generate High-Level Synthesis (HLS)-based hardware design.
arXiv Detail & Related papers (2025-02-19T17:53:59Z)
- Towards Specification-Driven LLM-Based Generation of Embedded Automotive Software [0.4369550829556578]
The paper studies how code generation by LLMs can be combined with formal verification to produce critical embedded software.
The goal is to automatically generate industrial-quality code from specifications only.
arXiv Detail & Related papers (2024-11-20T12:38:17Z)
- Training LLMs for Generating IEC 61131-3 Structured Text with Online Feedback [0.0]
This paper proposes an approach to fine-tune LLMs for the generation of IEC 61131-3 Structured Text (ST) code.
The framework is highly suitable for industrial automation applications and outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-10-29T15:54:09Z)
- Agents4PLC: Automating Closed-loop PLC Code Generation and Verification in Industrial Control Systems using LLM-based Agents [27.097029139195943]
Agents4PLC is a novel framework that automates PLC code generation and code-level verification.
We first establish a benchmark for verifiable PLC code generation.
We then transition from natural language requirements to human-written, verified formal specifications and reference PLC code.
arXiv Detail & Related papers (2024-10-18T06:51:13Z)
- LLM can Achieve Self-Regulation via Hyperparameter Aware Generation [88.69052513433603]
Large Language Models (LLMs) employ diverse decoding strategies to control the generated text.
Are LLMs conscious of the existence of these decoding strategies and capable of regulating themselves?
We propose a novel text generation paradigm termed Hyperparameter Aware Generation (HAG).
arXiv Detail & Related papers (2024-02-17T11:18:22Z)
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation, consisting of two main components.
CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks.
FGO optimizes the model only on the executed code segments, masking out unexecuted ones to provide Fine-Grained Optimization.
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches on the corresponding benchmarks.
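A minimal PyTorch-style sketch of the fine-grained masking idea above, shown with a simple token-level loss rather than the paper's reinforcement-learning objective; the function name and tensor shapes are assumptions for illustration, not StepCoder's actual implementation:

```python
import torch
import torch.nn.functional as F


def masked_code_loss(logits: torch.Tensor,
                     target_ids: torch.Tensor,
                     executed_mask: torch.Tensor) -> torch.Tensor:
    """Only tokens belonging to executed code segments contribute to the loss.

    logits:        (seq_len, vocab_size) model outputs
    target_ids:    (seq_len,) generated token ids
    executed_mask: (seq_len,) 1.0 for tokens in executed segments, 0.0 otherwise
    """
    per_token = F.cross_entropy(logits, target_ids, reduction="none")
    masked = per_token * executed_mask
    # Average over executed tokens only, guarding against an all-zero mask.
    return masked.sum() / executed_mask.sum().clamp(min=1.0)
```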
arXiv Detail & Related papers (2024-02-02T13:14:31Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- Bridging Code Semantic and LLMs: Semantic Chain-of-Thought Prompting for Code Generation [22.219645213202178]
This paper proposes the "Semantic Chain-of-Thought" approach, named SeCoT, to introduce semantic information of code.
We show that SeCoT achieves state-of-the-art performance, greatly improving the potential of large models for code generation.
arXiv Detail & Related papers (2023-10-16T05:09:58Z)
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM [72.1638273937025]
We present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence.
Our library supports a collection of pretrained Code LLM models and popular code benchmarks.
We hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering.
arXiv Detail & Related papers (2023-05-31T05:24:48Z)
- Structured Chain-of-Thought Prompting for Code Generation [48.43888515848583]
Chain-of-Thought (CoT) prompting is the state-of-the-art prompting technique.
We propose Structured CoTs (SCoTs) and present a novel prompting technique for code generation, named SCoT prompting.
arXiv Detail & Related papers (2023-05-11T06:43:37Z)
- K-ST: A Formal Executable Semantics of the Structured Text Language for PLCs [10.993724354322657]
We develop K-ST, a formal executable semantics for Structured Text (ST) in the K framework.
K-ST is a high-level reference semantics that can be used to evaluate the correctness and consistency of different ST implementations.
We validate K-ST by executing 509 ST programs extracted from Github and comparing the results against existing commercial compilers.
arXiv Detail & Related papers (2022-02-08T17:34:08Z)