ChatGPT for PLC/DCS Control Logic Generation
- URL: http://arxiv.org/abs/2305.15809v1
- Date: Thu, 25 May 2023 07:46:53 GMT
- Title: ChatGPT for PLC/DCS Control Logic Generation
- Authors: Heiko Koziolek, Sten Gruener, Virendra Ashiwal
- Abstract summary: Large language models (LLMs) providing generative AI have become popular to support software engineers in creating, summarizing, optimizing, and documenting source code.
It is still unknown how LLMs can support control engineers using typical control programming languages in programming tasks.
We created 100 LLM prompts in 10 representative categories to analyze control logic generation for PLCs and DCS from natural language.
- Score: 1.773257587850857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) providing generative AI have become popular to
support software engineers in creating, summarizing, optimizing, and
documenting source code. It is still unknown how LLMs can support control
engineers using typical control programming languages in programming tasks.
Researchers have explored GitHub Copilot and DeepMind AlphaCode for source code
generation but did not yet tackle control logic programming. The contribution
of this paper is an exploratory study, for which we created 100 LLM prompts in
10 representative categories to analyze control logic generation for PLCs
and DCS from natural language. We tested the prompts by generating answers with
ChatGPT using the GPT-4 LLM. It generated syntactically correct IEC 61131-3
Structured Text code in many cases and demonstrated useful reasoning skills
that could boost control engineer productivity. Our prompt collection is the
basis for a more formal LLM benchmark to test and compare such models for
control logic generation.
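For illustration, below is a minimal sketch of the kind of IEC 61131-3 Structured Text such prompts target. The task ("run the fill pump until the tank's high-level switch trips") and all identifiers are hypothetical and are not taken from the paper's prompt collection.

(* Hypothetical example: the kind of IEC 61131-3 Structured Text a
   natural-language prompt about simple tank filling might yield. *)
FUNCTION_BLOCK FB_TankFill
VAR_INPUT
    bStart     : BOOL;  (* operator start command *)
    bLevelHigh : BOOL;  (* high-level switch input *)
END_VAR
VAR_OUTPUT
    bPumpOn    : BOOL;  (* pump run command *)
END_VAR

IF bLevelHigh THEN
    bPumpOn := FALSE;   (* interlock: stop the pump on high level *)
ELSIF bStart THEN
    bPumpOn := TRUE;    (* fill while the start command is active *)
END_IF;
END_FUNCTION_BLOCK

Checking whether generated code of this shape compiles and respects such interlocks is the kind of assessment the paper's 10 prompt categories are designed to support.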
Related papers
- Adaptable Logical Control for Large Language Models [68.27725600175013]
Ctrl-G is an adaptable framework that facilitates tractable and flexible control of model generation at inference time.
We show that Ctrl-G, when applied to a TULU2-7B model, outperforms GPT3.5 and GPT4 on the task of interactive text editing.
arXiv Detail & Related papers (2024-06-19T23:47:59Z) - InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models [56.723509505549536]
InfiBench is, to our knowledge, the first large-scale freeform question-answering (QA) benchmark for code.
It comprises 234 carefully selected high-quality Stack Overflow questions spanning 15 programming languages.
We conduct a systematic evaluation of over 100 recent code LLMs on InfiBench, leading to a series of novel and insightful findings.
arXiv Detail & Related papers (2024-03-11T02:06:30Z) - StepCoder: Improve Code Generation with Reinforcement Learning from
Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation, consisting of two main components.
CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks.
FGO optimizes the model only on executed code by masking the unexecuted code segments, providing Fine-Grained Optimization.
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks.
arXiv Detail & Related papers (2024-02-02T13:14:31Z) - Using LLM such as ChatGPT for Designing and Implementing a RISC
Processor: Execution,Challenges and Limitations [11.07566083431614]
The paper reviews the associated steps such as parsing, tokenization, encoding, attention mechanism, sampling the tokens and iterations during code generation.
The generated code for the RISC components is verified through testbenches and hardware implementation on an FPGA board.
arXiv Detail & Related papers (2024-01-18T20:14:10Z) - If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code
Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z) - CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model [58.127534002232096]
This paper introduces CodeFuse-13B, an open-sourced pre-trained code LLM.
It is specifically designed for code-related tasks with both English and Chinese prompts.
CodeFuse achieves its effectiveness by utilizing a high-quality pre-training dataset.
arXiv Detail & Related papers (2023-10-10T02:38:44Z) - LLM4VV: Developing LLM-Driven Testsuite for Compiler Validation [7.979116939578324]
Large language models (LLMs) are a powerful tool for a wide span of applications involving natural language.
We explore the capabilities of state-of-the-art LLMs, including the open-source models Meta Codellama, Phind's fine-tuned version of Codellama, and Deepseek Coder, as well as the closed-source models OpenAI GPT-3.5-Turbo and GPT-4-Turbo.
arXiv Detail & Related papers (2023-10-08T01:43:39Z) - Do Large Language Models Pay Similar Attention Like Human Programmers When Generating Code? [10.249771123421432]
We investigate whether Large Language Models (LLMs) attend to the same parts of a task description as human programmers during code generation.
We manually analyzed 211 incorrect code snippets and found five attention patterns that can be used to explain many code generation errors.
Our findings highlight the need for human-aligned LLMs for better interpretability and programmer trust.
arXiv Detail & Related papers (2023-06-02T00:57:03Z) - Analysis of ChatGPT on Source Code [1.3381749415517021]
This paper explores the use of Large Language Models (LLMs) and in particular ChatGPT in programming, source code analysis, and code generation.
LLMs and ChatGPT are built using machine learning and artificial intelligence techniques, and they offer several benefits to developers and programmers.
arXiv Detail & Related papers (2023-06-01T12:12:59Z) - CodeTF: One-stop Transformer Library for State-of-the-art Code LLM [72.1638273937025]
We present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence.
Our library supports a collection of pretrained Code LLM models and popular code benchmarks.
We hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering.
arXiv Detail & Related papers (2023-05-31T05:24:48Z) - Benchmarking Large Language Models for Automated Verilog RTL Code
Generation [21.747037230069854]
We characterize the ability of large language models (LLMs) to generate useful Verilog.
We construct an evaluation framework comprising test-benches for functional analysis and a flow to test the syntax of Verilog code.
Our findings show that across our problem scenarios, fine-tuning yields LLMs that are more capable of producing syntactically correct code.
arXiv Detail & Related papers (2022-12-13T16:34:39Z)