LLM4SFC: Sequential Function Chart Generation via Large Language Models
- URL: http://arxiv.org/abs/2512.06787v1
- Date: Sun, 07 Dec 2025 11:02:45 GMT
- Title: LLM4SFC: Sequential Function Chart Generation via Large Language Models
- Authors: Ofek Glick, Vladimir Tchuiev, Marah Ghoummaid, Michal Moshkovitz, Dotan Di-Castro,
- Abstract summary: We introduce LLM4SFC, the first framework to receive natural-language descriptions of industrial workflows and provide executable SFCs. We evaluate LLM4SFC on a dataset of real-world SFCs from automated manufacturing projects, using both open-source and proprietary LLMs.
- Score: 11.156827035309407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While Large Language Models (LLMs) are increasingly used for synthesizing textual PLC programming languages such as Structured Text (ST), other IEC 61131-3 standard graphical languages like Sequential Function Charts (SFCs) remain underexplored. Generating SFCs is challenging due to their graphical nature and the ST actions embedded within them, which are not directly compatible with standard generation techniques, often leading to non-executable code that is incompatible with industrial tool-chains. In this work, we introduce LLM4SFC, the first framework to receive natural-language descriptions of industrial workflows and provide executable SFCs. LLM4SFC is based on three components: (i) a reduced structured representation that captures the essential topology and in-line ST while reducing textual verbosity; (ii) fine-tuning and few-shot retrieval-augmented generation (RAG) for alignment with SFC programming conventions; and (iii) a structured generation approach that prunes illegal tokens in real time to ensure compliance with the textual format of SFCs. We evaluate LLM4SFC on a dataset of real-world SFCs from automated manufacturing projects, using both open-source and proprietary LLMs. The results show that LLM4SFC reliably generates syntactically valid SFC programs, effectively bridging graphical and textual PLC languages with a generation success rate of 75%-94% and paving the way for automated industrial programming.
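The paper does not publish its grammar or reduced SFC format, but component (iii), pruning illegal tokens in real time, can be illustrated with a minimal constrained-decoding sketch. Everything below (the `STEP`/`TRANSITION`/`END` vocabulary, the transition rules, the stub scoring function) is a hypothetical stand-in, not the paper's actual representation:

```python
# Minimal sketch of structured generation with real-time token pruning,
# in the spirit of LLM4SFC's component (iii). The grammar below is a toy
# stand-in: a chart opens with a step, alternates steps and transitions,
# and ends with END.
VOCAB = ["STEP", "TRANSITION", "END"]

# allowed[state] -> set of legal next tokens (hypothetical rules)
ALLOWED = {
    "start":      {"STEP"},               # a chart must open with a step
    "step":       {"TRANSITION", "END"},  # a step leads to a transition or the end
    "transition": {"STEP"},               # a transition must lead to a step
}

NEXT_STATE = {"STEP": "step", "TRANSITION": "transition", "END": "done"}

def prune_and_pick(state, scores):
    """Mask tokens illegal in `state`, then pick the best legal one."""
    legal = {t: s for t, s in scores.items() if t in ALLOWED[state]}
    return max(legal, key=legal.get)

def generate(score_fn, max_len=6):
    """Greedy constrained decoding: the model proposes scores, the grammar prunes."""
    state, out = "start", []
    while state != "done" and len(out) < max_len:
        tok = prune_and_pick(state, score_fn(out))
        out.append(tok)
        state = NEXT_STATE[tok]
    return out

# A fake "model" that always prefers END; pruning still forces a valid chart.
fake_scores = lambda prefix: {"STEP": 0.2, "TRANSITION": 0.3, "END": 0.5}
print(generate(fake_scores))  # ['STEP', 'END']
```

Even with an adversarial scorer, the output can never start with a transition or dangle after one, which is the point of pruning at decode time rather than validating afterwards.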
Related papers
- MaDiS: Taming Masked Diffusion Language Models for Sign Language Generation [78.75809158246723]
We present MaDiS, a masked-diffusion-based language model for SLG that captures bidirectional context and supports efficient parallel multi-token generation. We also introduce a tri-level cross-modal pretraining scheme that jointly learns from token-, latent-, and 3D-space objectives. MaDiS achieves superior performance across multiple metrics, including DTW error and two newly introduced metrics, SiBLEU and SiCLIP, while reducing inference latency by nearly 30%.
arXiv Detail & Related papers (2026-01-27T13:06:47Z) - VL-JEPA: Joint Embedding Predictive Architecture for Vision-language [54.86811250366009]
We introduce VL-JEPA, a vision-language model built on a Joint Embedding Predictive Architecture (JEPA). By learning in an abstract representation space, the model focuses on task-relevant semantics while abstracting away surface-level linguistic variability. At inference time, a lightweight text decoder is invoked only when needed to translate VL-JEPA's predicted embeddings into text.
arXiv Detail & Related papers (2025-12-11T18:59:22Z) - IFEvalCode: Controlled Code Generation [69.28317223249358]
The paper introduces forward and backward constraints generation to improve the instruction-following capabilities of Code LLMs. The authors present IFEvalCode, a multilingual benchmark comprising 1.6K test samples across seven programming languages.
arXiv Detail & Related papers (2025-07-30T08:08:48Z) - CrossPL: Evaluating Large Language Models on Cross Programming Language Code Generation [24.468767564264738]
We present CrossPL, the first benchmark designed to evaluate large language models' (LLMs) ability to generate cross-programming-language (CPL) code. CrossPL comprises 1,982 tasks centered around IPC, covering six widely-used programming languages and seven representative CPL techniques. We evaluate 14 state-of-the-art general-purpose LLMs and 6 code-oriented LLMs released in the past three years on CrossPL via FSM-based validation.
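FSM-based validation means checking a generated program's behavior trace against a finite state machine of the expected protocol. CrossPL's actual FSMs are not given here; the toy socket-style IPC protocol below (connect, then any number of sends, then close) is purely illustrative:

```python
# Illustrative FSM validator: accept an event trace only if every
# transition is legal and the trace ends in an accepting state.
TRANSITIONS = {
    ("idle", "connect"): "open",
    ("open", "send"):    "open",
    ("open", "close"):   "idle",
}

def validate(events, start="idle", accept=frozenset({"idle"})):
    """Return True iff the event trace is accepted by the FSM."""
    state = start
    for ev in events:
        key = (state, ev)
        if key not in TRANSITIONS:
            return False          # illegal transition -> invalid trace
        state = TRANSITIONS[key]
    return state in accept

print(validate(["connect", "send", "send", "close"]))  # True
print(validate(["send", "close"]))                     # False: send before connect
```

The appeal of FSM checking for cross-language code is that it validates observable behavior, so the same machine can score a Python sender and a Go receiver alike.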
arXiv Detail & Related papers (2025-07-26T10:28:39Z) - SAFT: Structure-Aware Fine-Tuning of LLMs for AMR-to-Text Generation [50.277959544420455]
SAFT is a structure-aware fine-tuning approach that injects graph topology into pretrained language models. We compute direction-sensitive positional encodings from the magnetic Laplacian of transformed AMRs. SAFT sets a new state-of-the-art on AMR 3.0 with a 3.5 BLEU improvement over baselines.
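The magnetic Laplacian underlying those encodings is a Hermitian matrix whose complex phases record edge direction. A minimal sketch for a toy three-node directed path, using a common formulation `L_q = D_s - A_s ⊙ exp(i·2πq·(A - Aᵀ))` (the graph, charge parameter `q`, and this exact variant are illustrative assumptions, not SAFT's reported setup):

```python
import cmath

# Magnetic Laplacian of a tiny directed graph: symmetrize the adjacency
# for magnitudes, and encode edge direction in a complex phase.
q = 0.25                       # charge parameter controlling direction sensitivity
edges = [(0, 1), (1, 2)]       # directed 3-node path graph
n = 3

A = [[0.0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = 1.0

# Symmetrized adjacency and its degrees
As = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
deg = [sum(row) for row in As]

# L[i][j] = deg[i]*delta_ij - As[i][j] * exp(i * 2*pi*q * (A[i][j] - A[j][i]))
L = [[(deg[i] if i == j else 0)
      - As[i][j] * cmath.exp(1j * 2 * cmath.pi * q * (A[i][j] - A[j][i]))
      for j in range(n)] for i in range(n)]

# Hermitian by construction, so it has real eigenvalues, while the phases
# still distinguish i->j from j->i.
assert all(abs(L[i][j] - L[j][i].conjugate()) < 1e-12
           for i in range(n) for j in range(n))
```

Eigenvectors of this matrix then serve as direction-sensitive positional features, analogously to how ordinary Laplacian eigenvectors are used for undirected graphs.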
arXiv Detail & Related papers (2025-07-15T18:12:57Z) - Position Paper: Programming Language Techniques for Bridging LLM Code Generation Semantic Gaps [3.61356888205659]
This paper argues that principled integration of Programming Language techniques is essential for bridging semantic gaps in large language models. PL techniques can elevate LLM-generated code from statistical pattern matching to truly reliable and trustworthy levels.
arXiv Detail & Related papers (2025-07-12T04:32:15Z) - DecoRTL: A Run-time Decoding Framework for RTL Code Generation with LLMs [0.0]
We show that large language models (LLMs) exhibit low confidence in regions of structural ambiguity or semantic complexity. We introduce DecoRTL, a novel run-time decoding strategy that is both syntax-aware and contrastive for RTL code generation. Our approach operates entirely at inference time without requiring any additional model fine-tuning.
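One way to act on that low-confidence observation at inference time is confidence-adaptive decoding: measure the entropy of the next-token distribution and decode more conservatively where the model is uncertain. The thresholds, probabilities, and Verilog-flavored tokens below are illustrative only, not DecoRTL's actual parameters:

```python
import math

# Sketch of confidence-adaptive decoding: lower the sampling temperature
# (sharpen the distribution) in high-entropy, i.e. ambiguous, regions.
def entropy(probs):
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

def adaptive_temperature(probs, low=0.3, high=1.0, threshold=1.0):
    """High entropy (structurally ambiguous region) -> decode conservatively."""
    return low if entropy(probs) > threshold else high

ambiguous = {"wire": 0.35, "reg": 0.33, "logic": 0.32}   # near-uniform
confident = {"endmodule": 0.97, "end": 0.02, ";": 0.01}  # sharply peaked

print(adaptive_temperature(ambiguous))  # 0.3
print(adaptive_temperature(confident))  # 1.0
```

Because this only rescales logits before sampling, it composes with any model and needs no fine-tuning, matching the blurb's "entirely at inference time" claim.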
arXiv Detail & Related papers (2025-07-03T01:17:44Z) - AutoPLC: Generating Vendor-Aware Structured Text for Programmable Logic Controllers [9.209415852653386]
AutoPLC is a framework capable of automatically generating vendor-aware ST code from natural language requirements. It is implemented for the Siemens TIA Portal and the CODESYS platform. AutoPLC achieves 90%+ compilation success on our 914-task benchmark.
arXiv Detail & Related papers (2024-12-03T12:05:56Z) - Training LLMs for Generating IEC 61131-3 Structured Text with Online Feedback [0.0]
This paper proposes an approach to fine-tune LLMs for the generation of IEC 61131-3 Structured Text (ST) code. The framework is highly suitable for industrial automation applications and outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-10-29T15:54:09Z) - TransLLaMa: LLM-based Simultaneous Translation System [18.27477980076409]
We show that a Decoder-only large language model (LLMs) can control input segmentation directly by generating a special "wait" token.
This obviates the need for a separate policy and enables the LLM to perform English-German and English-Russian SiMT tasks.
We also evaluated closed-source models such as GPT-4, which displayed encouraging results in performing the SiMT task without prior training.
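The "wait"-token policy described above amounts to a read/write loop: the decoder either emits a target token or a special token asking for more source input. The stub "model" below (uppercasing as fake translation) is a hypothetical stand-in for the actual LLM:

```python
# Toy read/write loop illustrating a "wait"-token segmentation policy
# in the spirit of TransLLaMa. The model here is a hand-written stub.
WAIT = "<wait>"

def stub_model(source_so_far, target_so_far):
    """Emit one target word per source word; ask to WAIT once caught up."""
    if len(target_so_far) >= len(source_so_far):
        return WAIT                                       # need more source
    return source_so_far[len(target_so_far)].upper()      # fake "translation"

def simultaneous_translate(source_stream):
    source, target = [], []
    for word in source_stream:
        source.append(word)          # READ one source token from the stream
        while True:                  # WRITE until the model asks to wait
            out = stub_model(source, target)
            if out == WAIT:
                break
            target.append(out)
    return target

print(simultaneous_translate(["guten", "morgen"]))  # ['GUTEN', 'MORGEN']
```

Letting the model itself emit `WAIT` is what removes the need for a separate, externally trained segmentation policy.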
arXiv Detail & Related papers (2024-02-07T07:39:27Z) - Symmetrical Linguistic Feature Distillation with CLIP for Scene Text Recognition [77.93678598476149]
We establish a novel Symmetrical Linguistic Feature Distillation framework (named CLIP-OCR).
By cascading the CLIP image encoder with the reversed CLIP text encoder, a symmetrical structure is built with an image-to-text feature flow.
Extensive experiments demonstrate the effectiveness of CLIP-OCR with 93.8% average accuracy on six popular STR benchmarks.
arXiv Detail & Related papers (2023-10-08T04:00:20Z) - Gloss-free Sign Language Translation: Improving from Visual-Language Pretraining [56.26550923909137]
Gloss-Free Sign Language Translation (SLT) is a challenging task due to its cross-domain nature.
We propose a novel Gloss-Free SLT based on Visual-Language Pretraining (GFSLT-)
Our approach involves two stages: (i) integrating Contrastive Language-Image Pre-training with masked self-supervised learning to create pre-tasks that bridge the semantic gap between visual and textual representations and restore masked sentences, and (ii) constructing an end-to-end architecture with an encoder-decoder-like structure that inherits the parameters of the pre-trained Visual and Text Decoder from
arXiv Detail & Related papers (2023-07-27T10:59:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.