Structured Program Synthesis using LLMs: Results and Insights from the IPARC Challenge
- URL: http://arxiv.org/abs/2506.13820v1
- Date: Sun, 15 Jun 2025 04:33:00 GMT
- Title: Structured Program Synthesis using LLMs: Results and Insights from the IPARC Challenge
- Authors: Shraddha Surana, Ashwin Srinivasan, Michael Bain
- Abstract summary: The IPARC Challenge, inspired by ARC, provides controlled program synthesis tasks over synthetic images. This paper presents a structured inductive programming approach with LLMs that successfully solves tasks across all IPARC categories.
- Score: 1.4591178662983573
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The IPARC Challenge, inspired by ARC, provides controlled program synthesis tasks over synthetic images to evaluate automatic program construction, focusing on sequence, selection, and iteration. This set of 600 tasks has resisted automated solutions. This paper presents a structured inductive programming approach with LLMs that successfully solves tasks across all IPARC categories. The controlled nature of IPARC reveals insights into LLM-based code generation, including the importance of prior structuring, LLMs' ability to aid structuring (requiring human refinement), the need to freeze correct code, the efficiency of code reuse, and how LLM-generated code can spark human creativity. These findings suggest valuable mechanisms for human-LLM collaboration in tackling complex program synthesis.
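To make the three constructs concrete before the related work below, here is a minimal sketch, assuming hypothetical binary-image primitives (it is illustrative, not the authors' code), of an image-to-image program built from explicit sequence, selection, and iteration combinators of the kind the IPARC tasks exercise. Freezing a verified combinator and reusing it across tasks mirrors the "freeze correct code" and "code reuse" observations above.

```python
import numpy as np

def sequence(*steps):
    """Apply steps left to right (the 'sequence' construct)."""
    def run(img):
        for step in steps:
            img = step(img)
        return img
    return run

def select(predicate, if_true, if_false):
    """Branch on a property of the image (the 'selection' construct)."""
    return lambda img: if_true(img) if predicate(img) else if_false(img)

def iterate(step, until, max_steps=64):
    """Repeat a step until a condition holds (the 'iteration' construct)."""
    def run(img):
        for _ in range(max_steps):
            if until(img):
                break
            img = step(img)
        return img
    return run

def dilate(img):
    """Hypothetical primitive: 4-neighbour binary dilation."""
    out = img.copy()
    out[1:, :] |= img[:-1, :]; out[:-1, :] |= img[1:, :]
    out[:, 1:] |= img[:, :-1]; out[:, :-1] |= img[:, 1:]
    return out

invert = lambda img: 1 - img

program = sequence(
    select(lambda im: im.sum() > 4, invert, lambda im: im),
    iterate(dilate, until=lambda im: bool(im.all())),
)
print(program(np.eye(5, dtype=int)))
```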
Related papers
- LLM-Guided Compositional Program Synthesis [16.867355177975387]
Large language models (LLMs) can solve programming-by-example (PBE) tasks by generating code in different target languages, but they can fail unpredictably. We introduce a novel technique that recovers from failure by constructing simpler subtasks for the LLM to solve.
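A rough sketch of that recovery idea in a PBE setting follows; `llm_synthesize` is a hypothetical stand-in for prompting an LLM and turning its answer into a function, and how the intermediate "midpoint" values are discovered (the core of the technique) is elided here.

```python
from typing import Callable, Optional

Examples = list[tuple[str, str]]

def llm_synthesize(examples: Examples) -> Optional[Callable[[str], str]]:
    """Hypothetical: prompt an LLM with the examples, return a function."""
    ...

def satisfies(fn: Callable[[str], str], examples: Examples) -> bool:
    return all(fn(x) == y for x, y in examples)

def synthesize_with_recovery(examples: Examples,
                             subtasks: tuple[Examples, Examples]):
    """Try the full PBE task; on failure, solve two simpler subtasks
    (input -> midpoint, midpoint -> output) and compose the results."""
    fn = llm_synthesize(examples)
    if fn is not None and satisfies(fn, examples):
        return fn
    first, second = subtasks
    f, g = llm_synthesize(first), llm_synthesize(second)
    if f is not None and g is not None:
        composed = lambda x: g(f(x))
        if satisfies(composed, examples):
            return composed
    return None
```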
arXiv Detail & Related papers (2025-03-12T00:36:43Z)
- An Autonomous Network Orchestration Framework Integrating Large Language Models with Continual Reinforcement Learning [13.3347292702828]
This paper proposes a framework called Autonomous Reinforcement Coordination (ARC) for a semantic-communication-enabled space-air-ground integrated network (SemCom-enabled SAGIN). ARC decomposes orchestration into two tiers, using LLMs for high-level planning and RL agents for low-level decision-making, as sketched below.
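A compact sketch of such a two-tier loop; the planner and agent interfaces, the environment API, and the periodic replanning cadence are assumptions for illustration rather than the paper's design.

```python
class LLMPlanner:
    def plan(self, state) -> str:
        """Hypothetical: ask an LLM for a high-level orchestration goal."""
        ...

class RLAgent:
    def act(self, state, goal: str):
        """Hypothetical: low-level action from a trained RL policy."""
        ...

def orchestrate(env, planner: LLMPlanner, agent: RLAgent,
                horizon: int = 100, replan_every: int = 10):
    state = env.reset()
    goal = planner.plan(state)
    for t in range(horizon):
        if t and t % replan_every == 0:   # slow tier: LLM replans periodically
            goal = planner.plan(state)
        action = agent.act(state, goal)   # fast tier: RL acts every step
        state, _reward, done = env.step(action)
        if done:
            break
    return state
```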
arXiv Detail & Related papers (2025-02-22T11:53:34Z)
- Interactive and Expressive Code-Augmented Planning with Large Language Models [62.799579304821826]
Large Language Models (LLMs) demonstrate strong abilities in common-sense reasoning and interactive decision-making.
Recent techniques have sought to structure LLM outputs using control flow and other code-adjacent techniques to improve planning performance.
We propose REPL-Plan, an LLM planning approach that is fully code-expressive and dynamic.
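One plausible minimal reading of a code-expressive, dynamic planner is an LLM driving a Read-Eval-Print Loop; the sketch below (with `llm` a stand-in for a chat-completion call, and the DONE/RESULT convention purely hypothetical) shows outputs and errors being fed back to the model each turn.

```python
import contextlib
import io
import traceback

def repl_plan(llm, task: str, max_turns: int = 8):
    namespace: dict = {}
    transcript = f"Task: {task}\nWrite Python; results are echoed back.\n"
    for _ in range(max_turns):
        code = llm(transcript)          # LLM proposes the next code snippet
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(code, namespace)   # run it, keeping state across turns
            feedback = buf.getvalue() or "(ok, no output)"
        except Exception:
            feedback = traceback.format_exc(limit=1)  # errors go back too
        transcript += f"\n>>> {code}\n{feedback}\n"
        if namespace.get("DONE"):       # convention: a snippet sets DONE=True
            break
    return namespace.get("RESULT"), transcript
```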
arXiv Detail & Related papers (2024-11-21T04:23:17Z)
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze the MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a metric that quantitatively evaluates the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
- Genetic Instruct: Scaling up Synthetic Generation of Coding Instructions for Large Language Models [59.60208063956459]
Large Language Models (LLMs) require high-quality instruction data for effective alignment. We present Genetic-Instruct, a scalable algorithm for synthesizing large-scale, high-quality coding instructions.
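The sketch below illustrates the general shape of an evolutionary loop over instructions, with LLM-backed mutation, crossover, and fitness checking; the prompts and roles here are hypothetical stand-ins for the paper's generator and judge components, and the seed pool is assumed to have at least two entries.

```python
import random

def mutate(llm, instruction: str) -> str:
    return llm(f"Rewrite this coding task into a new, harder variant:\n{instruction}")

def crossover(llm, a: str, b: str) -> str:
    return llm(f"Combine these two coding tasks into one new task:\n1. {a}\n2. {b}")

def fitness_ok(llm, instruction: str) -> bool:
    # e.g. a judge checks the task is clear and solvable (stand-in prompt)
    answer = llm(f"Is this a clear, solvable coding task? Answer yes/no.\n{instruction}")
    return answer.strip().lower().startswith("yes")

def genetic_instruct(llm, seeds: list[str], generations: int = 3, pop_size: int = 20):
    population = list(seeds)
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            if random.random() < 0.5:
                child = mutate(llm, random.choice(population))
            else:
                child = crossover(llm, *random.sample(population, 2))
            if fitness_ok(llm, child):   # keep only validated instructions
                children.append(child)
        population += children
    return population
```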
arXiv Detail & Related papers (2024-07-29T20:42:59Z)
- On the Design and Analysis of LLM-Based Algorithms [74.7126776018275]
Large language models (LLMs) are increasingly used as sub-routines within larger algorithms and have achieved remarkable empirical success. Our proposed framework for designing and analyzing such LLM-based algorithms holds promise for advancing the field.
arXiv Detail & Related papers (2024-07-20T07:39:07Z)
- LLM-ARC: Enhancing LLMs with an Automated Reasoning Critic [2.1073328551105623]
We introduce LLM-ARC, a neuro-symbolic framework designed to enhance the logical reasoning capabilities of Large Language Models (LLMs). LLM-ARC employs an Actor-Critic method in which the LLM Actor generates declarative logic programs along with tests for semantic correctness, while the Automated Reasoning Critic evaluates the code, runs the tests, and provides feedback on test failures for iterative refinement.
Our experiments demonstrate significant improvements over LLM-only baselines, highlighting the importance of logic test generation and iterative self-refinement.
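A minimal runnable sketch of that Actor-Critic loop follows. The real system targets declarative logic programs with an automated reasoner as critic; to stay self-contained, this sketch substitutes plain Python with `assert`-based tests, which is a simplification, not the paper's setup.

```python
import traceback

def actor(llm, task: str, feedback: str = "") -> str:
    return llm(f"Task: {task}\nWrite a program AND asserts testing it.\n"
               f"Previous test failures, if any:\n{feedback}")

def critic(program_and_tests: str) -> str:
    """Run the code; the asserts act as semantic-correctness tests."""
    try:
        exec(program_and_tests, {})
        return ""                       # empty feedback means all tests pass
    except AssertionError:
        return traceback.format_exc(limit=2)
    except Exception:
        return traceback.format_exc(limit=1)

def llm_arc(llm, task: str, max_rounds: int = 5):
    feedback = ""
    for _ in range(max_rounds):
        code = actor(llm, task, feedback)
        feedback = critic(code)
        if not feedback:                # iterate until the critic is silent
            return code
    return None
```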
arXiv Detail & Related papers (2024-06-25T15:52:15Z)
- Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning [53.6472920229013]
Large Language Models (LLMs) have demonstrated impressive capability in many natural language tasks.
LLMs are prone to producing errors, hallucinations, and inconsistent statements when performing multi-step reasoning. We introduce Q*, a framework for guiding LLMs' decoding process with deliberative planning.
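One way to picture deliberative decoding is best-first search over partial reasoning traces scored by a learned value estimate; the sketch below assumes hypothetical `expand`, `q_value`, and `is_complete` callables, and is an illustration of the idea rather than the paper's exact algorithm.

```python
import heapq

def q_star(question: str, expand, q_value, is_complete,
           beam: int = 4, budget: int = 100):
    # Frontier of partial traces, ordered by estimated value (max-heap via -f).
    frontier = [(-q_value(question, ""), "")]
    for _ in range(budget):
        if not frontier:
            break
        _neg_f, trace = heapq.heappop(frontier)
        if is_complete(trace):
            return trace                        # best-scoring finished trace
        for step in expand(question, trace, k=beam):  # k candidate next steps
            new_trace = trace + step
            f = q_value(question, new_trace)    # learned Q-style estimate
            heapq.heappush(frontier, (-f, new_trace))
    return None
```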
arXiv Detail & Related papers (2024-06-20T13:08:09Z)
- When Large Language Models Meet Optical Networks: Paving the Way for Automation [17.4503217818141]
We propose a framework of LLM-empowered optical networks, facilitating intelligent control of the physical layer and efficient interaction with the application layer.
The proposed framework is verified on two typical tasks: network alarm analysis and network performance optimization.
The high response accuracy and semantic similarity across 2,400 test situations demonstrate the great potential of LLMs in optical networks.
arXiv Detail & Related papers (2024-05-14T10:46:33Z)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvement of Large Language Models. It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop. Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
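A compact MCTS-over-text sketch in the same spirit is given below; `propose` (sampling candidate next steps from an LLM) and `evaluate` (a critic score) are assumed stand-ins, and `propose` is assumed to return at least one step.

```python
import math
import random

class Node:
    def __init__(self, text, parent=None):
        self.text, self.parent = text, parent
        self.children, self.visits, self.value = [], 0, 0.0

def select(node, c: float = 1.4):
    """Descend by UCT until reaching a leaf."""
    while node.children:
        node = max(node.children,
                   key=lambda n: n.value / (n.visits + 1e-9)
                   + c * math.sqrt(math.log(node.visits + 1) / (n.visits + 1e-9)))
    return node

def mcts(root_text, propose, evaluate, iters: int = 50):
    root = Node(root_text)
    for _ in range(iters):
        leaf = select(root)
        for step in propose(leaf.text):          # expansion via the LLM
            leaf.children.append(Node(leaf.text + step, parent=leaf))
        child = random.choice(leaf.children)
        reward = evaluate(child.text)            # critic scores the rollout
        while child:                             # backpropagate the reward
            child.visits += 1
            child.value += reward
            child = child.parent
    return max(root.children, key=lambda n: n.visits).text
```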
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- An Embarrassingly Simple Approach for LLM with Strong ASR Capacity [56.30595787061546]
We focus on automatic speech recognition (ASR), one of the most important tasks in speech processing, using speech foundation encoders and large language models (LLMs). Recent works adopt complex designs, such as temporally compressing the speech encoder's output, tackling modal alignment in the projector, and applying parameter-efficient fine-tuning to the LLM. We find that such delicate designs are unnecessary: an embarrassingly simple composition of an off-the-shelf speech encoder, an LLM, and a single trainable linear projector is competent for the ASR task.
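A minimal sketch of that composition, assuming an encoder that returns frame-level features and a Hugging-Face-style LLM that accepts `inputs_embeds` (both placeholders, not the paper's exact models):

```python
import torch
import torch.nn as nn

class SpeechLLM(nn.Module):
    def __init__(self, speech_encoder, llm, enc_dim: int, llm_dim: int):
        super().__init__()
        self.encoder = speech_encoder.eval().requires_grad_(False)  # frozen
        self.llm = llm.eval().requires_grad_(False)                 # frozen
        self.projector = nn.Linear(enc_dim, llm_dim)  # the only trainable part

    def forward(self, speech, text_embeds):
        feats = self.encoder(speech)            # (batch, frames, enc_dim)
        prefix = self.projector(feats)          # map into the LLM's space
        inputs = torch.cat([prefix, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)   # train with the usual LM loss
```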
arXiv Detail & Related papers (2024-02-13T23:25:04Z)
- ANPL: Towards Natural Programming with Interactive Decomposition [33.58825633046242]
We introduce an interactive ANPL system that ensures users can always refine the generated code.
An ANPL program consists of a set of input-outputs that it must satisfy, a "sketch" specifying the control and data flow in precise code, and "holes": sub-modules described in natural language for the LLM to implement.
The user revises an ANPL program by either modifying the sketch, changing the language used to describe the holes, or providing additional input-outputs to a particular hole.
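An illustrative ANPL-style program is sketched below, assuming a hypothetical `hole` helper rather than the system's real API: the user fixes the control and data flow precisely and leaves natural-language holes for the LLM to fill.

```python
def hole(description: str):
    """Placeholder: the interactive system asks an LLM to implement this."""
    def implement(*args):
        raise NotImplementedError(description)  # replaced by LLM-written code
    return implement

# Precise sketch: the flow between holes is fixed by the user.
denoise = hole("remove isolated single pixels from a binary grid")
largest = hole("return the largest connected component of a binary grid")
recolor = hole("color the component with its most frequent non-zero color")

def solve(grid):
    return recolor(largest(denoise(grid)))

# Input-output examples the program must satisfy; if `solve` fails on one,
# the user refines only the offending hole's description or its examples.
examples = [
    # (input_grid, expected_output_grid) pairs would go here
]
```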
arXiv Detail & Related papers (2023-05-29T14:19:40Z)
- Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z)