From Tool Calling to Symbolic Thinking: LLMs in a Persistent Lisp Metaprogramming Loop
- URL: http://arxiv.org/abs/2506.10021v1
- Date: Sun, 08 Jun 2025 20:12:06 GMT
- Title: From Tool Calling to Symbolic Thinking: LLMs in a Persistent Lisp Metaprogramming Loop
- Authors: Jordi de la Torre
- Abstract summary: We propose a novel architecture for integrating large language models (LLMs) with a persistent, interactive Lisp environment. We present a design framework and architectural principles to guide future implementations of interactive AI systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel architecture for integrating large language models (LLMs) with a persistent, interactive Lisp environment. This setup enables LLMs to define, invoke, and evolve their own tools through programmatic interaction with a live REPL. By embedding Lisp expressions within generation and intercepting them via a middleware layer, the system allows for stateful external memory, reflective programming, and dynamic tool creation. We present a design framework and architectural principles to guide future implementations of interactive AI systems that integrate symbolic programming with neural language generation.
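The core mechanism is the interception loop: the model emits Lisp inside its generation, the middleware extracts and evaluates it in a long-lived REPL, and the results flow back into context. Below is a minimal Python sketch of that loop, under assumptions not in the abstract: the model marks code with `<lisp>...</lisp>` tags, SBCL is on the PATH, and `llm_generate` is a placeholder for any chat-completion call.

```python
import re
import subprocess

def llm_generate(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

# Assumption: the model marks code for evaluation with <lisp>...</lisp> tags,
# one Lisp expression per tag.
LISP_BLOCK = re.compile(r"<lisp>(.*?)</lisp>", re.DOTALL)

class PersistentLispRepl:
    """One long-lived SBCL process, so definitions persist across turns."""

    def __init__(self) -> None:
        self.proc = subprocess.Popen(
            ["sbcl", "--noinform", "--disable-debugger"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
        )

    def eval(self, expr: str) -> str:
        # Print the value on a marked line so exactly one result can be
        # read back per request, regardless of REPL prompt noise.
        self.proc.stdin.write(f'(format t "~&<RESULT> ~S~%" {expr})\n')
        self.proc.stdin.flush()
        for line in self.proc.stdout:
            if "<RESULT>" in line:
                return line.split("<RESULT>", 1)[1].strip()
        return "<repl exited>"

def agent_turn(repl: PersistentLispRepl, prompt: str) -> str:
    """Generate, intercept embedded Lisp, and feed results back."""
    reply = llm_generate(prompt)
    for code in LISP_BLOCK.findall(reply):
        result = repl.eval(code.strip())
        # Evaluation results become context for the next turn, which is
        # what lets the model define a tool now and invoke it later.
        prompt += f"\n;; {code.strip()} => {result}"
    return reply
```

Because the SBCL process never restarts, a `defun` evaluated in one turn remains callable in every later turn; that persistence is what the abstract means by stateful external memory and dynamic tool creation.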
Related papers
- Pel, A Programming Language for Orchestrating AI Agents [1.223779595809275]
Pel is a novel programming language designed to bridge the gap between function/tool calling and direct code generation. Inspired by the strengths of Lisp, Elixir, Gleam, and Haskell, Pel provides a syntactically simple, homoiconic, and semantically rich platform. Key features include a powerful piping mechanism for linear composition, first-class closures enabling easy partial application and functional patterns, built-in support for natural language conditions evaluated by LLMs, and an advanced Read-Eval-Print-Loop (REPeL) with Common Lisp-style restarts and LLM-powered helper agents for automated error correction.
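Of the features listed above, natural-language conditions are the easiest to illustrate. A hedged Python analogue (illustrative only, not actual Pel syntax; `llm_generate` is a placeholder for any model call):

```python
def llm_generate(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

def nl_if(condition: str, subject: str, then, otherwise):
    """Branch on a natural-language condition judged by an LLM."""
    verdict = llm_generate(
        f"Subject: {subject}\nCondition: {condition}\nAnswer yes or no."
    )
    return (then if verdict.strip().lower().startswith("yes") else otherwise)()

# Hypothetical usage:
# nl_if("Is this message a bug report?", user_message,
#       then=file_ticket, otherwise=route_to_support)
```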
arXiv Detail & Related papers (2025-04-03T18:46:53Z)
- Statically Contextualizing Large Language Models with Typed Holes [4.180458188910334]
Large language models (LLMs) have reshaped the landscape of program synthesis.
LLMs often hallucinate broken code because they lack appropriate context.
This paper demonstrates that tight integration with the type and binding structure of a language can address this contextualization problem.
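A rough sketch of that idea in Python (the paper's actual implementation targets the Hazel typed-hole calculus; the data model below is invented for illustration): before asking the model to fill a hole, harvest the expected type and the in-scope bindings at the cursor and place them in the prompt.

```python
from dataclasses import dataclass

@dataclass
class Hole:
    expected_type: str  # the type the completion must inhabit, e.g. "[Int] -> Int"
    bindings: dict      # in-scope names mapped to their types

def contextualized_prompt(hole: Hole, program: str) -> str:
    """Put the type and binding structure at the hole into the prompt."""
    scope = "\n".join(f"  {name} : {ty}" for name, ty in hole.bindings.items())
    return (
        f"Complete the hole with an expression of type {hole.expected_type}.\n"
        f"In scope:\n{scope}\nProgram so far:\n{program}"
    )
```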
arXiv Detail & Related papers (2024-09-02T03:29:00Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by these LLM-based symbolic programs (LSPs) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
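One hedged reading of that recipe in Python (the concept modules and rule below are invented for illustration; `llm_generate` is a placeholder): the LLM supplies natural-language predicates, while the program that combines them stays symbolic and auditable.

```python
def llm_generate(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

def concept(description: str):
    """Wrap a natural-language concept as a boolean predicate module."""
    def predicate(x: str) -> bool:
        reply = llm_generate(f"{description}\nInput: {x}\nAnswer yes or no.")
        return reply.strip().lower().startswith("yes")
    return predicate

# LLM modules: interpretable because each is a single sentence.
mentions_deadline = concept("Does the text mention a deadline?")
sounds_urgent = concept("Does the text sound urgent?")

def triage(message: str) -> str:
    # The symbolic rule: readable by humans, transferable to other LLMs.
    if mentions_deadline(message) or sounds_urgent(message):
        return "priority"
    return "normal"
```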
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- LangSuitE: Planning, Controlling and Interacting with Large Language Models in Embodied Text Environments [70.91258869156353]
We introduce LangSuitE, a versatile and simulation-free testbed featuring 6 representative embodied tasks in textual embodied worlds.
Compared with previous LLM-based testbeds, LangSuitE offers adaptability to diverse environments without multiple simulation engines.
We devise a novel chain-of-thought (CoT) schema, EmMem, which summarizes embodied states with respect to interaction history.
arXiv Detail & Related papers (2024-06-24T03:36:29Z)
- Meaning-Typed Programming: Language Abstraction and Runtime for Model-Integrated Applications [8.007302441327214]
This paper presents the Meaning-Typed Programming (MTP) model, a novel paradigm that abstracts large language model (LLM) integration through intuitive language-level constructs. We implement MTP in Jac, a Python superset language, and find that MTP significantly reduces coding complexity while maintaining accuracy and efficiency. For math problems from the GSM8k dataset, MTP achieves accuracy rates approaching 90%, while reducing token usage in 10 out of 13 benchmarks.
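A hedged Python analogue of a meaning-typed construct (MTP itself is implemented in Jac; the decorator below is invented for illustration, with `llm_generate` as a placeholder): the function body is delegated to an LLM, guided entirely by the signature and docstring.

```python
import inspect

def llm_generate(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

def by_llm(fn):
    """Delegate a function's body to an LLM, guided by its signature."""
    sig = inspect.signature(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        return llm_generate(
            f"Act as the function {fn.__name__}{sig}.\n"
            f"Purpose: {fn.__doc__}\nArguments: {dict(bound.arguments)}\n"
            f"Return only the return value."
        )
    return wrapper

@by_llm
def sentiment(review: str) -> str:
    """Classify the review as positive, negative, or neutral."""
```

The abstraction is the point: the call site reads like ordinary typed code, and all prompt construction is hidden in the runtime.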
arXiv Detail & Related papers (2024-05-14T21:12:01Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- LILO: Learning Interpretable Libraries by Compressing and Documenting Code [71.55208585024198]
We introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code.
LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch.
We find that AutoDoc, LILO's auto-documentation procedure, boosts performance by helping the synthesizer interpret and deploy learned abstractions.
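The outer loop, as the summary describes it, fits in a few lines; in this sketch all three stages are stand-ins (real LILO uses LLM-guided search, the Stitch compressor, and an LLM-backed AutoDoc).

```python
def synthesize(task: str, library: dict) -> str:
    """Stand-in for LLM-guided program synthesis against the library."""
    return f"(solve {task!r})"

def compress(programs: list, library: dict) -> dict:
    """Stand-in for Stitch: factor recurring fragments into abstractions."""
    for i, fragment in enumerate(sorted(set(programs))):
        library.setdefault(f"fn_{i}", fragment)
    return library

def autodoc(library: dict) -> dict:
    """Stand-in for AutoDoc: attach readable documentation to each entry."""
    return {name: (body, f"Abstraction over: {body}") for name, body in library.items()}

def lilo_iteration(tasks: list, library: dict) -> dict:
    programs = [synthesize(t, library) for t in tasks]  # synthesize
    return autodoc(compress(programs, library))         # compress, then document
```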
arXiv Detail & Related papers (2023-10-30T17:55:02Z)
- Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z)
- Neuro-Symbolic Causal Language Planning with Commonsense Prompting [67.06667162430118]
Language planning aims to implement complex high-level goals by decomposition into simpler low-level steps.
Previous methods require either manual exemplars or annotated programs to acquire such ability from large language models.
This paper proposes the Neuro-Symbolic Causal Language Planner (CLAP), which elicits procedural knowledge from LLMs with commonsense-infused prompting.
arXiv Detail & Related papers (2022-06-06T22:09:52Z)