A-LAMP: Agentic LLM-Based Framework for Automated MDP Modeling and Policy Generation
- URL: http://arxiv.org/abs/2512.11270v1
- Date: Fri, 12 Dec 2025 04:21:17 GMT
- Title: A-LAMP: Agentic LLM-Based Framework for Automated MDP Modeling and Policy Generation
- Authors: Hong Je-Gal, Chan-Bin Yi, Hyun-Suk Lee
- Abstract summary: We introduce an agentic large language model (LLM)-based framework for automated MDP modeling and policy generation (A-LAMP). A-LAMP translates free-form natural language task descriptions into an MDP formulation and a trained policy. A-LAMP consistently achieves higher policy generation capability than a single state-of-the-art model.
- Score: 2.5705703401045548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applying reinforcement learning (RL) to real-world tasks requires converting informal descriptions into a formal Markov decision process (MDP), implementing an executable environment, and training a policy agent. Automating this process is challenging due to modeling errors, fragile code, and misaligned objectives, which often impede policy training. We introduce an agentic large language model (LLM)-based framework for automated MDP modeling and policy generation (A-LAMP) that automatically translates free-form natural language task descriptions into an MDP formulation and a trained policy. The framework decomposes modeling, coding, and training into verifiable stages, ensuring semantic alignment throughout the pipeline. Across both classic control and custom RL domains, A-LAMP consistently achieves higher policy generation capability than a single state-of-the-art LLM. Notably, even its lightweight variant, which is built on smaller language models, approaches the performance of much larger models. Failure analysis reveals why these improvements occur. In addition, a case study demonstrates that A-LAMP generates environments and policies that preserve the task's optimality, confirming its correctness and reliability.
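The staged pipeline the abstract describes (model, then code, then train, with verification between stages) can be sketched as a minimal MDP specification plus a stage check. All names here (`MDPSpec`, `verify_spec`) and the toy task are illustrative assumptions, not A-LAMP's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MDPSpec:
    """Formal MDP elements, as might be extracted from a task description."""
    states: List[str]
    actions: List[str]
    transition: Callable[[str, str], str]   # deterministic for simplicity
    reward: Callable[[str, str], float]
    gamma: float = 0.99

def verify_spec(spec: MDPSpec) -> bool:
    """Verifiable-stage check: every (state, action) pair maps back into
    the declared state set, so the coding stage receives a closed model."""
    return all(spec.transition(s, a) in spec.states
               for s in spec.states for a in spec.actions)

# Toy two-state task: 'right' moves to s1, anything else returns to s0.
spec = MDPSpec(
    states=["s0", "s1"],
    actions=["left", "right"],
    transition=lambda s, a: "s1" if a == "right" else "s0",
    reward=lambda s, a: 1.0 if (s, a) == ("s0", "right") else 0.0,
)
print(verify_spec(spec))  # True
```

Keeping the formal spec separate from the executable environment is what makes each stage checkable in isolation, which is the property the abstract attributes to the decomposition.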
Related papers
- Reinforcement World Model Learning for LLM-based Agents [60.65003139516272]
Reinforcement World Model Learning (RWML) is a self-conditioned method that learns action-supervised world models for LLM-based agents. Our method aligns simulated next states produced by the model with realized next states observed from the environment. We evaluate our method on ALFWorld and $2$ Bench and observe significant gains over the base model, despite being entirely self-supervised.
arXiv Detail & Related papers (2026-02-05T16:30:08Z) - Policy-Conditioned Policies for Multi-Agent Task Solving [53.67744322553693]
In this work, we propose a paradigm shift that bridges the gap by representing policies as human-interpretable source code. We reformulate the learning problem by utilizing Large Language Models (LLMs) as approximate interpreters. We formalize this process as Programmatic Iterated Best Response (PIBR), an algorithm where the policy code is optimized by textual gradients.
arXiv Detail & Related papers (2025-12-24T07:42:10Z) - Automated Generation of MDPs Using Logic Programming and LLMs for Robotic Applications [12.212215896242911]
We present a novel framework that integrates Large Language Models (LLMs) with automated planning and formal verification. We validate the framework in three human-robot interaction scenarios, demonstrating its ability to produce executable policies with minimal manual effort.
arXiv Detail & Related papers (2025-11-28T12:48:30Z) - A Fuzzy Logic Prompting Framework for Large Language Models in Adaptive and Uncertain Tasks [2.1756081703276]
We introduce a modular prompting framework that supports safer and more adaptive use of large language models (LLMs) across dynamic, user-centered tasks. Our method combines a natural language boundary prompt with a control schema encoded with fuzzy scaffolding logic and adaptation rules. In a simulated intelligent tutoring setting, the framework improves scaffolding quality, adaptivity, and instructional alignment across multiple models, outperforming standard prompting baselines.
arXiv Detail & Related papers (2025-08-08T23:50:48Z) - LLM-Guided Reinforcement Learning: Addressing Training Bottlenecks through Policy Modulation [7.054214377609925]
Reinforcement learning (RL) has achieved notable success in various domains. Training effective policies for complex tasks remains challenging. Existing approaches to mitigate training bottlenecks fall into two categories.
arXiv Detail & Related papers (2025-05-27T03:40:02Z) - MLE-Dojo: Interactive Environments for Empowering LLM Agents in Machine Learning Engineering [57.156093929365255]
MLE-Dojo is a Gym-style framework for systematically training, evaluating, and improving autonomous large language model (LLM) agents. MLE-Dojo covers diverse, open-ended MLE tasks carefully curated to reflect realistic engineering scenarios. Its fully executable environment supports comprehensive agent training via both supervised fine-tuning and reinforcement learning.
arXiv Detail & Related papers (2025-05-12T17:35:43Z) - Improving Controller Generalization with Dimensionless Markov Decision Processes [6.047438841182958]
We propose a Model-Based approach to increase generalization where both world model and policy are trained in a dimensionless state-action space. We demonstrate the applicability of our method on simulated actuated pendulum and cartpole systems, where policies trained on a single environment are robust to shifts in the distribution of the context.
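The dimensionless state-action idea can be illustrated with a pendulum: rescaling the state by characteristic quantities makes dynamically similar systems coincide in the transformed space. The scale choices below (time by $\sqrt{l/g}$, torque by $mgl$) are one standard nondimensionalization chosen for illustration, not necessarily the paper's.

```python
import math

def to_dimensionless(theta, theta_dot, torque, m, l, g=9.81):
    """Map a pendulum state/action to dimensionless form.

    Hypothetical characteristic scales: the angle is already dimensionless,
    angular velocity is scaled by the natural frequency sqrt(g/l), and
    torque by the gravity torque scale m*g*l.
    """
    omega0 = math.sqrt(g / l)  # natural frequency, i.e. 1 / time scale
    return (theta, theta_dot / omega0, torque / (m * g * l))

# Two pendulums with different mass and length map to the same
# dimensionless point when their states are dynamically similar.
s1 = to_dimensionless(0.1, 2.0, 4.905, m=1.0, l=0.5)
s2 = to_dimensionless(0.1, 2.0 / math.sqrt(2), 4.905 * 4, m=2.0, l=1.0)
```

A policy trained on the dimensionless representation then transfers across context shifts in `m` and `l` without retraining, which is the generalization mechanism the abstract claims.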
arXiv Detail & Related papers (2025-04-14T09:08:53Z) - Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks. However, they still struggle with problems requiring multi-step decision-making and environmental feedback. We propose a framework that can automatically learn a reward model from the environment without human annotations.
arXiv Detail & Related papers (2025-02-17T18:49:25Z) - Robust Model-Based Reinforcement Learning with an Adversarial Auxiliary Model [2.9109581496560044]
An RL agent trained in a given Markov decision process (MDP) often struggles to perform well in nearly identical MDPs.
We employ the framework of Robust MDPs in a model-based setting and introduce a novel learned transition model.
Our experimental results indicate a notable improvement in policy robustness on high-dimensional MuJoCo control tasks.
arXiv Detail & Related papers (2024-06-14T12:37:08Z) - Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
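An entropy-augmented token-level objective of the kind ETPO describes can be sketched as a per-token policy-gradient loss plus an entropy bonus. The function below and its `beta` weight are illustrative assumptions, not the paper's exact formulation.

```python
import math

def token_policy_loss(token_logps, probs_per_step, advantages, beta=0.01):
    """Entropy-regularized token-level policy-gradient loss (illustrative).

    token_logps:    log-prob of each generated token under the policy.
    probs_per_step: full next-token distribution at each step (for entropy).
    advantages:     per-token advantage estimates.
    beta:           entropy-bonus weight (hypothetical hyperparameter).
    """
    # Standard policy-gradient term, credited at token granularity.
    pg = -sum(lp * a for lp, a in zip(token_logps, advantages))
    # Entropy of the policy's per-step distributions; subtracting it
    # rewards keeping the token distribution spread out.
    entropy = -sum(p * math.log(p)
                   for dist in probs_per_step for p in dist if p > 0)
    return pg - beta * entropy
```

Operating at the token level rather than on whole responses is what lets the regularizer shape each generation step of the LLM policy individually.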
arXiv Detail & Related papers (2024-02-09T07:45:26Z) - Modular Deep Reinforcement Learning for Continuous Motion Planning with Temporal Logic [59.94347858883343]
This paper investigates the motion planning of autonomous dynamical systems modeled by Markov decision processes (MDPs). The novelty is to design an embedded product MDP (EP-MDP) between the limit-deterministic generalized Büchi automaton (LDGBA) and the MDP.
The proposed LDGBA-based reward shaping and discounting schemes for the model-free reinforcement learning (RL) only depend on the EP-MDP states.
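The product construction can be illustrated in a few lines: the environment state and the automaton state advance jointly, so a shaping reward can be defined on the pair alone. The function names and the toy one-dimensional task below are hypothetical, not the paper's EP-MDP definition.

```python
def product_step(env_step, automaton_delta, state, action):
    """One transition of an illustrative product MDP: the environment
    state and the automaton state advance together, so shaped rewards
    can depend only on the product state (s, q)."""
    s, q = state
    s_next = env_step(s, action)
    q_next = automaton_delta(q, s_next)
    return (s_next, q_next)

# Toy example: a 1-D chain, with an automaton that accepts (and stays
# accepting) once position 2 is reached.
env_step = lambda s, a: s + (1 if a == "right" else -1)
delta = lambda q, s: "accept" if (q == "accept" or s == 2) else "q0"

state = (0, "q0")
for _ in range(2):
    state = product_step(env_step, delta, state, "right")
# state == (2, "accept")
```

Because the automaton state records progress toward the temporal-logic goal, a model-free RL agent can be rewarded from the product state alone, which is the dependence the sentence above describes.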
arXiv Detail & Related papers (2021-02-24T01:11:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.