Learning Symbolic Persistent Macro-Actions for POMDP Solving Over Time
- URL: http://arxiv.org/abs/2505.03668v1
- Date: Tue, 06 May 2025 16:08:55 GMT
- Title: Learning Symbolic Persistent Macro-Actions for POMDP Solving Over Time
- Authors: Celeste Veronese, Daniele Meli, Alessandro Farinelli
- Abstract summary: This paper proposes an integration of temporal logical reasoning and Partially Observable Markov Decision Processes (POMDPs). Our method leverages a fragment of Linear Temporal Logic (LTL) based on Event Calculus (EC) to generate \emph{persistent} (i.e., constant) macro-actions. These macro-actions guide Monte Carlo Tree Search (MCTS)-based POMDP solvers over a time horizon.
- Score: 52.03682298194168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes an integration of temporal logical reasoning and Partially Observable Markov Decision Processes (POMDPs) to achieve interpretable decision-making under uncertainty with macro-actions. Our method leverages a fragment of Linear Temporal Logic (LTL) based on Event Calculus (EC) to generate \emph{persistent} (i.e., constant) macro-actions, which guide Monte Carlo Tree Search (MCTS)-based POMDP solvers over a time horizon, significantly reducing inference time while ensuring robust performance. Such macro-actions are learnt via Inductive Logic Programming (ILP) from a few traces of execution (belief-action pairs), thus eliminating the need for manually designed heuristics and requiring only the specification of the POMDP transition model. In the Pocman and Rocksample benchmark scenarios, our learned macro-actions demonstrate increased expressiveness and generality when compared to time-independent heuristics, indeed offering substantial computational efficiency improvements.
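To make the mechanism concrete, here is a minimal sketch (not the authors' code) of how a learned persistent macro-action could constrain an MCTS-style rollout in a POMDP solver. The belief predicate, the threshold, the `sample_rock` action, and the `simulator` interface are all illustrative assumptions standing in for the specifications the paper learns via ILP.

```python
# Minimal sketch: a learned, persistent macro-action constraining a POMDP rollout.
# All names (belief keys, "sample_rock", the simulator interface) are assumptions
# used for illustration; they stand in for the ILP-learned specifications.
import random

def macro_action(belief, horizon=5):
    """Return (action, duration) if a learned belief-based rule fires, else None."""
    # Example rule in the spirit of Event Calculus: once initiated, the action
    # persists (holds) for `horizon` steps unless the belief terminates it.
    if belief.get("p_rock_valuable", 0.0) > 0.8:
        return "sample_rock", horizon
    return None

def constrained_rollout(belief, simulator, actions, depth=20, gamma=0.95):
    """Random rollout that commits to a persistent macro-action when one fires."""
    total, discount, persist = 0.0, 1.0, None   # persist = (action, steps left)
    state = simulator.sample_state(belief)
    for _ in range(depth):
        if persist is None:
            persist = macro_action(belief)
        if persist is not None:
            action, steps = persist
            persist = (action, steps - 1) if steps > 1 else None
        else:
            action = random.choice(actions)     # fall back to a uniform rollout policy
        state, obs, reward = simulator.step(state, action)
        belief = simulator.update_belief(belief, action, obs)
        total += discount * reward
        discount *= gamma
    return total
```

Committing to a macro-action over several steps shrinks the set of actions the planner must expand at each node, which is the intuition behind the reported reduction in inference time.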
Related papers
- DASH: Input-Aware Dynamic Layer Skipping for Efficient LLM Inference with Markov Decision Policies [22.562212737269924]
DASH dynamically selects paths conditioned on input characteristics. A compensation mechanism injects differential rewards into the decision process. An asynchronous execution strategy overlaps layer computation with policy evaluation to minimize runtime overhead.
arXiv Detail & Related papers (2025-05-23T03:10:11Z) - DEL-ToM: Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic [28.54147281933252]
Theory-of-Mind (ToM) tasks pose a unique challenge for small language models (SLMs) with limited scale. We propose DEL-ToM, a framework that improves ToM reasoning through inference-time scaling.
arXiv Detail & Related papers (2025-05-22T23:52:56Z) - Computational Reasoning of Large Language Models [51.629694188014064]
We introduce Turing Machine Bench (TMBench), a benchmark to assess the ability of Large Language Models (LLMs) to execute reasoning processes. TMBench incorporates four key features: self-contained and knowledge-agnostic reasoning, a minimalistic multi-step structure, controllable difficulty, and a theoretical foundation based on Turing machines.
arXiv Detail & Related papers (2025-04-29T13:52:47Z) - Scalable Decision-Making in Stochastic Environments through Learned Temporal Abstraction [7.918703013303246]
We present Latent Macro Action Planner (L-MAP), which addresses the challenge of learning to make decisions in high-dimensional continuous action spaces. L-MAP learns a set of temporally extended macro-actions through a state-conditional Vector Quantized Variational Autoencoder (VQ-VAE). In offline RL settings, including continuous control tasks, L-MAP efficiently searches over discrete latent actions to yield high expected returns.
arXiv Detail & Related papers (2025-02-28T16:02:23Z) - Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [49.362750475706235]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks. We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model. Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
arXiv Detail & Related papers (2024-10-11T23:29:20Z) - Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification [76.14641982122696]
We propose a constraint learning schema for fine-tuning Large Language Models (LLMs) with attribute control.
We show that our approach leads to an LLM that produces fewer inappropriate responses while achieving competitive performance on benchmarks and a toxicity detection task.
arXiv Detail & Related papers (2024-10-07T23:38:58Z) - Near-Optimal Learning and Planning in Separated Latent MDPs [70.88315649628251]
We study computational and statistical aspects of learning Latent Markov Decision Processes (LMDPs).
In this model, the learner interacts with an MDP drawn at the beginning of each epoch from an unknown mixture of MDPs.
arXiv Detail & Related papers (2024-06-12T06:41:47Z) - Learning Logic Specifications for Policy Guidance in POMDPs: an
Inductive Logic Programming Approach [57.788675205519986]
We collect high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that learned specifications expressed in Answer Set Programming (ASP) yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics, within a lower computational time.
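For a concrete picture of the data such ILP-based approaches consume, the snippet below shows an illustrative belief-action trace; the predicate and action names are assumptions loosely modeled on Rocksample, not the papers' actual encoding.

```python
# Illustrative only: the kind of belief-action trace that ILP-based specification
# learning consumes. Predicate and action names are assumptions, loosely modeled
# on Rocksample; the actual traces and the ASP encoding are defined in the papers.
trace = [
    # (belief summary as ground facts, action chosen by the POMDP solver)
    ({"guess(rock1)": 0.9, "dist(rock1)": 1}, "sample(rock1)"),
    ({"guess(rock2)": 0.4, "dist(rock2)": 3}, "check(rock2)"),
    ({"guess(rock2)": 0.2, "dist(rock2)": 3}, "east"),
]

# A rule induced from such traces might read, in ASP-like pseudocode:
#   sample(R) :- guess(R, P), P > 80, dist(R, 0).
```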
arXiv Detail & Related papers (2024-02-29T15:36:01Z) - Revisiting State Augmentation methods for Reinforcement Learning with
Stochastic Delays [10.484851004093919]
This paper formally describes the notion of Markov Decision Processes (MDPs) with delays.
We show that delayed MDPs can be transformed into equivalent standard MDPs (without delays) with a significantly simplified cost structure.
We employ this equivalence to derive a model-free Delay-Resolved RL framework and show that even a simple RL algorithm built upon this framework achieves near-optimal rewards in environments with delays in actions and observations.
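As a rough illustration of the state-augmentation idea (an assumption about the standard construction, not the paper's implementation), the wrapper below appends the queue of actions still "in flight" to the observation, which restores the Markov property under a fixed action delay.

```python
# Sketch of the classic state-augmentation construction for action delays:
# with a fixed delay of d steps, the augmented state (obs, a_{t-d}, ..., a_{t-1})
# is Markov, so a standard RL agent can be trained on it. The `env` interface
# (reset/step returning obs, reward, done) is assumed for illustration.
from collections import deque

class ActionDelayWrapper:
    def __init__(self, env, delay, noop_action):
        self.env = env
        self.delay = delay
        self.noop = noop_action
        self.pending = deque()

    def reset(self):
        self.pending = deque([self.noop] * self.delay)  # actions still in flight
        obs = self.env.reset()
        return (obs, tuple(self.pending))               # augmented state

    def step(self, action):
        # The agent's choice only reaches the environment `delay` steps later.
        self.pending.append(action)
        effective = self.pending.popleft()
        obs, reward, done = self.env.step(effective)
        return (obs, tuple(self.pending)), reward, done
```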
arXiv Detail & Related papers (2021-08-17T10:45:55Z) - Modular Deep Reinforcement Learning for Continuous Motion Planning with
Temporal Logic [59.94347858883343]
This paper investigates the motion planning of autonomous dynamical systems modeled by Markov decision processes (MDPs).
The novelty is to design an embedded product MDP (EP-MDP) between the limit-deterministic generalized Büchi automaton (LDGBA) and the MDP.
The proposed LDGBA-based reward shaping and discounting schemes for the model-free reinforcement learning (RL) only depend on the EP-MDP states.
arXiv Detail & Related papers (2021-02-24T01:11:25Z) - MAGIC: Learning Macro-Actions for Online POMDP Planning [14.156697390568617]
MAGIC learns a macro-action generator end-to-end, using an online planner's performance as the feedback.
We evaluate MAGIC on several long-horizon planning tasks both in simulation and on a real robot.
arXiv Detail & Related papers (2020-11-07T17:18:45Z) - Meta Learning in the Continuous Time Limit [36.23467808322093]
We establish the ordinary differential equation (ODE) that underlies the training dynamics of Model-Agnostic Meta-Learning (MAML).
We propose a new BI-MAML training algorithm that significantly reduces the computational burden associated with existing MAML training methods.
arXiv Detail & Related papers (2020-06-19T01:47:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.