Toward Learning POMDPs Beyond Full-Rank Actions and State Observability
- URL: http://arxiv.org/abs/2601.18930v3
- Date: Tue, 03 Feb 2026 17:03:44 GMT
- Title: Toward Learning POMDPs Beyond Full-Rank Actions and State Observability
- Authors: Seiji Shaw, Travis Manderson, Chad Kessens, Nicholas Roy
- Abstract summary: We show how to learn the parameters of a discrete Partially Observable Markov Decision Process (POMDP). The agent begins with knowledge of the POMDP's action and observation spaces, but not its state space, transitions, or observation models. Our experiments suggest that explicit observation and transition likelihoods can be leveraged to generate new plans for different goals.
- Score: 19.109390901181488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We are interested in enabling autonomous agents to learn and reason about systems with hidden states, such as locking mechanisms. We cast this problem as learning the parameters of a discrete Partially Observable Markov Decision Process (POMDP). The agent begins with knowledge of the POMDP's action and observation spaces, but not its state space, transitions, or observation models. These properties must be constructed from a sequence of actions and observations. Spectral approaches to learning models of partially observable domains, such as Predictive State Representations (PSRs), learn representations of state that are sufficient to predict future outcomes. PSR models, however, do not have explicit transition and observation system models that can be used with different reward functions to solve different planning problems. Under a mild set of rank assumptions on the products of transition and observation matrices, we show how PSRs learn POMDP matrices up to a similarity transform, and this transform may be estimated via tensor decomposition methods. Our method learns observation matrices and transition matrices up to a partition of states, where the states in a single partition have the same observation distributions corresponding to actions whose transition matrices are full-rank. Our experiments suggest that explicit observation and transition likelihoods can be leveraged to generate new plans for different goals and reward functions after the model has been learned. We also show that learning a POMDP beyond a partition of states is impossible from sequential data by constructing two POMDPs that agree on all observation distributions but differ in their transition dynamics.
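To make the identifiability claim concrete, here is a toy numerical sketch (our own NumPy illustration, not the authors' code; the POMDP instance and the unknown basis are invented) of the spectral identity such methods build on: when an action's transition matrix is full-rank, PSR-style observable operators pin down the observation distributions exactly and the transition matrix up to a relabeling of hidden states, i.e. up to a similarity transform.

```python
# Sketch: observable operators B_o = S T diag(O[o]) S^{-1} are what a
# PSR-style learner can estimate; an eigendecomposition undoes the unknown
# basis S when T is full-rank. All names here are ours, for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_obs = 3, 4

# Ground truth for one action: column-stochastic transition T and
# observation model O with O[o, s] = P(o | s).
T = rng.random((n_states, n_states))
T /= T.sum(axis=0)
O = rng.random((n_obs, n_states))
O /= O.sum(axis=0)

# Simulate the operators in an unknown basis S.
S = rng.random((n_states, n_states)) + np.eye(n_states)
B = [S @ T @ np.diag(O[o]) @ np.linalg.inv(S) for o in range(n_obs)]

# Summing over observations marginalizes them out: B_sum = S T S^{-1}.
B_sum = sum(B)

# Full-rank T cancels: C_o = B_sum^{-1} B_o = S diag(O[o]) S^{-1}.
C = [np.linalg.solve(B_sum, B_o) for B_o in B]

# All C_o share eigenvectors, so diagonalizing one recovers the
# similarity transform up to a relabeling (and rescaling) of states.
_, S_rec = np.linalg.eig(C[0])
S_rec_inv = np.linalg.inv(S_rec)

# Observation distributions are recovered exactly, up to that relabeling.
for o in range(n_obs):
    O_rec = np.real(np.diag(S_rec_inv @ C[o] @ S_rec))
    assert np.allclose(np.sort(O_rec), np.sort(O[o]), atol=1e-6)

# The transition matrix comes back in the same basis: T_rec is exactly
# similar to T (identical spectrum), i.e. identified up to relabeling.
T_rec = np.real(S_rec_inv @ B_sum @ S_rec)
assert np.allclose(np.sort(np.linalg.eigvals(T_rec)),
                   np.sort(np.linalg.eigvals(T)), atol=1e-6)
```

The residual ambiguity here (permutation and scaling of the recovered eigenvectors) is the same kind of non-identifiability the abstract characterizes with its partition-of-states result.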
Related papers
- Reinforcement learning based data assimilation for unknown state model [3.032674692886751]
We propose a novel method that integrates reinforcement learning with ensemble-based Bayesian filtering methods. The proposed framework accommodates a wide range of observation scenarios, including nonlinear and partially observed measurement models. A few numerical examples demonstrate that the proposed method achieves superior accuracy and robustness in high-dimensional settings.
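A minimal sketch of the ensemble-based Bayesian filtering step this line of work builds on (our illustration, not the paper's code; the linear observation operator and noise levels are assumptions for the demo):

```python
# Illustrative ensemble Kalman filter (EnKF) analysis step with a
# perturbed-observation update. H and R are assumed, toy quantities.
import numpy as np

rng = np.random.default_rng(1)
n_ens, dim_x, dim_y = 50, 8, 3

H = rng.random((dim_y, dim_x))                 # assumed observation operator
R = 0.1 * np.eye(dim_y)                        # observation-noise covariance
ensemble = rng.normal(size=(n_ens, dim_x))     # forecast ensemble
y = H @ rng.normal(size=dim_x) + rng.multivariate_normal(np.zeros(dim_y), R)

# Sample covariance from the ensemble.
X = ensemble - ensemble.mean(axis=0)
P = X.T @ X / (n_ens - 1)

# Kalman gain and stochastic ("perturbed observation") update.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
perturbed = y + rng.multivariate_normal(np.zeros(dim_y), R, size=n_ens)
analysis = ensemble + (perturbed - ensemble @ H.T) @ K.T
print(analysis.mean(axis=0))                   # posterior mean estimate
```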
arXiv Detail & Related papers (2025-11-04T05:58:37Z)
- Reinforcement Learning with Action-Triggered Observations [46.88582659499577]
This framework is formulated as Action-Triggered Sporadically Traceable Markov Decision Processes (ATST-MDPs). We introduce the action-sequence learning paradigm, in which agents commit to executing a sequence of actions until the next observation arrives.
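To make the commitment mechanism concrete, here is a toy sketch of action-sequence execution under action-triggered observations (our illustration; the environment interface is hypothetical, not the paper's API):

```python
# Execute a committed action sequence until an observation is triggered,
# matching the ATST-MDP premise that observations arrive sporadically.
from typing import Callable, Optional, Sequence, Tuple

def run_action_sequence(
    step: Callable[[int], Tuple[Optional[str], float, bool]],
    actions: Sequence[int],
) -> Tuple[Optional[str], float]:
    """step(a) returns (observation-or-None, reward, done)."""
    total = 0.0
    for a in actions:
        obs, reward, done = step(a)
        total += reward
        if obs is not None or done:  # observation arrived: stop and replan
            return obs, total
    return None, total

# Toy environment: only action 2 triggers an observation.
def toy_step(a: int) -> Tuple[Optional[str], float, bool]:
    return ("ping" if a == 2 else None), -1.0, False

print(run_action_sequence(toy_step, [0, 1, 2, 0]))  # -> ('ping', -3.0)
```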
arXiv Detail & Related papers (2025-10-02T16:00:50Z)
- Decomposing Behavioral Phase Transitions in LLMs: Order Parameters for Emergent Misalignment [0.0]
Fine-tuning LLMs on narrowly harmful datasets can lead to behavior that is broadly misaligned with respect to human values. We develop a comprehensive framework for detecting and characterizing rapid transitions during fine-tuning. Our framework enables the automated discovery and quantification of language-based order parameters.
arXiv Detail & Related papers (2025-08-27T16:19:49Z)
- View From Above: A Framework for Evaluating Distribution Shifts in Model Behavior [0.9043709769827437]
Large language models (LLMs) are asked to perform certain tasks.
How can we be sure that their learned representations align with reality?
We propose a domain-agnostic framework for systematically evaluating distribution shifts.
arXiv Detail & Related papers (2024-07-01T04:07:49Z)
- Foundation Inference Models for Markov Jump Processes [3.005912045854039]
We introduce a methodology for zero-shot inference of Markov jump processes (MJPs) on bounded state spaces. One and the same (pretrained) model can infer, in a zero-shot fashion, hidden MJPs evolving in state spaces of different dimensionalities.
arXiv Detail & Related papers (2024-06-10T16:12:00Z)
- A Poisson-Gamma Dynamic Factor Model with Time-Varying Transition Dynamics [51.147876395589925]
A non-stationary PGDS is proposed to allow the underlying transition matrices to evolve over time.
A fully-conjugate and efficient Gibbs sampler is developed to perform posterior simulation.
Experiments show that, in comparison with related models, the proposed non-stationary PGDS achieves improved predictive performance.
arXiv Detail & Related papers (2024-02-26T04:39:01Z)
- Learning Interpretable Policies in Hindsight-Observable POMDPs through Partially Supervised Reinforcement Learning [57.67629402360924]
We introduce the Partially Supervised Reinforcement Learning (PSRL) framework.
At the heart of PSRL is the fusion of both supervised and unsupervised learning.
We show that PSRL offers a potent balance, enhancing model interpretability while preserving, and often significantly outperforming, the performance benchmarks set by traditional methods.
arXiv Detail & Related papers (2024-02-14T16:23:23Z)
- Sample-Efficient Learning of POMDPs with Multiple Observations in Hindsight [105.6882315781987]
This paper studies the sample-efficiency of learning in Partially Observable Markov Decision Processes (POMDPs).
Motivated by real-world settings such as loading in game playing, we propose an enhanced feedback model called "multiple observations in hindsight".
We show that sample-efficient learning is possible for two new subclasses of POMDPs: multi-observation revealing POMDPs and distinguishable POMDPs.
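As a toy illustration of this hindsight feedback model (our own construction, not the paper's code): after an episode ends, the learner receives several extra observations emitted from each latent state it visited.

```python
# Standard feedback gives one observation per visited latent state;
# hindsight feedback reveals k extra draws per state after the episode
# (e.g., by reloading a saved game). The POMDP instance is a toy of ours.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_obs, k = 4, 5, 3

O = rng.random((n_states, n_obs))
O /= O.sum(axis=1, keepdims=True)          # O[s, o] = P(o | s)
latent_trajectory = [0, 2, 1, 3]           # hidden states visited

online = [rng.choice(n_obs, p=O[s]) for s in latent_trajectory]
hindsight = [rng.choice(n_obs, size=k, p=O[s]) for s in latent_trajectory]
print(online, hindsight)
```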
arXiv Detail & Related papers (2023-07-06T09:39:01Z)
- Particle-Based Score Estimation for State Space Model Learning in Autonomous Driving [62.053071723903834]
Multi-object state estimation is a fundamental problem for robotic applications.
We consider learning maximum-likelihood parameters using particle methods.
We apply our method to real data collected from autonomous vehicles.
arXiv Detail & Related papers (2022-12-14T01:21:05Z)
- Unsupervised representation learning with recognition-parametrised probabilistic models [12.865596223775649]
We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM).
Under the key assumption that observations are conditionally independent given latents, the RPM combines parametric prior and observation-conditioned latent distributions with non-parametric observation factors.
The RPM provides a powerful framework to discover meaningful latent structure underlying observational data, a function critical to both animal and artificial intelligence.
arXiv Detail & Related papers (2022-09-13T00:33:21Z)
- PAC Reinforcement Learning for Predictive State Representations [60.00237613646686]
We study online Reinforcement Learning (RL) in partially observable dynamical systems.
We focus on the Predictive State Representations (PSRs) model, which is an expressive model that captures other well-known models.
We develop a novel model-based algorithm for PSRs that can learn a near-optimal policy with sample complexity scaling polynomially in the relevant system parameters.
arXiv Detail & Related papers (2022-07-12T17:57:17Z)
- Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems [97.12538243736705]
We study Reinforcement Learning for partially observable dynamical systems using function approximation.
We propose a new Partially Observable Bilinear Actor-Critic framework that is general enough to include models such as POMDPs, observable Linear-Quadratic-Gaussian (LQG) systems, Predictive State Representations (PSRs), as well as a newly introduced model, Hilbert Space Embeddings of POMDPs, and observable POMDPs with latent low-rank transition.
arXiv Detail & Related papers (2022-06-24T00:27:42Z)
- Expert-Guided Symmetry Detection in Markov Decision Processes [0.0]
We propose a paradigm that aims to detect the presence of some transformations of the state-action space for which the MDP dynamics is invariant.
The results show that the model distributional shift is reduced when the dataset is augmented with the data obtained by using the detected symmetries.
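A minimal sketch of how a detected symmetry can augment a transition dataset (our illustration; the reflection transform here is an assumed example, not one detected by the paper's method):

```python
# If the dynamics are invariant under a transform g of the state-action
# space, each observed transition (s, a, s') yields a synthetic one
# (g(s), g(a), g(s')). Here g mirrors the state and swaps two actions.
import numpy as np

def reflect(s: np.ndarray, a: int):
    """Example transform: mirror the state, flip a left/right action."""
    return -s, {0: 1, 1: 0}.get(a, a)

def augment(transitions):
    """Append the g-image of every (s, a, s') transition."""
    augmented = list(transitions)
    for s, a, s_next in transitions:
        gs, ga = reflect(s, a)
        gs_next, _ = reflect(s_next, a)   # transform the next state too
        augmented.append((gs, ga, gs_next))
    return augmented

data = [(np.array([1.0, 0.5]), 0, np.array([0.8, 0.6]))]
print(augment(data))
```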
arXiv Detail & Related papers (2021-11-19T16:12:30Z)
- Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes [65.91730154730905]
In applications of offline reinforcement learning to observational data, such as in healthcare or education, a general concern is that observed actions might be affected by unobserved factors.
Here we tackle this by considering off-policy evaluation in a partially observed Markov decision process (POMDP).
We extend the framework of proximal causal inference to our POMDP setting, providing a variety of settings where identification is made possible.
arXiv Detail & Related papers (2021-10-28T17:46:14Z)
- On Contrastive Representations of Stochastic Processes [53.21653429290478]
Learning representations of processes is an emerging problem in machine learning.
We show that our methods are effective for learning representations of periodic functions, 3D objects and dynamical processes.
arXiv Detail & Related papers (2021-06-18T11:00:24Z)
- Plannable Approximations to MDP Homomorphisms: Equivariance under Actions [72.30921397899684]
We introduce a contrastive loss function that enforces action equivariance on the learned representations.
We prove that when our loss is zero, we have a homomorphism of a deterministic Markov Decision Process.
We show experimentally that for deterministic MDPs, the optimal policy in the abstract MDP can be successfully lifted to the original MDP.
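A toy sketch of a contrastive, action-equivariant loss in this spirit (our illustration; fixed linear maps stand in for the learned encoder and per-action latent transition):

```python
# Encoder phi and action model psi are trained so that phi(s) + psi(a)
# lands near phi(s'), while a negative sample is pushed away by a hinge.
import numpy as np

rng = np.random.default_rng(3)
d_state, d_latent, n_actions, margin = 6, 2, 3, 1.0

W_phi = rng.normal(size=(d_latent, d_state))   # stand-in encoder
psi = rng.normal(size=(n_actions, d_latent))   # per-action latent shift

def loss(s, a, s_next, s_neg):
    z, z_next, z_neg = W_phi @ s, W_phi @ s_next, W_phi @ s_neg
    positive = np.sum((z + psi[a] - z_next) ** 2)            # equivariance
    negative = max(0.0, margin - np.linalg.norm(z + psi[a] - z_neg))
    return positive + negative

s, s_next, s_neg = rng.normal(size=(3, d_state))
print(loss(s, 1, s_next, s_neg))
```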
arXiv Detail & Related papers (2020-02-27T08:29:10Z)