Act-Then-Measure: Reinforcement Learning for Partially Observable
Environments with Active Measuring
- URL: http://arxiv.org/abs/2303.08271v1
- Date: Tue, 14 Mar 2023 23:22:32 GMT
- Title: Act-Then-Measure: Reinforcement Learning for Partially Observable
Environments with Active Measuring
- Authors: Merlijn Krale, Thiago D. Simão, Nils Jansen
- Abstract summary: We study Markov decision processes (MDPs), where agents have direct control over when and how they gather information.
In these models, actions consist of two components: a control action that affects the environment, and a measurement action that affects what the agent can observe.
We introduce the act-then-measure (ATM) heuristic, which assumes future state uncertainty can be ignored when choosing control actions; following this heuristic may lead to shorter policy computation times, and we prove a bound on the performance loss it incurs.
- Score: 4.033107207078282
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study Markov decision processes (MDPs), where agents have direct control
over when and how they gather information, as formalized by action-contingent
noiselessly observable MDPs (ACNO-MDPs). In these models, actions consist of
two components: a control action that affects the environment, and a
measurement action that affects what the agent can observe. To solve ACNO-MDPs,
we introduce the act-then-measure (ATM) heuristic, which assumes that we can
ignore future state uncertainty when choosing control actions. We show how
following this heuristic may lead to shorter policy computation times and prove
a bound on the performance loss incurred by the heuristic. To decide whether or
not to take a measurement action, we introduce the concept of measuring value.
We develop a reinforcement learning algorithm based on the ATM heuristic, using
a Dyna-Q variant adapted for partially observable domains, and showcase its
superior performance compared to prior methods on a number of
partially-observable environments.
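To make the act-then-measure idea above concrete, below is a minimal, hedged sketch of an ATM-style learning loop: the agent chooses control actions as if its last known state were the true state, and only pays for a measurement when a crude proxy for the measuring value exceeds the measurement cost. The toy environment (NoisyChainEnv), the Q-gap proxy, and the cost parameter are illustrative assumptions; this is not the paper's exact algorithm, Dyna-Q variant, or bound.

```python
import random
from collections import defaultdict

class NoisyChainEnv:
    """Toy chain MDP: controls 0/1 move left/right, slipping 20% of the time."""
    def __init__(self, n=6):
        self.n, self.state = n, 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, control):
        move = 1 if control == 1 else -1
        if random.random() < 0.2:          # slip: the intended move is flipped
            move = -move
        self.state = max(0, min(self.n - 1, self.state + move))
        return self.state, (1.0 if self.state == self.n - 1 else 0.0)

def act_then_measure(env, episodes=300, horizon=25, gamma=0.95,
                     alpha=0.2, eps=0.1, measure_cost=0.05):
    q = defaultdict(float)                 # Q(last known state, control action)
    model = {}                             # learned point predictions: (s, a) -> s'
    for _ in range(episodes):
        belief = env.reset()               # point belief = last measured state
        for _ in range(horizon):
            # Act: choose the control as if the belief were the true state
            # (the ATM assumption: ignore future state uncertainty here).
            controls = (0, 1)
            control = (random.choice(controls) if random.random() < eps
                       else max(controls, key=lambda a: q[(belief, a)]))
            true_next, reward = env.step(control)

            # Measure: pay the cost only when knowing the next state seems
            # worth it. The Q-gap at the predicted next state is a crude
            # stand-in for the paper's measuring value.
            predicted = model.get((belief, control))
            gap = (float("inf") if predicted is None
                   else abs(q[(predicted, 0)] - q[(predicted, 1)]))
            if gap > measure_cost:
                next_belief = true_next            # measure: observe the state
                reward -= measure_cost
                model[(belief, control)] = true_next
            else:
                next_belief = predicted            # skip measuring: trust the model

            # Q-learning-style update on the (possibly predicted) next belief.
            target = reward + gamma * max(q[(next_belief, 0)], q[(next_belief, 1)])
            q[(belief, control)] += alpha * (target - q[(belief, control)])
            belief = next_belief
    return q

if __name__ == "__main__":
    learned = act_then_measure(NoisyChainEnv())
    print({k: round(v, 2) for k, v in sorted(learned.items())[:6]})
```

In the paper, the measuring value compares the expected return of acting with and without the measurement; the Q-gap used here is only a stand-in to keep the sketch short.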
Related papers
- R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models [50.19174067263255]
We introduce prior preference learning techniques and self-revision schedules to help the agent excel in sparse-reward, continuous-action, goal-based robotic control POMDP environments.
We show that our agents offer improved performance over state-of-the-art models in terms of cumulative rewards, relative stability, and success rate.
arXiv Detail & Related papers (2024-09-21T18:32:44Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Learning Action-based Representations Using Invariance [18.1941237781348]
We introduce action-bisimulation encoding, which learns a multi-step controllability metric that discounts distant state features that are irrelevant for control.
We demonstrate that action-bisimulation pretraining on reward-free, uniformly random data improves sample efficiency in several environments.
arXiv Detail & Related papers (2024-03-25T02:17:54Z)
- Robust Active Measuring under Model Uncertainty [11.087930299233278]
Partial observability and uncertainty are common problems in sequential decision-making.
We present an active-measure heuristic to solve RAM-MDPs efficiently and show that model uncertainty can, counterintuitively, let agents take fewer measurements.
arXiv Detail & Related papers (2023-12-18T14:21:35Z)
- Expert-Guided Symmetry Detection in Markov Decision Processes [0.0]
We propose a paradigm that aims to detect the presence of some transformations of the state-action space for which the MDP dynamics is invariant.
The results show that the model distributional shift is reduced when the dataset is augmented with the data obtained by using the detected symmetries.
arXiv Detail & Related papers (2021-11-19T16:12:30Z)
- Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes [65.91730154730905]
In applications of offline reinforcement learning to observational data, such as in healthcare or education, a general concern is that observed actions might be affected by unobserved factors.
Here we tackle this by considering off-policy evaluation in a partially observed Markov decision process (POMDP).
We extend the framework of proximal causal inference to our POMDP setting, providing a variety of settings where identification is made possible.
arXiv Detail & Related papers (2021-10-28T17:46:14Z)
- Rule-based Shielding for Partially Observable Monte-Carlo Planning [78.05638156687343]
We propose two contributions to Partially Observable Monte-Carlo Planning (POMCP).
The first is a method for identifying unexpected actions selected by POMCP with respect to expert prior knowledge of the task.
The second is a shielding approach that prevents POMCP from selecting unexpected actions.
We evaluate our approach on Tiger, a standard benchmark for POMDPs, and a real-world problem related to velocity regulation in mobile robot navigation.
arXiv Detail & Related papers (2021-04-28T14:23:38Z)
- Instance-Aware Predictive Navigation in Multi-Agent Environments [93.15055834395304]
We propose an Instance-Aware Predictive Control (IPC) approach, which forecasts interactions between agents as well as future scene structures.
We adopt a novel multi-instance event prediction module to estimate the possible interaction among agents in the ego-centric view.
We design a sequential action sampling strategy to better leverage predicted states on both scene-level and instance-level.
arXiv Detail & Related papers (2021-01-14T22:21:25Z)
- Exploiting Submodular Value Functions For Scaling Up Active Perception [60.81276437097671]
In active perception tasks, an agent aims to select sensory actions that reduce uncertainty about one or more hidden variables.
Partially observable Markov decision processes (POMDPs) provide a natural model for such problems.
As the number of sensors available to the agent grows, the computational cost of POMDP planning grows exponentially (a greedy-selection sketch of the submodularity idea follows this list).
arXiv Detail & Related papers (2020-09-21T09:11:36Z)
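The last entry above motivates scaling active perception by exploiting submodular value functions. As a hedged illustration of that general idea (not the cited paper's method), the sketch below greedily adds the sensory action with the largest marginal reduction in expected belief entropy, avoiding enumeration of all sensor subsets; the toy belief, the noiseless sensors, and every name are illustrative assumptions.

```python
import math
from itertools import product

def entropy(belief):
    """Shannon entropy (in bits) of a belief given as state -> probability."""
    return -sum(p * math.log(p, 2) for p in belief.values() if p > 0)

def expected_posterior_entropy(belief, sensors):
    """Expected belief entropy after reading the chosen (noiseless) sensors."""
    groups = {}
    for state, p in belief.items():
        obs = tuple(sensor[state] for sensor in sensors)   # joint observation
        groups.setdefault(obs, {})[state] = p
    total = 0.0
    for bucket in groups.values():
        mass = sum(bucket.values())
        total += mass * entropy({s: p / mass for s, p in bucket.items()})
    return total

def greedy_sensor_selection(belief, sensors, budget):
    """Greedily pick up to `budget` sensors by marginal uncertainty reduction."""
    chosen_idx = []
    for _ in range(min(budget, len(sensors))):
        candidates = [i for i in range(len(sensors)) if i not in chosen_idx]
        best = min(candidates, key=lambda i: expected_posterior_entropy(
            belief, [sensors[j] for j in chosen_idx + [i]]))
        chosen_idx.append(best)
    return chosen_idx

if __name__ == "__main__":
    # Hidden state = (row, col) on a 2x2 grid. Sensor 0 reveals the row,
    # sensor 1 the column, sensor 2 duplicates the row information.
    states = list(product((0, 1), repeat=2))
    uniform = {s: 0.25 for s in states}
    sensors = [{s: s[0] for s in states},
               {s: s[1] for s in states},
               {s: ("a", "b")[s[0]] for s in states}]
    print(greedy_sensor_selection(uniform, sensors, 2))    # expected: [0, 1]
```

For monotone submodular objectives, this greedy rule carries the classic near-optimality guarantee, which is why it scales where exact planning over sensor subsets does not.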
This list is automatically generated from the titles and abstracts of the papers on this site.