On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games
- URL: http://arxiv.org/abs/2403.00993v2
- Date: Mon, 27 May 2024 22:19:40 GMT
- Title: On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games
- Authors: Awni Altabaa, Zhuoran Yang
- Abstract summary: In a sequential decision-making problem, the information structure is the description of how events in the system occurring at different points in time affect each other.
In contrast to classical reinforcement learning models such as MDPs and POMDPs, real-world sequential decision-making problems typically involve a complex and time-varying interdependence of system variables.
We formalize a novel reinforcement learning model which explicitly represents the information structure.
- Score: 55.2480439325792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a sequential decision-making problem, the information structure is the description of how events in the system occurring at different points in time affect each other. Classical models of reinforcement learning (e.g., MDPs, POMDPs) assume a simple and highly regular information structure, while more general models like predictive state representations do not explicitly model the information structure. By contrast, real-world sequential decision-making problems typically involve a complex and time-varying interdependence of system variables, requiring a rich and flexible representation of information structure. In this paper, we formalize a novel reinforcement learning model which explicitly represents the information structure. We then use this model to carry out an information-structural analysis of the statistical hardness of general sequential decision-making problems, obtaining a characterization via a graph-theoretic quantity of the DAG representation of the information structure. We prove an upper bound on the sample complexity of learning a general sequential decision-making problem in terms of its information structure by exhibiting an algorithm achieving the upper bound. This recovers known tractability results and gives a novel perspective on reinforcement learning in general sequential decision-making problems, providing a systematic way of identifying new tractable classes of problems.
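To make the abstract's central object concrete, below is a minimal sketch, assuming networkx, of how an information structure might be encoded as a DAG over time-indexed system variables for a toy POMDP-like system. The node names, the unrolling horizon, and the printed quantity are illustrative assumptions; they do not reproduce the paper's formal definitions or its graph-theoretic hardness measure.

```python
# Minimal sketch: an information structure as a DAG over time-indexed variables.
# This is an illustration only, not the paper's formal construction.
import networkx as nx

T = 3
G = nx.DiGraph()
for t in range(T):
    s, o, a, s_next = f"s{t}", f"o{t}", f"a{t}", f"s{t+1}"
    G.add_edge(s, o)        # the latent state generates the observation
    G.add_edge(o, a)        # the agent acts on the current observation only
    G.add_edge(s, s_next)   # state-to-state transition
    G.add_edge(a, s_next)   # the action influences the next state

assert nx.is_directed_acyclic_graph(G)

# An illustrative quantity (not the paper's hardness measure): at each time t,
# which unobserved past variables still have a directed path into the future?
observed = {v for v in G if v.startswith(("o", "a"))}
for t in range(1, T + 1):
    future = {v for v in G if int(v[1:]) >= t}
    past_unobserved = set(G) - observed - future
    relevant = sorted(v for v in past_unobserved
                      if any(nx.has_path(G, v, w) for w in future))
    print(f"t={t}: unobserved past variables affecting the future: {relevant}")
```

Changing the edge set, for example letting actions depend on the full observation history or on shared signals between agents, changes which past variables remain relevant; this kind of structural variation is what the paper's DAG representation is meant to capture.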
Related papers
- Structure Learning and Parameter Estimation for Graphical Models via Penalized Maximum Likelihood Methods [0.0]
In the thesis, we consider two different types of PGMs: Bayesian networks (BNs), which are static, and continuous-time Bayesian networks, which, as the name suggests, have a temporal component.
We are interested in recovering their true structure, which is the first step in learning any PGM.
arXiv Detail & Related papers (2023-01-30T20:26:13Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Stateful Offline Contextual Policy Evaluation and Learning [88.9134799076718]
We study off-policy evaluation and learning from sequential data.
We formalize the relevant causal structure of problems such as dynamic personalized pricing.
We show improved out-of-sample policy performance in this class of relevant problems.
arXiv Detail & Related papers (2021-10-19T16:15:56Z)
- Meta-learning using privileged information for dynamics [66.32254395574994]
We extend the Neural ODE Process model to use additional information within the Learning Using Privileged Information setting.
We validate our extension with experiments showing improved accuracy and calibration on simulated dynamics tasks.
arXiv Detail & Related papers (2021-04-29T12:18:02Z)
- Learning Structured Latent Factors from Dependent Data: A Generative Model Framework from Information-Theoretic Perspective [18.88255368184596]
We present a novel framework for learning generative models with various underlying structures in the latent space.
Our model provides a principled approach to learn a set of semantically meaningful latent factors that reflect various types of desired structures.
arXiv Detail & Related papers (2020-07-21T06:59:29Z)
- Structural Landmarking and Interaction Modelling: on Resolution Dilemmas in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose "SLIM", an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z)
- Structure learning for CTBN's via penalized maximum likelihood methods [2.997206383342421]
We study the structure learning problem, a more challenging task for which existing research is limited; a generic penalized-likelihood sketch follows after this list.
We prove that our algorithm, under mild regularity conditions, recognizes the dependence structure of the graph with high probability.
arXiv Detail & Related papers (2020-06-13T14:28:19Z)
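Two of the entries above (the first and the last) concern penalized maximum-likelihood structure learning. As a loosely related, generic illustration, the following sketch uses scikit-learn's graphical lasso, an L1-penalized Gaussian likelihood, to recover a sparse dependence structure. It is not the method of either paper (those target Bayesian networks and CTBNs); the problem size, ground-truth graph, and threshold are arbitrary choices for the example.

```python
# Generic penalized-likelihood structure recovery via the graphical lasso.
# Illustration only; the related papers study BNs/CTBNs, not Gaussian graphical models.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)

# Ground-truth sparse precision matrix over 5 variables (a chain of dependencies).
prec = np.eye(5)
for i in range(4):
    prec[i, i + 1] = prec[i + 1, i] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)

# Fit an L1-penalized Gaussian likelihood; the penalty strength is chosen by CV.
model = GraphicalLassoCV().fit(X)
est_prec = model.precision_

# Edges are declared where the estimated precision entries are non-negligible.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(est_prec[i, j]) > 1e-2]
print("recovered edges:", edges)
```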
This list is automatically generated from the titles and abstracts of the papers in this site.