Information-Theoretic Abstractions for Planning in Agents with
Computational Constraints
- URL: http://arxiv.org/abs/2005.09611v2
- Date: Tue, 23 Feb 2021 23:29:56 GMT
- Title: Information-Theoretic Abstractions for Planning in Agents with
Computational Constraints
- Authors: Daniel T. Larsson and Dipankar Maity and Panagiotis Tsiotras
- Abstract summary: We show how a path-planning problem in an environment can be systematically approximated by solving a sequence of easier-to-solve problems on abstractions of the original space.
A numerical example is presented to show the utility of the approach and to corroborate the theoretical findings.
- Score: 16.565205172451662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we develop a framework for path-planning on abstractions that
are not provided to the agent a priori but instead emerge as a function of the
available computational resources. We show how a path-planning problem in an
environment can be systematically approximated by solving a sequence of
easier-to-solve problems on abstractions of the original space. The properties of the
problem are analyzed, and a number of theoretical results are presented and
discussed. A numerical example is presented to show the utility of the approach
and to corroborate the theoretical findings. We conclude by providing a
discussion detailing the connections of the proposed approach to anytime
algorithms and bounded rationality.
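As a rough illustration of the idea in the abstract, the sketch below (a minimal example, not the authors' algorithm) plans on a sequence of progressively finer grid abstractions of the same environment. The grid world, the optimistic coarsening rule, and the chosen resolutions are assumptions made only for this example.

```python
# Minimal sketch: approximate a fine-resolution path-planning problem by a
# sequence of cheaper problems on coarser abstractions of the same grid.
# (Illustrative assumptions throughout; not the paper's formulation.)
import heapq

def shortest_path_len(size, blocked, start, goal):
    """Dijkstra on a 4-connected size x size grid; returns the number of steps."""
    dist, pq = {start: 0}, [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < size and 0 <= nc < size and (nr, nc) not in blocked:
                if d + 1 < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + 1
                    heapq.heappush(pq, (d + 1, (nr, nc)))
    return None

def coarsen(blocked, size, factor):
    """Optimistic abstraction: a coarse cell is free if any of its fine cells is free."""
    coarse_size = size // factor
    coarse_blocked = set()
    for R in range(coarse_size):
        for C in range(coarse_size):
            children = [(R * factor + i, C * factor + j)
                        for i in range(factor) for j in range(factor)]
            if all(cell in blocked for cell in children):
                coarse_blocked.add((R, C))
    return coarse_size, coarse_blocked

# Fine 16x16 world: a wall along column 8 with a single gap at row 12.
SIZE = 16
blocked = {(r, 8) for r in range(SIZE) if r != 12}
start, goal = (0, 0), (SIZE - 1, SIZE - 1)

# Solve a sequence of progressively finer (and more expensive) abstractions.
for factor in (4, 2, 1):
    csize, cblocked = coarsen(blocked, SIZE, factor)
    steps = shortest_path_len(csize, cblocked,
                              (start[0] // factor, start[1] // factor),
                              (goal[0] // factor, goal[1] // factor))
    # Scale coarse step counts back to the fine grid for a rough cost estimate.
    print(f"{csize}x{csize} abstraction: {steps} steps (~{steps * factor} fine-grid steps)")
```

Coarser levels are cheaper to solve and yield rough cost estimates; the estimates approach the true fine-grid cost as the resolution, and hence the computation, increases, mirroring the connection to anytime algorithms discussed in the paper.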
Related papers
- Symbolic Parameter Learning in Probabilistic Answer Set Programming [0.16385815610837165]
We propose two algorithms for the task of parameter learning in Probabilistic Answer Set Programming.
The first solves the task using an off-the-shelf constrained optimization solver.
The second is based on an implementation of the Expectation Maximization algorithm.
arXiv Detail & Related papers (2024-08-16T13:32:47Z) - Provable Representation with Efficient Planning for Partial Observable Reinforcement Learning [74.67655210734338]
In most real-world reinforcement learning applications, state information is only partially observable, which breaks the Markov decision process assumption.
We develop a representation-based perspective that leads to a coherent framework and tractable algorithmic approach for practical reinforcement learning from partial observations.
We empirically demonstrate the proposed algorithm can surpass state-of-the-art performance with partial observations across various benchmarks.
arXiv Detail & Related papers (2023-11-20T23:56:58Z) - Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning [83.41487567765871]
Skipper is a model-based reinforcement learning framework.
It automatically decomposes the given task into smaller, more manageable subtasks.
It enables sparse decision-making and focused abstractions on the relevant parts of the environment.
arXiv Detail & Related papers (2023-09-30T02:25:18Z) - Simple Steps to Success: Axiomatics of Distance-Based Algorithmic
Recourse [13.207786673115296]
We present Stepwise Explainable Paths (StEP), an axiomatically justified framework to compute direction-based algorithmic recourse.
StEP offers provable privacy and robustness guarantees, and outperforms the state-of-the-art on several established recourse desiderata.
arXiv Detail & Related papers (2023-06-27T15:35:22Z) - A Linear Programming Approach for Resource-Aware Information-Theoretic
Tree Abstractions [16.565205172451662]
We present an integer linear programming formulation for the problem of obtaining task-relevant, multi-resolution, environment abstractions for resource-constrained autonomous agents.
We show how hierarchical tree structures, signal encoders, and information-theoretic methods for signal compression can be unified under a common theme.
It is shown that the resulting information-theoretic abstraction problem over the space of multi-resolution trees can be formulated as an integer linear programming (ILP) problem (see the illustrative sketch after this list).
arXiv Detail & Related papers (2022-08-08T15:44:09Z) - Geometric Methods for Sampling, Optimisation, Inference and Adaptive
Agents [102.42623636238399]
We identify fundamental geometric structures that underlie the problems of sampling, optimisation, inference and adaptive decision-making.
We derive algorithms that exploit these geometric structures to solve these problems efficiently.
arXiv Detail & Related papers (2022-03-20T16:23:17Z) - Leveraging Unlabeled Data for Entity-Relation Extraction through
Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs semantic loss which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z) - Information-Theoretic Abstractions for Resource-Constrained Agents via
Mixed-Integer Linear Programming [11.247580943940916]
We present a mixed-integer linear programming formulation for the problem of obtaining task-relevant, multi-resolution, graph abstractions for resource-constrained agents.
A non-trivial numerical example is presented to demonstrate the utility in employing our approach to obtain hierarchical tree abstractions for resource-limited agents.
arXiv Detail & Related papers (2021-02-19T16:34:47Z) - Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z) - A General Framework for Consistent Structured Prediction with Implicit
Loss Embeddings [113.15416137912399]
We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
We study a large class of loss functions that implicitly defines a suitable geometry on the problem.
When dealing with output spaces with infinite cardinality, a suitable implicit formulation of the estimator is shown to be crucial.
arXiv Detail & Related papers (2020-02-13T10:30:04Z)
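As referenced in the entry on resource-aware information-theoretic tree abstractions above, the selection of a multi-resolution tree can be posed as an integer linear program. The toy sketch below is an assumed illustration of such a formulation, not that paper's exact model: it prunes a depth-2 quadtree over a 4x4 grid so that every fine cell is represented by exactly one selected node, the number of selected nodes respects a resource budget, and an invented task-relevance score is maximized. It uses the open-source PuLP package (`pip install pulp`).

```python
# Toy ILP for multi-resolution tree abstraction (illustrative assumptions only).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

# Fine cells of a 4x4 grid and an assumed task-relevance score per cell:
# the top-left quadrant (e.g. near a hypothetical goal) is most relevant.
cells = [(r, c) for r in range(4) for c in range(4)]
relevance = {cell: (1.0 if cell[0] < 2 and cell[1] < 2 else 0.1) for cell in cells}

# Tree nodes and the set of fine cells each one covers:
# the root, four 2x2 quadrants, and the 16 fine cells themselves.
cover = {"root": set(cells)}
for q, (r0, c0) in enumerate([(0, 0), (0, 2), (2, 0), (2, 2)]):
    cover[f"quad{q}"] = {(r0 + i, c0 + j) for i in range(2) for j in range(2)}
for (r, c) in cells:
    cover[f"cell_{r}_{c}"] = {(r, c)}

def info(node):
    """Assumed information retained by a node: full relevance at cell
    resolution, half of it once cells are aggregated into a coarser node."""
    total = sum(relevance[cell] for cell in cover[node])
    return total if len(cover[node]) == 1 else 0.5 * total

budget = 7  # maximum number of leaves in the abstracted tree (resource limit)

prob = LpProblem("multires_tree_abstraction", LpMaximize)
x = {n: LpVariable(f"x_{n}", cat="Binary") for n in cover}
prob += lpSum(info(n) * x[n] for n in cover)                     # retained information
prob += lpSum(x[n] for n in cover) <= budget                     # resource budget
for cell in cells:                                               # every fine cell is
    prob += lpSum(x[n] for n in cover if cell in cover[n]) == 1  # covered exactly once
prob.solve(PULP_CBC_CMD(msg=False))

print("selected abstraction:", sorted(n for n in cover if x[n].value() > 0.5))
```

With a budget of seven nodes, the solver refines the relevant quadrant down to individual cells and keeps the remaining quadrants coarse, which is the qualitative behavior a task-relevant, resource-constrained abstraction should exhibit.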