Representation Learning for Context-Dependent Decision-Making
- URL: http://arxiv.org/abs/2205.05820v1
- Date: Thu, 12 May 2022 01:06:57 GMT
- Title: Representation Learning for Context-Dependent Decision-Making
- Authors: Yuzhen Qin, Tommaso Menara, Samet Oymak, ShiNung Ching, and Fabio
Pasqualetti
- Abstract summary: We study representation learning in the sequential decision-making scenario with contextual changes.
We propose an online algorithm that is able to learn and transfer context-dependent representations.
- Score: 22.16801879707937
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans are capable of adjusting to changing environments flexibly and
quickly. Empirical evidence has revealed that representation learning plays a
crucial role in endowing humans with such a capability. Inspired by this
observation, we study representation learning in the sequential decision-making
scenario with contextual changes. We propose an online algorithm that is able
to learn and transfer context-dependent representations and show that it
significantly outperforms existing algorithms that do not learn
representations adaptively. As a case study, we apply our algorithm to the
Wisconsin Card Sorting Task, a well-established test of human mental
flexibility in sequential decision-making. By comparing our algorithm with
the standard Q-learning and deep Q-learning algorithms, we demonstrate the
benefits of adaptive representation learning.
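For a concrete picture of the case study, here is a minimal sketch of a simplified WCST as a non-stationary decision problem, with tabular Q-learning as the non-adaptive baseline; the rule-switch schedule, horizon, and hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy WCST: each trial, the agent picks which card feature (color, shape,
# or number) to sort by and is rewarded only if its pick matches a hidden
# rule that switches without warning -- a contextual change.
rng = np.random.default_rng(0)
N_RULES = 3            # candidate sorting rules: color / shape / number
SWITCH_EVERY = 50      # hidden rule changes every 50 trials (illustrative)
ALPHA, EPS = 0.1, 0.1  # Q-learning step size and epsilon-greedy rate
T = 1000

Q = np.zeros(N_RULES)  # tabular action values over the three rules
hidden_rule = rng.integers(N_RULES)
correct = 0.0
for t in range(T):
    if t > 0 and t % SWITCH_EVERY == 0:  # context switches
        hidden_rule = rng.integers(N_RULES)
    a = rng.integers(N_RULES) if rng.random() < EPS else int(np.argmax(Q))
    r = float(a == hidden_rule)          # examiner's right/wrong feedback
    Q[a] += ALPHA * (r - Q[a])           # standard Q-learning update
    correct += r

print(f"fraction correct: {correct / T:.2f}")  # accuracy dips after switches
```

Because the tabular learner must unlearn its value estimates after every switch, its accuracy lags each contextual change; the paper's point is that adapting the representation itself recovers faster.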
Related papers
- Feature-Based vs. GAN-Based Learning from Demonstrations: When and Why [50.191655141020505]
This survey provides a comparative analysis of feature-based and GAN-based approaches to learning from demonstrations.
We argue that the dichotomy between feature-based and GAN-based methods is increasingly nuanced.
arXiv Detail & Related papers (2025-07-08T11:45:51Z)
- From Memories to Maps: Mechanisms of In-Context Reinforcement Learning in Transformers
We train a transformer to perform in-context reinforcement learning on a distribution of planning tasks inspired by rodent behavior.
We characterize the learning algorithms that emerge in the model.
We find that memory may serve as a computational resource, storing both raw experience and cached computations to support flexible behavior.
arXiv Detail & Related papers (2025-06-24T14:55:43Z)
- Sliding Puzzles Gym: A Scalable Benchmark for State Representation in Visual Reinforcement Learning [3.8309622155866583]
We introduce the Sliding Puzzles Gym (SPGym), a benchmark that extends the classic 15-tile puzzle with variable grid sizes and observation spaces.
SPGym allows scaling the representation learning challenge while keeping the latent environment dynamics and algorithmic problem fixed.
Our experiments with both model-free and model-based RL algorithms, with and without explicit representation learning components, show that as the representation challenge scales, SPGym effectively distinguishes agents based on their capabilities.
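SPGym's actual API is not reproduced here; the self-contained sketch below only illustrates the benchmark's underlying idea: the puzzle dynamics and goal stay fixed while the grid size (and, in SPGym, the observation space, e.g. images) scales the representation challenge.

```python
import numpy as np

class SlidingPuzzle:
    """Generic n x n sliding puzzle (not SPGym's API). The algorithmic
    problem -- tile dynamics and goal -- is fixed, while n scales the state
    space and hence the representation challenge. Solvability checks are
    omitted for brevity."""

    def __init__(self, n=4, seed=0):
        self.n = n
        self.rng = np.random.default_rng(seed)
        board = np.arange(n * n)               # 0 denotes the blank tile
        self.rng.shuffle(board)
        self.board = board.reshape(n, n)

    def step(self, action):                    # 0..3: up, down, left, right
        r, c = map(int, np.argwhere(self.board == 0)[0])
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        nr, nc = r + dr, c + dc
        if 0 <= nr < self.n and 0 <= nc < self.n:  # slide tile into blank
            self.board[r, c], self.board[nr, nc] = self.board[nr, nc], 0
        solved = bool((self.board.ravel() == np.arange(self.n**2)).all())
        return self.board.copy(), float(solved), solved

env = SlidingPuzzle(n=4)        # n=5, 6, ... scales the challenge
obs, reward, done = env.step(0)
```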
arXiv Detail & Related papers (2024-10-17T21:23:03Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
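A minimal sketch of the core reward-relabeling idea, assuming a binary per-timestep intervention signal; the reward scale and all further algorithmic details are illustrative, not the paper's.

```python
import numpy as np

def rlif_style_rewards(interventions):
    """Relabel a trajectory with an RLIF-style signal: a step costs -1
    exactly when the human supervisor chose to intervene, and 0 otherwise.
    No task reward is used at all; the scale is illustrative."""
    interventions = np.asarray(interventions)
    return np.where(interventions > 0, -1.0, 0.0)

# One trajectory: the supervisor stepped in at t = 3 and t = 4.
print(rlif_style_rewards([0, 0, 0, 1, 1, 0]))  # [ 0.  0.  0. -1. -1.  0.]
```

Minimizing interventions is what lets the learned policy surpass a suboptimal expert: the agent is never told to copy the expert's actions, only to avoid states where the expert felt compelled to take over.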
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- A Quantitative Approach to Predicting Representational Learning and Performance in Neural Networks [5.544128024203989]
A key property of neural networks is how they learn to represent and manipulate input information in order to solve a task.
We introduce a new pseudo-kernel based tool for analyzing and predicting learned representations.
arXiv Detail & Related papers (2023-07-14T18:39:04Z)
- Accelerating exploration and representation learning with offline pre-training [52.6912479800592]
We show that exploration and representation learning can be improved by separately learning two different models from a single offline dataset.
We show that learning a state representation using noise-contrastive estimation and a model of auxiliary reward can significantly improve the sample efficiency on the challenging NetHack benchmark.
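A sketch of a noise-contrastive (InfoNCE-style) objective for state representations, assuming consecutive states form positive pairs and other batch elements serve as negatives; the paper's exact objective and its auxiliary-reward model are not reproduced here.

```python
import numpy as np

def info_nce_loss(z_t, z_next, temperature=0.1):
    """Noise-contrastive sketch: each encoded state z_t should be most
    similar to the encoding of its own successor z_next, relative to the
    other successors in the batch. Shapes: (batch, dim)."""
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    z_next = z_next / np.linalg.norm(z_next, axis=1, keepdims=True)
    logits = z_t @ z_next.T / temperature        # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
print(info_nce_loss(z, z + 0.1 * rng.normal(size=(32, 16))))
```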
arXiv Detail & Related papers (2023-03-31T18:03:30Z)
- Understanding Self-Predictive Learning for Reinforcement Learning [61.62067048348786]
We study the learning dynamics of self-predictive learning for reinforcement learning.
We propose a novel self-predictive algorithm that learns two representations simultaneously.
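A toy illustration of the latent self-prediction setup such analyses concern: an online representation predicts a slowly updated target representation of the next state. The linear encoders and EMA rate are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_s, dim_z, tau = 8, 4, 0.01

W_online = rng.normal(size=(dim_z, dim_s))  # first representation (trained)
W_target = W_online.copy()                  # second representation (EMA copy)
P = np.eye(dim_z)                           # latent transition predictor

def self_predictive_loss(s, s_next):
    """Predict the *target* encoding of s_next from the *online* encoding
    of s. In actual training the target side is stop-gradiented; here there
    are no gradients at all -- this only illustrates the objective."""
    pred = P @ (W_online @ s)
    target = W_target @ s_next              # treated as a constant
    return float(np.sum((pred - target) ** 2))

s, s_next = rng.normal(size=dim_s), rng.normal(size=dim_s)
print(self_predictive_loss(s, s_next))
W_target = (1 - tau) * W_target + tau * W_online  # slow EMA target update
```

Keeping the second (target) representation slow is what prevents the collapse to trivial solutions that this line of analysis studies.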
arXiv Detail & Related papers (2022-12-06T20:43:37Z)
- An Empirical Investigation of Representation Learning for Imitation [76.48784376425911]
Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
arXiv Detail & Related papers (2022-05-16T11:23:42Z)
- Non-Stationary Representation Learning in Sequential Linear Bandits [22.16801879707937]
We study representation learning for multi-task decision-making in non-stationary environments.
We propose an online algorithm that facilitates efficient decision-making by learning and transferring non-stationary representations in an adaptive fashion.
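A small numerical sketch of why a shared low-dimensional representation helps in linear bandits: if expected rewards are x^T B w with a shared matrix B and task-specific weights w, knowing B reduces each task to a k-dimensional estimation problem. The least-squares proxy below is an illustrative assumption; the paper's algorithm is online and tracks non-stationary representations.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks, T = 20, 3, 6, 30  # ambient dim, latent dim (k << d), samples

B = np.linalg.qr(rng.normal(size=(d, k)))[0]  # shared representation

def run_task(B_hat):
    """Per-task least-squares proxy: with the representation in hand, only
    a k-dimensional task weight has to be estimated from T noisy rewards."""
    w = rng.normal(size=k)                      # task-specific parameter
    X = rng.normal(size=(T, d))                 # features of the pulled arms
    y = X @ (B @ w) + 0.1 * rng.normal(size=T)  # noisy linear rewards
    w_hat = np.linalg.lstsq(X @ B_hat, y, rcond=None)[0]
    return np.linalg.norm(B_hat @ w_hat - B @ w)

err_shared = np.mean([run_task(B) for _ in range(n_tasks)])
err_naive = np.mean([run_task(np.eye(d)) for _ in range(n_tasks)])
print(f"error with shared B: {err_shared:.3f}, without: {err_naive:.3f}")
```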
arXiv Detail & Related papers (2022-01-13T06:13:03Z)
- Curious Representation Learning for Embodied Intelligence [81.21764276106924]
Self-supervised representation learning has achieved remarkable success in recent years.
Yet to build truly intelligent agents, we must construct representation learning algorithms that can learn from environments.
We propose a framework, curious representation learning, which jointly learns a reinforcement learning policy and a visual representation model.
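A toy sketch of the coupling this framework proposes: the policy's intrinsic reward is the representation model's current loss, so the agent seeks out observations the model has not yet learned. The linear "representation model" here is a hypothetical stand-in for the paper's visual model.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.1  # toy linear autoencoder-style model

def representation_loss(obs):
    """Reconstruction error of the toy representation model."""
    z = W @ obs                    # encode
    recon = W.T @ z                # decode
    return float(np.mean((recon - obs) ** 2))

def intrinsic_reward(obs):
    # The policy is paid exactly what the representation model still gets
    # wrong, steering data collection toward observations that improve it.
    return representation_loss(obs)

obs = rng.normal(size=8)
print(intrinsic_reward(obs))  # high early in training, shrinking as W learns
```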
arXiv Detail & Related papers (2021-05-03T17:59:20Z)
- Representation Learning by Ranking across multiple tasks [0.0]
We cast the representation learning problem under different tasks as a ranking problem.
By adopting ranking as a unified perspective, representation learning tasks can be solved in a unified manner.
Experiments on various learning tasks, such as classification, retrieval, multi-label learning, and regression, demonstrate the superiority of the representation-learning-by-ranking framework.
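A minimal sketch of the unified ranking view, assuming a pairwise hinge (margin) ranking loss over embeddings; the paper's specific ranking formulation may differ.

```python
import numpy as np

def pairwise_ranking_loss(z_anchor, z_pos, z_neg, margin=1.0):
    """Unified ranking view (sketch): whatever the task, learn the embedding
    so that the item that *should* rank higher for the anchor scores higher.
    Classification, retrieval, and regression each supply the orderings."""
    s_pos = float(z_anchor @ z_pos)  # similarity to the higher-ranked item
    s_neg = float(z_anchor @ z_neg)  # similarity to the lower-ranked item
    return max(0.0, margin - (s_pos - s_neg))  # hinge on the rank violation

rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 16))
print(pairwise_ranking_loss(a, p, n))
```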
arXiv Detail & Related papers (2021-03-28T09:36:36Z)
- Provable Representation Learning for Imitation Learning via Bi-level Optimization [60.059520774789654]
A common strategy in modern learning systems is to learn a representation that is useful for many tasks.
We study this strategy in the imitation learning setting for Markov decision processes (MDPs) where multiple experts' trajectories are available.
We instantiate this framework for the imitation-learning settings of behavior cloning and observation-only imitation.
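A linear-Gaussian toy of the bi-level structure: the inner level fits each expert's policy head on shared features by least squares, and the outer level scores a candidate representation by the resulting cloning loss across experts. Dimensions and the data model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_exp, N = 10, 3, 5, 100  # state dim, feature dim, experts, samples

def inner_solution(Phi, states, actions):
    """Inner level: given representation Phi, each expert's policy head is
    a least-squares fit on the shared features (behavior cloning)."""
    F = states @ Phi.T                       # (N, k) shared features
    return np.linalg.lstsq(F, actions, rcond=None)[0]

def outer_loss(Phi, tasks):
    """Outer level: how well does Phi serve *all* experts once each inner
    problem is solved? The representation is chosen to reduce this."""
    total = 0.0
    for states, actions in tasks:
        w = inner_solution(Phi, states, actions)
        total += np.mean((states @ Phi.T @ w - actions) ** 2)
    return total / len(tasks)

Phi_true = rng.normal(size=(k, d))
tasks = []
for _ in range(n_exp):
    S = rng.normal(size=(N, d))
    w = rng.normal(size=k)
    tasks.append((S, S @ Phi_true.T @ w + 0.05 * rng.normal(size=N)))

# The true shared representation yields a far lower cloning loss than a
# random one, which is what the outer optimization exploits.
print(outer_loss(Phi_true, tasks), outer_loss(rng.normal(size=(k, d)), tasks))
```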
arXiv Detail & Related papers (2020-02-24T21:03:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.