Generalized Decision Transformer for Offline Hindsight Information Matching
- URL: http://arxiv.org/abs/2111.10364v2
- Date: Tue, 23 Nov 2021 13:05:44 GMT
- Title: Generalized Decision Transformer for Offline Hindsight Information Matching
- Authors: Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu
- Abstract summary: We present Generalized Decision Transformer (GDT) for solving any hindsight information matching (HIM) problem.
We show how different choices for the feature function and the anti-causal aggregator lead to novel Categorical DT (CDT) and Bi-directional DT (BDT) for matching different statistics of the future.
- Score: 16.7594941269479
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How to extract as much learning signal as possible from each trajectory has been a key problem in reinforcement learning (RL), where sample inefficiency poses
serious challenges for practical applications. Recent works have shown that
using expressive policy function approximators and conditioning on future
trajectory information -- such as future states in hindsight experience replay
or returns-to-go in Decision Transformer (DT) -- enables efficient learning of
multi-task policies, where at times online RL is fully replaced by offline
behavioral cloning, e.g., sequence modeling. We demonstrate that all these approaches perform hindsight information matching (HIM) -- training policies that can output the rest of a trajectory that matches some statistics of future
state information. We present Generalized Decision Transformer (GDT) for
solving any HIM problem, and show how different choices for the feature
function and the anti-causal aggregator not only recover DT as a special case,
but also lead to novel Categorical DT (CDT) and Bi-directional DT (BDT) for
matching different statistics of the future. For evaluating CDT and BDT, we
define offline multi-task state-marginal matching (SMM) and imitation learning
(IL) as two generic HIM problems, propose a Wasserstein distance loss as a
metric for both, and empirically study them on MuJoCo continuous control
benchmarks. CDT, which simply replaces anti-causal summation with anti-causal
binning in DT, enables the first effective offline multi-task SMM algorithm
that generalizes well to unseen and even synthetic multi-modal state-feature
distributions. BDT, which uses an anti-causal second transformer as the
aggregator, can learn to model any statistics of the future and outperforms DT
variants in offline multi-task IL. Our generalized formulations from HIM and
GDT greatly expand the role of powerful sequence modeling architectures in
modern RL.
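As a concrete illustration of the abstract's central idea, here is a minimal sketch (not the authors' implementation; the bin range, bin count, scalar feature, and toy trajectory are assumptions for illustration). It contrasts DT's anti-causal summation, which recovers returns-to-go, with CDT-style anti-causal binning, which yields a categorical distribution over future state features, and compares feature samples with a 1D Wasserstein distance in the spirit of the metric the abstract proposes.

```python
# Minimal sketch of GDT's anti-causal aggregator, assuming a scalar
# state feature in [0, 1] and a toy 5-step trajectory. Summing rewards
# backwards recovers DT's returns-to-go; binning the feature backwards
# gives a CDT-style categorical target over the future.
import numpy as np
from scipy.stats import wasserstein_distance

def returns_to_go(rewards):
    """DT's anti-causal summation: z_t = sum of rewards from step t to T."""
    return np.cumsum(rewards[::-1])[::-1]

def anti_causal_binning(features, n_bins=8, lo=0.0, hi=1.0):
    """CDT-style anti-causal binning.

    Returns an array of shape (T, n_bins); row t is the empirical
    distribution of the scalar feature over steps t..T-1.
    """
    edges = np.linspace(lo, hi, n_bins + 1)
    counts = np.zeros(n_bins)
    z = np.zeros((len(features), n_bins))
    for t in range(len(features) - 1, -1, -1):      # walk backwards in time
        b = np.clip(np.digitize(features[t], edges) - 1, 0, n_bins - 1)
        counts[b] += 1.0
        z[t] = counts / counts.sum()                # normalize to a distribution
    return z

# Toy 5-step trajectory (illustrative values only).
rewards  = np.array([1.0, 0.0, 2.0, 0.0, 1.0])
features = np.array([0.10, 0.40, 0.35, 0.80, 0.90])
print(returns_to_go(rewards))            # [4. 3. 3. 1. 1.]
print(anti_causal_binning(features)[0])  # distribution over the whole future

# For a scalar feature, rollout and target samples can be compared with
# a 1D Wasserstein distance, echoing the metric proposed in the abstract.
target_features = np.array([0.15, 0.45, 0.30, 0.85, 0.95])
print(wasserstein_distance(features, target_features))
```

The point of the sketch is that both hindsight targets come from the same backwards pass over the trajectory; only the aggregation operation changes, which is exactly the degree of freedom GDT exposes (summation gives DT, binning gives CDT).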
Related papers
- RGMDT: Return-Gap-Minimizing Decision Tree Extraction in Non-Euclidean Metric Space [28.273737052758907]
We introduce an upper bound on the return gap between the oracle expert policy and an optimal decision tree policy.
This enables us to recast the DT extraction problem as a novel non-Euclidean clustering problem over the local observation and action value space of each agent.
We also propose the Return-Gap-Minimization Decision Tree (RGMDT) algorithm, which has a surprisingly simple design and integrates with reinforcement learning.
arXiv Detail & Related papers (2024-10-21T21:19:49Z)
- Pre-trained Language Models Improve the Few-shot Prompt Ability of Decision Transformer [10.338170161831496]
Decision Transformer (DT) has emerged as a promising class of algorithms in offline reinforcement learning (RL) tasks.
We introduce the Language model-initialized Prompt Decision Transformer (LPDT), which leverages pre-trained language models for meta-RL tasks and fine-tunes the model using Low-rank Adaptation (LoRA).
Our approach integrates pre-trained language models and RL tasks seamlessly.
arXiv Detail & Related papers (2024-08-02T17:25:34Z)
- Multi-Source and Test-Time Domain Adaptation on Multivariate Signals using Spatio-Temporal Monge Alignment [59.75420353684495]
Machine learning applications on signals such as computer vision or biomedical data often face challenges due to the variability that exists across hardware devices or session recordings.
In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate this variability.
We show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings.
arXiv Detail & Related papers (2024-07-19T13:33:38Z)
- Decision Mamba: A Multi-Grained State Space Model with Self-Evolution Regularization for Offline RL [57.202733701029594]
Decision Mamba is a novel multi-grained state space model with a self-evolving policy learning strategy.
To mitigate overfitting on noisy trajectories, a self-evolving policy is learned via progressive regularization.
The policy evolves by using its own past knowledge to refine the suboptimal actions, thus enhancing its robustness on noisy demonstrations.
arXiv Detail & Related papers (2024-06-08T10:12:00Z)
- Solving Continual Offline Reinforcement Learning with Decision Transformer [78.59473797783673]
Continual offline reinforcement learning (CORL) combines continual learning and offline reinforcement learning.
Existing methods, employing Actor-Critic structures and experience replay (ER), suffer from distribution shifts, low efficiency, and weak knowledge-sharing.
We introduce multi-head DT (MH-DT) and low-rank adaptation DT (LoRA-DT) to mitigate DT's forgetting problem.
arXiv Detail & Related papers (2024-01-16T16:28:32Z)
- Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z)
- Distribution-Aware Continual Test-Time Adaptation for Semantic Segmentation [33.75630514826721]
We propose a distribution-aware tuning (DAT) method to make semantic segmentation CTTA efficient and practical in real-world applications.
DAT adaptively selects and updates two small groups of trainable parameters based on data distribution during the continual adaptation process.
We conduct experiments on two widely-used semantic segmentation CTTA benchmarks, achieving promising performance compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2023-09-24T10:48:20Z)
- Graph Decision Transformer [83.76329715043205]
Graph Decision Transformer (GDT) is a novel offline reinforcement learning approach.
GDT models the input sequence into a causal graph to capture potential dependencies between fundamentally different concepts.
Our experiments show that GDT matches or surpasses the performance of state-of-the-art offline RL methods on image-based Atari and OpenAI Gym.
arXiv Detail & Related papers (2023-03-07T09:10:34Z)
- DBT-DMAE: An Effective Multivariate Time Series Pre-Train Model under Missing Data [16.589715330897906]
Multivariate time series (MTS) suffer from missing-data problems, which lead to degradation or collapse of downstream tasks.
This paper presents a universally applicable MTS pre-training model, DBT-DMAE, to overcome the aforementioned obstacle.
arXiv Detail & Related papers (2022-09-16T08:54:02Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer during fine-tuning of the target model.
Experiments on various real-world datasets show that our method stably improves on standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)