Contrastive Difference Predictive Coding
- URL: http://arxiv.org/abs/2310.20141v2
- Date: Mon, 26 Feb 2024 01:50:57 GMT
- Title: Contrastive Difference Predictive Coding
- Authors: Chongyi Zheng, Ruslan Salakhutdinov, Benjamin Eysenbach
- Abstract summary: We introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events.
We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL.
- Score: 79.74052624853303
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting and reasoning about the future lie at the heart of many
time-series questions. For example, goal-conditioned reinforcement learning can
be viewed as learning representations to predict which states are likely to be
visited in the future. While prior methods have used contrastive predictive
coding to model time series data, learning representations that encode
long-term dependencies usually requires large amounts of data. In this paper,
we introduce a temporal difference version of contrastive predictive coding
that stitches together pieces of different time series data to decrease the
amount of data required to learn predictions of future events. We apply this
representation learning method to derive an off-policy algorithm for
goal-conditioned RL. Experiments demonstrate that, compared with prior RL
methods, ours achieves $2 \times$ median improvement in success rates and can
better cope with stochastic environments. In tabular settings, we show that our
method is about $20 \times$ more sample efficient than the successor
representation and $1500 \times$ more sample efficient than the standard (Monte
Carlo) version of contrastive predictive coding.
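As a rough illustration of the idea (not the authors' exact objective), the sketch below shows a temporal-difference, InfoNCE-style contrastive update for goal-conditioned representations: a Monte Carlo term classifies the observed next state among a batch of candidate goals, and a bootstrapped term distills soft labels from a target critic so that predictions about distant futures can be stitched together from one-step transitions. All names here (`td_infonce_loss`, `phi`, `psi`, `gamma`) are illustrative placeholders, and the exact weighting and normalization used in the paper differ.

```python
# Minimal sketch (assumptions as noted above; not the paper's exact loss) of a
# temporal-difference InfoNCE-style objective for goal-conditioned RL.
import torch
import torch.nn.functional as F

def td_infonce_loss(phi, psi, target_phi, target_psi,
                    s, a, next_s, next_a, goals, gamma=0.99):
    """One contrastive TD update on a batch of B transitions and B candidate goals.

    phi(s, a) -> (B, d) state-action embeddings; psi(g) -> (B, d) goal embeddings.
    For simplicity, goals[i] is assumed to be the next state of transition i.
    """
    logits = phi(s, a) @ psi(goals).T          # (B, B): critic f(s_i, a_i, g_j)

    # Monte Carlo term: classify which candidate goal is the observed next state
    # (the positive pair lies on the diagonal under the assumption above).
    labels = torch.arange(logits.shape[0], device=logits.device)
    mc_term = F.cross_entropy(logits, labels)

    # Bootstrapped (temporal-difference) term: soft labels from a target critic
    # evaluated one step ahead, which lets long-horizon predictions be stitched
    # together from one-step transitions.
    with torch.no_grad():
        next_logits = target_phi(next_s, next_a) @ target_psi(goals).T
        soft_labels = F.softmax(next_logits, dim=1)
    td_term = (-soft_labels * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

    return (1.0 - gamma) * mc_term + gamma * td_term


# Toy usage with linear encoders (illustrative only; the target networks reuse
# the online encoders here, whereas a real implementation would use slow copies).
d_s, d_a, d = 4, 2, 16
phi_net = torch.nn.Linear(d_s + d_a, d)
psi_net = torch.nn.Linear(d_s, d)
phi = lambda s, a: phi_net(torch.cat([s, a], dim=-1))
psi = psi_net

B = 8
s, a = torch.randn(B, d_s), torch.randn(B, d_a)
next_s, next_a = torch.randn(B, d_s), torch.randn(B, d_a)
loss = td_infonce_loss(phi, psi, phi, psi, s, a, next_s, next_a, goals=next_s)
loss.backward()
```

In this toy usage the candidate goals are taken to be the batch of next states, so the positive pair for transition i sits on the diagonal of the logits matrix.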
Related papers
- Parametric Augmentation for Time Series Contrastive Learning [33.47157775532995]
We create positive examples that assist the model in learning robust and discriminative representations.
Usually, preset human intuition directs the selection of relevant data augmentations.
We propose a contrastive learning framework with parametric augmentation, AutoTCL, which can be adaptively employed to support time series representation learning.
arXiv Detail & Related papers (2024-02-16T03:51:14Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- VQ-AR: Vector Quantized Autoregressive Probabilistic Time Series Forecasting [10.605719154114354]
Time series models aim for accurate predictions of the future given the past, where the forecasts are used for important downstream tasks like business decision making.
In this paper, we introduce a novel autoregressive architecture, VQ-AR, which instead learns a discrete set of representations that are used to predict the future.
arXiv Detail & Related papers (2022-05-31T15:43:46Z)
- Time-Series Imputation with Wasserstein Interpolation for Optimal Look-Ahead-Bias and Variance Tradeoff [66.59869239999459]
In finance, imputation of missing returns may be applied prior to training a portfolio optimization model.
There is an inherent trade-off between the look-ahead-bias of using the full data set for imputation and the larger variance in the imputation from using only the training data.
We propose a Bayesian posterior consensus distribution which optimally controls the variance and look-ahead-bias trade-off in the imputation.
arXiv Detail & Related papers (2021-02-25T09:05:35Z)
- $k$-Neighbor Based Curriculum Sampling for Sequence Prediction [22.631763991832862]
Multi-step ahead prediction in language models is challenging due to the discrepancy between training and test-time processes.
We propose Nearest-Neighbor Replacement Sampling -- a curriculum learning-based method that gradually changes an initially deterministic teacher policy.
We report our findings on two language modelling benchmarks and find that the proposed method further improves performance when used in conjunction with scheduled sampling.
arXiv Detail & Related papers (2021-01-22T20:07:29Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
- Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
- Conditional Mutual information-based Contrastive Loss for Financial Time Series Forecasting [12.0855096102517]
We present a representation learning framework for financial time series forecasting.
In this paper, we propose to first learn compact representations from time series data, then use the learned representations to train a simpler model for predicting time series movements.
arXiv Detail & Related papers (2020-02-18T15:24:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.