Using Time-Series Privileged Information for Provably Efficient Learning
of Prediction Models
- URL: http://arxiv.org/abs/2110.14993v1
- Date: Thu, 28 Oct 2021 10:07:29 GMT
- Title: Using Time-Series Privileged Information for Provably Efficient Learning
of Prediction Models
- Authors: Rickard Karlsson, Martin Willbo, Zeshan Hussain, Rahul G. Krishnan,
David Sontag, Fredrik D. Johansson
- Abstract summary: We study prediction of future outcomes with supervised models that use privileged information during learning.
The privileged information comprises samples of a time series observed between the baseline time of prediction and the future outcome.
We show that our approach is generally preferable to classical learning, particularly when data is scarce.
- Score: 6.7015527471908625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study prediction of future outcomes with supervised models that use
privileged information during learning. The privileged information comprises
samples of time series observed between the baseline time of prediction and the
future outcome; this information is available only at training time, which
differs from traditional supervised learning. Our question is when using
this privileged data leads to more sample-efficient learning of models that use
only baseline data for predictions at test time. We give an algorithm for this
setting and prove that when the time series are drawn from a non-stationary
Gaussian-linear dynamical system of fixed horizon, learning with privileged
information is more efficient than learning without it. On synthetic data, we
test the limits of our algorithm and theory, both when our assumptions hold and
when they are violated. On three diverse real-world datasets, we show that our
approach is generally preferable to classical learning, particularly when data
is scarce. Finally, we relate our estimator to a distillation approach both
theoretically and empirically.
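As a rough illustration of the setting (not the paper's exact estimator), a privileged learner can fit each one-step transition of a linear-Gaussian system by least squares and compose the fitted maps, while a classical learner regresses the outcome directly on baseline data. All names, dimensions, and noise levels below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n = 3, 4, 50                       # state dim, horizon, training samples

# Ground-truth non-stationary linear-Gaussian dynamics: z_{t+1} = A_t z_t + noise
A = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(T)]

Z = [rng.normal(size=(n, d))]            # baseline covariates z_0
for t in range(T):
    Z.append(Z[-1] @ A[t].T + 0.1 * rng.normal(size=(n, d)))
y = Z[-1][:, 0] + 0.1 * rng.normal(size=n)   # outcome depends on final state z_T

def ols(X, Y):
    """Least-squares solution of min ||X W - Y||^2."""
    return np.linalg.lstsq(X, Y, rcond=None)[0]

# Classical learner: regress the outcome y directly on baseline data z_0
w_classical = ols(Z[0], y)

# Privileged learner: fit each one-step transition from the intermediate
# time series, then compose the fitted maps back to baseline time
W = [ols(Z[t], Z[t + 1]) for t in range(T)]  # z_{t+1} ~ z_t @ W_t
w_step = ols(Z[-1], y)                       # final map z_T -> y
w_privileged = w_step
for Wt in reversed(W):
    w_privileged = Wt @ w_privileged         # W_0 @ ... @ W_{T-1} @ w_step

# Both models predict from baseline data only at test time
x_test = rng.normal(size=(1, d))
pred_c, pred_p = x_test @ w_classical, x_test @ w_privileged
```

The privileged learner spreads estimation over T one-step regressions, each with low noise, which is the intuition behind the paper's sample-efficiency result.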
Related papers
- TimeRAF: Retrieval-Augmented Foundation model for Zero-shot Time Series Forecasting [59.702504386429126]
TimeRAF is a Retrieval-Augmented Forecasting model that enhances zero-shot time series forecasting through retrieval-augmented techniques.
TimeRAF employs an end-to-end learnable retriever to extract valuable information from the knowledge base.
arXiv Detail & Related papers (2024-12-30T09:06:47Z)
- Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization [74.3339999119713]
We develop a wavelet-based tokenizer that allows models to learn complex representations directly in the space of time-localized frequencies.
Our method first scales and decomposes the input time series, then thresholds and quantizes the wavelet coefficients, and finally pre-trains an autoregressive model to forecast coefficients for the forecast horizon.
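The three steps above (scale and decompose, threshold and quantize, tokenize) can be sketched with a one-level Haar transform in NumPy. This is an illustrative toy, not the paper's tokenizer; the threshold and the bin vocabulary are assumptions:

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: averages (approximation) and
    differences (detail) of adjacent pairs."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# Toy series, scaled to zero mean / unit variance before decomposition
series = np.array([1.0, 1.2, 0.9, 1.1, 3.0, 3.2, 2.9, 3.1])
series = (series - series.mean()) / series.std()

approx, detail = haar_decompose(series)

# Threshold small detail coefficients to zero, then quantize all
# coefficients into a discrete vocabulary of token ids
detail[np.abs(detail) < 0.1] = 0.0
bins = np.linspace(-2, 2, 17)   # 17 edges; digitize yields token ids 0..17
tokens = np.digitize(np.concatenate([approx, detail]), bins)
```

An autoregressive model would then be pre-trained to predict these token ids over the forecast horizon.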
arXiv Detail & Related papers (2024-12-06T18:22:59Z)
- Beyond Data Scarcity: A Frequency-Driven Framework for Zero-Shot Forecasting [15.431513584239047]
Time series forecasting is critical in numerous real-world applications.
Traditional forecasting techniques struggle when data is scarce or not available at all.
Recent advancements often leverage large-scale foundation models for such tasks.
arXiv Detail & Related papers (2024-11-24T07:44:39Z)
- Contrastive Difference Predictive Coding [79.74052624853303]
We introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events.
We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL.
arXiv Detail & Related papers (2023-10-31T03:16:32Z)
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- Mixed moving average field guided learning for spatio-temporal data [0.0]
We define a novel Bayesian-temporal embedding and a theory-guided machine learning approach to make ensemble forecasts.
We use Lipschitz predictors to determine fixed-time and any-time PAC bounds in the batch learning setting.
We then test the performance of our learning methodology using linear predictors and data sets simulated from a dependent Ornstein-Uhlenbeck process.
arXiv Detail & Related papers (2023-01-02T16:11:05Z)
- Machine Learning Algorithms for Time Series Analysis and Forecasting [0.0]
Time series data is being used everywhere, from sales records to patients' health evolution metrics.
Various statistical and deep learning models have been considered, notably, ARIMA, Prophet and LSTMs.
Our work can be used by anyone to develop a good understanding of the forecasting process and to identify the state-of-the-art models in use today.
arXiv Detail & Related papers (2022-11-25T22:12:03Z)
- Efficient learning of nonlinear prediction models with time-series privileged information [11.679648862014655]
We show that for prediction in linear-Gaussian dynamical systems, a LuPI learner with access to intermediate time series data is never worse than any unbiased classical learner.
We propose algorithms based on random features and representation learning for the case when this map is unknown.
arXiv Detail & Related papers (2022-09-15T05:56:36Z)
- A Meta-learning Approach to Reservoir Computing: Time Series Prediction with Limited Data [0.0]
We present a data-driven approach to automatically extract an appropriate model structure from experimentally observed processes.
We demonstrate our approach on a simple benchmark problem, where it beats state-of-the-art meta-learning techniques.
arXiv Detail & Related papers (2021-10-07T18:23:14Z)
- Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learned summary features can compete with, and even outperform, LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.