Multivariate Quantile Function Forecaster
- URL: http://arxiv.org/abs/2202.11316v1
- Date: Wed, 23 Feb 2022 05:22:03 GMT
- Title: Multivariate Quantile Function Forecaster
- Authors: Kelvin Kan, François-Xavier Aubet, Tim Januschowski, Youngsuk
Park, Konstantinos Benidis, Lars Ruthotto, Jan Gasthaus
- Abstract summary: MQF$^2$ is a global probabilistic forecasting method constructed using a multivariate quantile function.
We show that our model performs comparably to state-of-the-art methods in terms of single-time-step metrics.
- Score: 15.379236429115913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose Multivariate Quantile Function Forecaster (MQF$^2$), a global
probabilistic forecasting method constructed using a multivariate quantile
function and investigate its application to multi-horizon forecasting. Prior
approaches are either autoregressive, implicitly capturing the dependency
structure across time but exhibiting error accumulation with increasing
forecast horizons, or multi-horizon sequence-to-sequence models, which avoid
error accumulation but typically do not model the dependency structure across
time steps. MQF$^2$ combines the benefits of both approaches
by directly making predictions in the form of a multivariate quantile function,
defined as the gradient of a convex function which we parametrize using
input-convex neural networks. By design, the quantile function is monotone with
respect to the input quantile levels and hence avoids quantile crossing. We
provide two options to train MQF$^2$: with the energy score or with maximum
likelihood. Experimental results on real-world and synthetic datasets show that
our model has comparable performance with state-of-the-art methods in terms of
single time step metrics while capturing the time dependency structure.
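To make the construction concrete, the following is a minimal sketch (not the authors' implementation) of the two ingredients the abstract describes: an input-convex neural network (ICNN) whose gradient with respect to the quantile-level input defines a monotone, non-crossing multivariate quantile map, and a sample-based energy score that can serve as the training loss. Class and function names, layer sizes, and the standard Gaussian base samples are illustrative assumptions.

```python
# Minimal sketch, not the authors' code. Non-negative weights on the hidden
# path plus a convex, non-decreasing activation (softplus) make f(a; h)
# convex in the quantile input a; the gradient of a convex function is a
# monotone map, which is what rules out quantile crossing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, dim, hidden, context_dim, depth=3):
        super().__init__()
        self.Wz = nn.ParameterList(
            [nn.Parameter(0.1 * torch.rand(hidden, hidden)) for _ in range(depth - 1)]
        )
        self.Wa = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(depth)])
        self.Wh = nn.ModuleList(
            [nn.Linear(context_dim, hidden, bias=False) for _ in range(depth)]
        )
        self.w_out = nn.Parameter(torch.rand(hidden))  # kept non-negative below

    def forward(self, a, h):
        # a: (batch, dim) quantile-level input; h: (batch, context_dim) context
        z = F.softplus(self.Wa[0](a) + self.Wh[0](h))
        for Wz, Wa, Wh in zip(self.Wz, self.Wa[1:], self.Wh[1:]):
            z = F.softplus(z @ Wz.clamp(min=0).T + Wa(a) + Wh(h))
        return z @ F.softplus(self.w_out)  # (batch,) convex potential values

def quantile_map(icnn, a, h):
    """Multivariate quantile function: the gradient of the convex potential."""
    a = a.detach().requires_grad_(True)
    return torch.autograd.grad(icnn(a, h).sum(), a, create_graph=True)[0]

def energy_score(samples, y):
    # samples: (m, batch, dim) forecast draws; y: (batch, dim) observations
    m = samples.shape[0]
    term1 = (samples - y).norm(dim=-1).mean(0)
    pairwise = (samples.unsqueeze(0) - samples.unsqueeze(1)).norm(dim=-1)
    term2 = pairwise.sum(dim=(0, 1)) / (2 * m * (m - 1))
    return (term1 - term2).mean()

# Usage: push base samples through the quantile map and score the draws.
dim, hidden, context_dim, batch, m = 24, 64, 32, 8, 10
model = ICNN(dim, hidden, context_dim)
h = torch.randn(batch, context_dim)   # context vector from some encoder
y = torch.randn(batch, dim)           # observed multi-horizon target
base = torch.randn(m, batch, dim)     # Gaussian base samples (an assumption)
samples = torch.stack([quantile_map(model, a, h) for a in base])
energy_score(samples, y).backward()
```

The maximum-likelihood option mentioned in the abstract would additionally require the log-determinant of the map's Jacobian (the Hessian of the potential), which this sketch omits.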
Related papers
- UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose UniTST, a transformer-based model with a unified attention mechanism over flattened patch tokens.
Despite its simple architecture, the model offers compelling performance in our experiments on several time series forecasting datasets.
arXiv Detail & Related papers (2024-06-07T14:39:28Z) - Generative machine learning methods for multivariate ensemble post-processing [2.266704492832475]
We present a novel class of nonparametric data-driven distributional regression models based on generative machine learning.
In two case studies, our generative model shows significant improvements over state-of-the-art methods.
arXiv Detail & Related papers (2022-09-26T09:02:30Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - Deep Non-Crossing Quantiles through the Partial Derivative [0.6299766708197883]
Quantile Regression provides a way to approximate a single conditional quantile.
Minimisation of the QR (pinball) loss does not guarantee non-crossing quantiles (see the loss sketch after this list).
We propose a generic deep learning algorithm for predicting an arbitrary number of quantiles.
arXiv Detail & Related papers (2022-01-30T15:35:21Z) - Learning Quantile Functions without Quantile Crossing for Distribution-free Time Series Forecasting [12.269597033369557]
We propose Incremental (Spline) Quantile Functions, I(S)QF, a flexible and efficient distribution-free quantile estimation framework (a monotone-spline sketch appears after this list).
We also provide a generalization error analysis of our proposed approaches under the sequence-to-sequence setting.
arXiv Detail & Related papers (2021-11-12T06:54:48Z) - Quantum Quantile Mechanics: Solving Stochastic Differential Equations for Generating Time-Series [19.830330492689978]
We propose a quantum algorithm for sampling from a solution of stochastic differential equations (SDEs).
We represent the quantile function for an underlying probability distribution and extract samples as expectation values.
We test the method by simulating the Ornstein-Uhlenbeck process and sampling at times different from the initial point.
arXiv Detail & Related papers (2021-08-06T16:14:24Z) - Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z) - Multi-Task Learning for Multi-Dimensional Regression: Application to Luminescence Sensing [0.0]
A new approach to non-linear regression is to use neural networks, particularly feed-forward architectures with a sufficient number of hidden layers and an appropriate number of output neurons.
We propose multi-task learning (MTL) architectures. These are characterized by multiple branches of task-specific layers, which take as input the output of a common set of shared layers (see the sketch after this list).
To demonstrate the power of this approach for multi-dimensional regression, the method is applied to luminescence sensing.
arXiv Detail & Related papers (2020-07-27T21:23:51Z) - Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
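As referenced in the "Deep Non-Crossing Quantiles through the Partial Derivative" entry above, here is a minimal sketch of the standard quantile (pinball) loss; minimising it independently per level is what fails to prevent quantile crossing. The function name is illustrative.

```python
# Minimal sketch of the quantile (pinball) loss; not any paper's exact code.
import torch

def pinball_loss(y, q_hat, tau):
    # y: (batch,) targets; q_hat: (batch,) predicted tau-quantiles; tau in (0, 1)
    err = y - q_hat
    return torch.maximum(tau * err, (tau - 1) * err).mean()

# Fitting each tau separately gives no guarantee that, e.g.,
# q_hat at tau=0.1 stays below q_hat at tau=0.9: the crossing problem.
```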
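For the I(S)QF entry, here is a minimal sketch of a monotone piecewise-linear spline quantile function; positive slopes make it non-crossing by construction. The parametrization is an illustrative assumption, not the paper's exact construction.

```python
# Minimal sketch of a monotone piecewise-linear spline quantile function.
import torch
import torch.nn.functional as F

def spline_quantile(alpha, q0, deltas, knots):
    # alpha:  (batch,) quantile levels in (0, 1)
    # q0:     (batch,) value of the lowest quantile
    # deltas: (batch, K) unconstrained parameters, made positive via softplus
    # knots:  (K + 1,) fixed knot positions, e.g. torch.linspace(0, 1, K + 1)
    slopes = F.softplus(deltas)          # positive slopes => monotone in alpha
    lo, hi = knots[:-1], knots[1:]
    # overlap of [0, alpha] with each knot interval, clipped to its width
    seg = (alpha.unsqueeze(-1) - lo).clamp(min=0).minimum(hi - lo)
    return q0 + (slopes * seg).sum(-1)
```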
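For the multi-task learning entry, here is a minimal sketch of the shared-trunk, task-specific-branch architecture it describes; all dimensions and names are illustrative assumptions.

```python
# Minimal sketch of a multi-task regression network: a common trunk feeding
# one task-specific branch per output, as the MTL entry above describes.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=16, shared_dim=64, n_tasks=3):
        super().__init__()
        # common set of layers shared by all tasks
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, shared_dim), nn.ReLU(),
            nn.Linear(shared_dim, shared_dim), nn.ReLU(),
        )
        # one branch of task-specific layers per regression target
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(shared_dim, 32), nn.ReLU(), nn.Linear(32, 1))
             for _ in range(n_tasks)]
        )

    def forward(self, x):
        z = self.trunk(x)  # shared representation
        return torch.cat([b(z) for b in self.branches], dim=-1)  # (batch, n_tasks)
```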
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.