Auxiliary Quantile Forecasting with Linear Networks
- URL: http://arxiv.org/abs/2212.02578v1
- Date: Mon, 5 Dec 2022 20:09:32 GMT
- Title: Auxiliary Quantile Forecasting with Linear Networks
- Authors: Shayan Jawed, Lars Schmidt-Thieme
- Abstract summary: We propose a novel multi-task method for quantile forecasting with shared Linear layers.
Our method is based on the Implicit quantile learning approach.
We show that learning auxiliary quantile tasks leads to state-of-the-art performance on deterministic forecasting benchmarks.
- Score: 6.155158115218501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel multi-task method for quantile forecasting with shared
Linear layers. Our method is based on the Implicit quantile learning approach,
where samples from the Uniform distribution $\mathcal{U}(0, 1)$ are
reparameterized to quantile values of the target distribution. We combine the
implicit quantile and input time series representations to directly forecast
multiple quantile estimates for multiple horizons jointly. Prior works have
adopted a Linear layer for the direct estimation of all forecasting horizons in
a multi-task learning setup. We show that, following the same multi-task
intuition of exploiting correlations among forecast horizons, modeling multiple
quantile estimates as auxiliary tasks for each forecast horizon improves
forecast accuracy across the quantile estimates compared to modeling only a
single quantile estimate. We show that learning auxiliary quantile tasks leads
to state-of-the-art performance on deterministic forecasting benchmarks on the
main task of forecasting the 50$^{th}$ percentile estimate.
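As a rough illustration of the approach described in the abstract, the sketch below embeds a sampled quantile level $\tau \sim \mathcal{U}(0, 1)$ with a cosine embedding (as in Implicit Quantile Networks), combines it with the flattened input window, and forecasts all horizons jointly with a single shared Linear layer trained on the pinball (quantile) loss. The class and function names, the elementwise combination, and the embedding size are illustrative assumptions, not the paper's exact architecture or hyperparameters.
```python
# Minimal sketch (not the paper's exact architecture): implicit-quantile
# forecasting with a shared Linear layer. tau ~ U(0, 1) is embedded with a
# cosine embedding and combined with the input window; one Linear head then
# predicts all forecast horizons jointly for that quantile level.
import torch
import torch.nn as nn


class ImplicitQuantileLinearForecaster(nn.Module):
    def __init__(self, lookback: int, horizon: int, n_cos: int = 64):
        super().__init__()
        self.n_cos = n_cos
        # Embed the quantile level tau into the same width as the input window.
        self.tau_embed = nn.Linear(n_cos, lookback)
        # Shared Linear layer: one direct multi-horizon forecast per tau.
        self.head = nn.Linear(lookback, horizon)

    def forward(self, x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback), tau: (batch, 1) sampled from U(0, 1).
        i = torch.arange(1, self.n_cos + 1, device=x.device, dtype=x.dtype)
        phi = torch.cos(torch.pi * i * tau)        # (batch, n_cos)
        phi = torch.relu(self.tau_embed(phi))      # (batch, lookback)
        # Combine quantile and series representations (elementwise product is
        # one common choice; the paper's combination may differ).
        return self.head(x * phi)                  # (batch, horizon)


def pinball_loss(y_hat: torch.Tensor, y: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    # Quantile (pinball) loss, averaged over batch, horizons and sampled taus.
    err = y - y_hat
    return torch.maximum(tau * err, (tau - 1) * err).mean()


if __name__ == "__main__":
    model = ImplicitQuantileLinearForecaster(lookback=96, horizon=24)
    x, y = torch.randn(32, 96), torch.randn(32, 24)
    tau = torch.rand(32, 1)                        # auxiliary quantile tasks
    loss = pinball_loss(model(x, tau), y, tau)
    loss.backward()
    print(float(loss))
```
At inference, the main-task forecast (the 50$^{th}$ percentile) would be obtained by conditioning on $\tau = 0.5$, while the randomly sampled $\tau$ values during training play the role of the auxiliary quantile tasks.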
Related papers
- Semiparametric conformal prediction [79.6147286161434]
Risk-sensitive applications require well-calibrated prediction sets over multiple, potentially correlated target variables.
We treat the scores as random vectors and aim to construct the prediction set accounting for their joint correlation structure.
We report desired coverage and competitive efficiency on a range of real-world regression problems.
arXiv Detail & Related papers (2024-11-04T14:29:02Z) - Stacking for Probabilistic Short-term Load Forecasting [1.6317061277457001]
We introduce both global and local variants of meta-learning.
In the local-learning mode, the meta-model is trained using patterns most similar to the query pattern.
Our findings underscored the superiority of quantile regression forest over its competitors.
arXiv Detail & Related papers (2024-06-15T19:05:49Z) - Private Statistical Estimation of Many Quantiles [0.41232474244672235]
Given a distribution and access to i.i.d. samples, we study the estimation of the inverse of its cumulative distribution function (the quantile function) at specific points.
This work studies the estimation of many statistical quantiles under differential privacy.
arXiv Detail & Related papers (2023-02-14T09:59:56Z) - Learning Quantile Functions without Quantile Crossing for Distribution-free Time Series Forecasting [12.269597033369557]
We propose the Incremental (Spline) Quantile Functions I(S)QF, a flexible and efficient distribution-free quantile estimation framework.
We also provide a generalization error analysis of our proposed approaches under the sequence-to-sequence setting. (A simplified sketch of the non-crossing construction appears after this list.)
arXiv Detail & Related papers (2021-11-12T06:54:48Z) - Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
arXiv Detail & Related papers (2021-10-26T20:41:19Z) - Flexible Model Aggregation for Quantile Regression [92.63075261170302]
Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions.
We investigate methods for aggregating any number of conditional quantile models.
All of the models we consider in this paper can be fit using modern deep learning toolkits.
arXiv Detail & Related papers (2021-02-26T23:21:16Z) - Regularization Strategies for Quantile Regression [8.232258589877942]
We show that minimizing an expected pinball loss over a continuous distribution of quantiles is a good regularizer even when only predicting a specific quantile.
We show that lattice models enable regularizing the predicted distribution to a location-scale family.
arXiv Detail & Related papers (2021-02-09T21:10:35Z) - Quantile Surfaces -- Generalizing Quantile Regression to Multivariate Targets [4.979758772307178]
Our approach is based on an extension of single-output quantile regression (QR) to multivariate targets, called quantile surfaces (QS).
We present a novel two-stage process: in the first stage, we perform a deterministic point forecast (i.e., central tendency estimation).
Subsequently, we model the prediction uncertainty using QS with neural networks called quantile surface regression neural networks (QSNN).
We evaluate our novel approach on synthetic data and two currently researched real-world challenges in two different domains: first, probabilistic forecasting for renewable energy power generation; second, short-term cyclist trajectory forecasting.
arXiv Detail & Related papers (2020-09-29T16:35:37Z) - SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction: multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z) - Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z) - CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus [62.86856923633923]
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
In contrast to previous works, which resorted to hand-crafted search strategies for multiple model detection, we learn the search strategy from data.
For self-supervised learning of the search, we evaluate the proposed algorithm on multi-homography estimation and demonstrate an accuracy that is superior to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-08T17:37:01Z)
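As a companion to the Learning Quantile Functions without Quantile Crossing entry above, here is a simplified illustration of one standard way to avoid quantile crossing: predict the lowest quantile plus non-negative increments over a fixed grid of levels, so the outputs are monotone by construction. The class name, the softplus parameterization, and the fixed grid are illustrative assumptions; the actual I(S)QF method uses incremental spline quantile functions and is more general.
```python
# Toy non-crossing quantile head: the cumulative sum of non-negative
# increments guarantees q_{0.1} <= q_{0.5} <= q_{0.9} by construction.
import torch
import torch.nn as nn


class MonotoneQuantileHead(nn.Module):
    """Predicts n_levels quantiles that cannot cross, for a fixed grid of levels."""

    def __init__(self, in_features: int, n_levels: int):
        super().__init__()
        # One output for the lowest quantile plus one increment per extra level.
        self.proj = nn.Linear(in_features, n_levels)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        raw = self.proj(h)                                      # (batch, n_levels)
        base = raw[:, :1]                                       # lowest quantile
        increments = torch.nn.functional.softplus(raw[:, 1:])   # forced >= 0
        # Cumulative sum of non-negative increments => monotone in the level.
        return torch.cat([base, base + increments.cumsum(dim=-1)], dim=-1)


if __name__ == "__main__":
    head = MonotoneQuantileHead(in_features=32, n_levels=3)  # e.g. 0.1 / 0.5 / 0.9
    q = head(torch.randn(8, 32))
    assert bool((q[:, 1:] >= q[:, :-1]).all())               # no crossing
```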