Hierarchically Coherent Multivariate Mixture Networks
- URL: http://arxiv.org/abs/2305.07089v2
- Date: Mon, 16 Oct 2023 18:37:06 GMT
- Title: Hierarchically Coherent Multivariate Mixture Networks
- Authors: Kin G. Olivares, David Luo, Cristian Challu, Stefania La Vattiata, Max
Mergenthaler, Artur Dubrawski
- Abstract summary: Probabilistic coherent forecasting is tasked with producing forecasts that are consistent across levels of aggregation.
We optimize the networks with a composite likelihood objective, allowing us to capture time series' relationships.
Our approach demonstrates 13.2% average accuracy improvements on most datasets compared to state-of-the-art baselines.
- Score: 11.40498954142061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large collections of time series data are often organized into hierarchies
with different levels of aggregation; examples include product and geographical
groupings. Probabilistic coherent forecasting is tasked with producing forecasts
that are consistent across levels of aggregation. In this study, we propose to augment
neural forecasting architectures with a coherent multivariate mixture output.
We optimize the networks with a composite likelihood objective, allowing us to
capture time series' relationships while maintaining high computational
efficiency. Our approach demonstrates 13.2% average accuracy improvements on
most datasets compared to state-of-the-art baselines. We conduct ablation
studies of the framework components and provide theoretical foundations for
them. To assist related work, the code is available at
https://github.com/Nixtla/neuralforecast.
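The coherence mechanism described in the abstract can be sketched in a few lines: draw bottom-level samples from a mixture distribution, then aggregate them with the hierarchy's summing matrix so that every level of aggregation is consistent by construction. The following NumPy sketch is illustrative only; the mixture parameters and the `sample_coherent` helper are hypothetical, not the paper's API.

```python
import numpy as np

# Toy hierarchy: total = A + B (2 bottom series, 1 aggregate level).
# The summing matrix S maps bottom-level series to all levels.
S = np.array([
    [1, 1],  # total
    [1, 0],  # A
    [0, 1],  # B
])

rng = np.random.default_rng(0)

def sample_coherent(n_samples: int) -> np.ndarray:
    """Draw bottom-level samples from a Gaussian mixture, then
    aggregate with S so forecasts are coherent by construction."""
    # Illustrative mixture parameters (in the paper these would be
    # outputs of the neural forecasting network).
    weights = np.array([0.5, 0.3, 0.2])
    means = np.array([[10.0, 5.0], [12.0, 6.0], [8.0, 4.0]])
    stds = np.array([[1.0, 0.5], [1.5, 0.8], [0.7, 0.4]])

    comp = rng.choice(len(weights), size=n_samples, p=weights)
    bottom = rng.normal(means[comp], stds[comp])  # (n_samples, 2)
    return bottom @ S.T                            # (n_samples, 3)

samples = sample_coherent(1000)
# Coherence holds exactly: total equals A + B for every sample.
assert np.allclose(samples[:, 0], samples[:, 1] + samples[:, 2])
```

Because aggregation is applied to samples rather than enforced post hoc, any distribution over the bottom level (here a Gaussian mixture) yields coherent probabilistic forecasts at every level.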
Related papers
- MGCP: A Multi-Grained Correlation based Prediction Network for Multivariate Time Series [54.91026286579748]
We propose a Multi-Grained Correlations-based Prediction Network.
It simultaneously considers correlations at three levels to enhance prediction performance.
It employs adversarial training with an attention mechanism-based predictor and conditional discriminator to optimize prediction results at coarse-grained level.
arXiv Detail & Related papers (2024-05-30T03:32:44Z)
- RGM: A Robust Generalizable Matching Model [49.60975442871967]
We propose a deep model for sparse and dense matching, termed RGM (Robust Generalist Matching).
To narrow the gap between synthetic training samples and real-world scenarios, we build a new, large-scale dataset with sparse correspondence ground truth.
We are able to mix up various dense and sparse matching datasets, significantly improving the training diversity.
arXiv Detail & Related papers (2023-10-18T07:30:08Z)
- When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting [69.30930115236228]
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
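The "soft consistency" idea can be sketched as a penalty added to the forecast loss rather than a hard coherence constraint: forecasts are pulled toward their hierarchy-implied values. The exact regularizer in PROFHiT differs; the summing matrix and `soft_consistency_penalty` below are hypothetical simplifications.

```python
import numpy as np

# Toy hierarchy: total = A + B.
S = np.array([[1, 1], [1, 0], [0, 1]], dtype=float)

def soft_consistency_penalty(mean_forecasts: np.ndarray, lam: float = 1.0) -> float:
    """Penalize the distance between each forecast and its coherent
    bottom-up reconstruction, instead of enforcing coherence exactly."""
    bottom = mean_forecasts[1:]   # forecasts for the leaf series A, B
    coherent = S @ bottom         # values the hierarchy implies
    return lam * float(np.sum((mean_forecasts - coherent) ** 2))

# A slightly incoherent forecast: total (10.5) != A + B (10.0).
penalty = soft_consistency_penalty(np.array([10.5, 6.0, 4.0]))
# Only the total deviates, so the penalty is (10.5 - 10.0)**2 = 0.25.
```

Training with `loss = nll + penalty` trades off calibration against coherence, which is the "soft" part of the regularization.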
arXiv Detail & Related papers (2023-10-17T20:30:16Z)
- Robust Detection of Lead-Lag Relationships in Lagged Multi-Factor Models [61.10851158749843]
Key insights can be obtained by discovering lead-lag relationships inherent in the data.
We develop a clustering-driven methodology for robust detection of lead-lag relationships in lagged multi-factor models.
arXiv Detail & Related papers (2023-05-11T10:30:35Z)
- MECATS: Mixture-of-Experts for Quantile Forecasts of Aggregated Time Series [11.826510794042548]
We introduce a mixture-of-heterogeneous-experts framework called MECATS.
It simultaneously forecasts the values of a set of time series that are related through an aggregation hierarchy.
Different types of forecasting models can be employed as individual experts so that the form of each model can be tailored to the nature of the corresponding time series.
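A mixture-of-experts quantile forecast can be sketched as gating weights combining per-expert quantile curves (one form of quantile averaging). MECATS's actual gating and expert models are more elaborate; the function and values below are illustrative assumptions.

```python
import numpy as np

def moe_quantile_forecast(expert_quantiles: np.ndarray,
                          gate_logits: np.ndarray) -> np.ndarray:
    """Combine per-expert quantile forecasts with softmax gating weights.
    expert_quantiles: (n_experts, n_quantiles); gate_logits: (n_experts,)."""
    w = np.exp(gate_logits - gate_logits.max())
    w /= w.sum()                 # softmax over experts
    return w @ expert_quantiles  # weighted mix of quantile curves

# Two hypothetical experts forecasting the 10%/50%/90% quantiles.
experts = np.array([[8.0, 10.0, 12.0],
                    [6.0, 10.0, 14.0]])
combined = moe_quantile_forecast(experts, np.array([0.0, 0.0]))
# Equal gate logits -> elementwise average: [7.0, 10.0, 13.0]
```

Because the combination is elementwise over sorted quantile levels, the combined curve remains monotone whenever each expert's curve is.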
arXiv Detail & Related papers (2021-12-22T05:05:30Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- Probabilistic Hierarchical Forecasting with Deep Poisson Mixtures [2.1670528702668648]
We present a novel method capable of accurate and coherent probabilistic forecasts for time series when reliable hierarchical information is present.
We call it the Deep Poisson Mixture Network (DPMN).
It relies on the combination of neural networks and a statistical model for the joint distribution of the hierarchical time series structure.
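For count data, a Poisson mixture over the bottom-level series gives coherent hierarchical forecasts for free, since sums of sampled counts are themselves valid counts. The rates and weights below stand in for network outputs and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_mixture_bottom(rates: np.ndarray, weights: np.ndarray,
                           n: int) -> np.ndarray:
    """Sample bottom-level count forecasts from a mixture of Poissons.
    rates: (n_components, n_bottom); weights: (n_components,)."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.poisson(rates[comp])  # (n, n_bottom) integer counts

rates = np.array([[3.0, 7.0],    # component 1 rates for 2 bottom series
                  [10.0, 2.0]])  # component 2 rates
bottom = poisson_mixture_bottom(rates, np.array([0.6, 0.4]), 5000)
# The aggregate forecast is a deterministic sum of sampled counts,
# so the hierarchy is coherent by construction.
total = bottom.sum(axis=1)
```

The mixture over components is what lets the model capture correlations between bottom-level series that independent Poissons could not.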
arXiv Detail & Related papers (2021-10-25T18:02:03Z)
- Hierarchically Regularized Deep Forecasting [18.539846932184012]
We propose a new approach for hierarchical forecasting based on decomposing the time series along a global set of basis time series.
Unlike past methods, our approach is scalable at inference-time while preserving coherence among the time series forecasts.
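The basis-decomposition idea can be sketched directly: each series is a learned linear combination of a shared global basis, and because aggregation is linear, coherence carries over to the basis weights. The basis and weights below are random placeholders for what the model would learn.

```python
import numpy as np

rng = np.random.default_rng(2)

# Global basis of K time series of length T, shared by every series.
T, K = 24, 4
basis = rng.standard_normal((K, T))

def forecast_from_basis(weights: np.ndarray) -> np.ndarray:
    """Each series is a linear combination of the shared basis."""
    return weights @ basis  # (n_series, T)

# Bottom-level weights for two leaf series (would be learned).
w_bottom = rng.standard_normal((2, K))
y_bottom = forecast_from_basis(w_bottom)

# Aggregation is linear, so the aggregate's weights are just the sum
# of its children's weights: coherence is preserved in weight space.
y_total = forecast_from_basis(w_bottom.sum(axis=0, keepdims=True))
assert np.allclose(y_total, y_bottom.sum(axis=0))
```

This is also why the approach scales at inference time: a forecast at any aggregation level only requires a K-dimensional weight vector, not reconciliation across the whole hierarchy.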
arXiv Detail & Related papers (2021-06-14T17:38:16Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
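A graph learning module that extracts uni-directed relations can be sketched as an antisymmetric score between two node-embedding tables followed by a ReLU, so at most one direction per node pair survives. This is a simplified rendering of the idea; the function name and hyperparameters are assumptions, and the paper's module includes additional components (e.g. top-k sparsification).

```python
import numpy as np

rng = np.random.default_rng(3)

def learn_unidirected_adjacency(E1: np.ndarray, E2: np.ndarray,
                                alpha: float = 3.0) -> np.ndarray:
    """Build a uni-directed adjacency from two node-embedding tables."""
    M1, M2 = np.tanh(alpha * E1), np.tanh(alpha * E2)
    # The score M1 @ M2.T - M2 @ M1.T is antisymmetric, so the ReLU
    # keeps at most one of the two directions for every node pair.
    return np.maximum(0.0, np.tanh(alpha * (M1 @ M2.T - M2 @ M1.T)))

n_nodes, d = 5, 8
A = learn_unidirected_adjacency(rng.standard_normal((n_nodes, d)),
                                rng.standard_normal((n_nodes, d)))
# Uni-directedness: A[i, j] and A[j, i] are never both positive.
assert np.all(A * A.T <= 1e-12)
```

Since the embeddings are trainable parameters, gradients from the forecasting loss shape the graph, which is how the relations are "automatically extracted" rather than specified in advance.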
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
- Anytime Inference with Distilled Hierarchical Neural Ensembles [32.003196185519]
Inference in deep neural networks can be computationally expensive, and networks capable of anytime inference are important in scenarios where the amount of compute or quantity of input data varies over time.
We propose Hierarchical Neural Ensembles (HNE), a novel framework to embed an ensemble of multiple networks in a hierarchical tree structure, sharing intermediate layers.
Our experiments show that, compared to previous anytime inference models, HNE provides state-of-the-art accuracy-compute trade-offs on the CIFAR-10/100 and ImageNet datasets.
arXiv Detail & Related papers (2020-03-03T12:13:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.