HiPerformer: Hierarchically Permutation-Equivariant Transformer for Time
Series Forecasting
- URL: http://arxiv.org/abs/2305.08073v1
- Date: Sun, 14 May 2023 05:11:52 GMT
- Authors: Ryo Umagami, Yu Ono, Yusuke Mukuta, Tatsuya Harada
- Abstract summary: We propose a hierarchically permutation-equivariant model that considers both the relationship among components in the same group and the relationship among groups.
The experiments conducted on real-world data demonstrate that the proposed method outperforms existing state-of-the-art methods.
- Score: 56.95572957863576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is imperative to discern the relationships between multiple time series
for accurate forecasting. In particular, for stock prices, components are often
divided into groups with the same characteristics, and a model that extracts
relationships consistent with this group structure should be effective. Thus,
we propose the concept of hierarchical permutation-equivariance, focusing on
index swapping of components within and among groups, to design a model that
considers this group structure. When the prediction model has hierarchical
permutation-equivariance, the prediction is consistent with the group
relationships of the components. Therefore, we propose a hierarchically
permutation-equivariant model that considers both the relationship among
components in the same group and the relationship among groups. The experiments
conducted on real-world data demonstrate that the proposed method outperforms
existing state-of-the-art methods.
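The property the abstract describes can be made concrete with a toy example. The sketch below (my own illustration, not the authors' architecture) builds a minimal model on input of shape (groups, components, features) that combines a shared per-component map with within-group and across-group mean aggregation, and then checks both levels of the hierarchy: permuting components within a group permutes the output the same way, and permuting whole groups does too.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical model on x of shape (groups, components, features).
# All weights are shared across indices, so the model is equivariant to
# index swaps both within a group and among groups. This is an
# illustrative stand-in, not the HiPerformer architecture itself.
W = rng.standard_normal((4, 4))

def model(x):
    local = x @ W                                        # shared per-component map
    group_ctx = local.mean(axis=1, keepdims=True)        # within-group aggregation
    global_ctx = local.mean(axis=(0, 1), keepdims=True)  # among-group aggregation
    return local + group_ctx + global_ctx

x = rng.standard_normal((3, 5, 4))  # 3 groups, 5 components, 4 features

# Equivariance to swapping components within group 0:
perm = rng.permutation(5)
x_perm = x.copy()
x_perm[0] = x[0][perm]
assert np.allclose(model(x_perm)[0], model(x)[0][perm])

# Equivariance to swapping whole groups:
gperm = rng.permutation(3)
assert np.allclose(model(x[gperm]), model(x)[gperm])
print("hierarchical permutation-equivariance holds for this toy model")
```

Both assertions pass because every operation either acts identically on each index or aggregates symmetrically over it; a model violating either property would tie its predictions to an arbitrary ordering of stocks or groups.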
Related papers
- Approximate learning of parsimonious Bayesian context trees [0.0]
The proposed framework is tested on synthetic and real-world data examples.
It outperforms existing sequence models when fitted to real protein sequences and honeypot computer terminal sessions.
arXiv Detail & Related papers (2024-07-27T11:50:40Z)
- Conformal time series decomposition with component-wise exchangeability [41.94295877935867]
We present a novel use of conformal prediction for time series forecasting that incorporates time series decomposition.
We find that the method provides promising results on well-structured time series, but can be limited by factors such as the decomposition step for more complex data.
arXiv Detail & Related papers (2024-06-24T16:23:30Z)
- Structured Learning of Compositional Sequential Interventions [5.8613343090506556]
We consider sequential treatment regimes where each unit is exposed to combinations of interventions over time.
While standard black-box approaches that map sequences of categorical variables to outputs are applicable, we pose an explicit model for composition, that is, for how the effect of sequential interventions can be isolated into modules.
arXiv Detail & Related papers (2024-06-09T11:36:36Z)
- Factorized Fusion Shrinkage for Dynamic Relational Data [16.531262817315696]
We consider a factorized fusion shrinkage model in which all decomposed factors are dynamically shrunk towards group-wise fusion structures.
The proposed priors enjoy many favorable properties in comparison and clustering of the estimated dynamic latent factors.
We present a structured mean-field variational inference framework that balances optimal posterior inference with computational scalability.
arXiv Detail & Related papers (2022-09-30T21:03:40Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- Commutative Lie Group VAE for Disentanglement Learning [96.32813624341833]
We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data.
A simple model named Commutative Lie Group VAE is introduced to realize the group-based disentanglement learning.
Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.
arXiv Detail & Related papers (2021-06-07T07:03:14Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- Robust Grouped Variable Selection Using Distributionally Robust Optimization [11.383869751239166]
We propose a Distributionally Robust Optimization (DRO) formulation with a Wasserstein-based uncertainty set for selecting grouped variables under perturbations.
We prove probabilistic bounds on the out-of-sample loss and the estimation bias, and establish the grouping effect of our estimator.
We show that our formulation produces an interpretable and parsimonious model that encourages sparsity at a group level.
arXiv Detail & Related papers (2020-06-10T22:32:52Z)
- Group Heterogeneity Assessment for Multilevel Models [68.95633278540274]
Many data sets contain an inherent multilevel structure.
Taking this structure into account is critical for the accuracy and calibration of any statistical analysis performed on such data.
We propose a flexible framework for efficiently assessing differences between the levels of given grouping variables in the data.
arXiv Detail & Related papers (2020-05-06T12:42:04Z)