TreeDRNet: A Robust Deep Model for Long Term Time Series Forecasting
- URL: http://arxiv.org/abs/2206.12106v1
- Date: Fri, 24 Jun 2022 06:53:11 GMT
- Title: TreeDRNet: A Robust Deep Model for Long Term Time Series Forecasting
- Authors: Tian Zhou, Jianqing Zhu, Xue Wang, Ziqing Ma, Qingsong Wen, Liang Sun,
Rong Jin
- Abstract summary: We propose a novel neural network architecture, called TreeDRNet, for more effective long-term forecasting.
Inspired by robust regression, we introduce a doubly residual link structure to make predictions more robust.
Our empirical studies show that TreeDRNet is significantly more effective than state-of-the-art methods.
- Score: 24.832101846728925
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Various deep learning models, especially some of the latest Transformer-based
approaches, have greatly improved the state-of-the-art performance for long-term
time series forecasting. However, those Transformer-based models suffer severe
performance deterioration with prolonged input length, which prevents them from
using extended historical information. Moreover, these methods tend to handle
complex examples in long-term forecasting with increased model complexity, which
often leads to a significant increase in computation and less robust performance
(e.g., overfitting). We propose a novel neural network architecture, called
TreeDRNet, for more effective long-term forecasting. Inspired by robust
regression, we introduce a doubly residual link structure to make predictions
more robust. Built upon the Kolmogorov-Arnold representation theorem, we
explicitly introduce feature selection, model ensemble, and a tree structure to
further utilize the extended input sequence, which improves the robustness and
representation power of TreeDRNet. Unlike previous deep models for sequential
forecasting, TreeDRNet is built entirely on multilayer perceptrons and thus
enjoys high computational efficiency. Our extensive empirical studies show that
TreeDRNet is significantly more effective than state-of-the-art methods,
reducing prediction errors by 20% to 40% for multivariate time series. In
particular, TreeDRNet is over 10 times more efficient than Transformer-based
methods. The code will be released soon.
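The abstract's "doubly residual link structure" is easiest to see in a small code sketch. Below is a minimal PyTorch illustration of the general doubly residual idea as described above: each MLP block emits a backcast that is subtracted from its input and a forecast that is accumulated into the output. The layer widths, block count, and the omission of TreeDRNet's feature selection, model ensemble, and tree structure are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a doubly residual MLP stack (illustrative assumptions only;
# not the exact TreeDRNet architecture).
import torch
import torch.nn as nn


class DoublyResidualBlock(nn.Module):
    def __init__(self, input_len: int, horizon: int, hidden: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(input_len, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.backcast_head = nn.Linear(hidden, input_len)  # reconstructs (part of) the input window
        self.forecast_head = nn.Linear(hidden, horizon)    # this block's contribution to the prediction

    def forward(self, x):
        h = self.body(x)
        return self.backcast_head(h), self.forecast_head(h)


class DoublyResidualStack(nn.Module):
    def __init__(self, input_len: int, horizon: int, n_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            DoublyResidualBlock(input_len, horizon) for _ in range(n_blocks)
        )
        self.horizon = horizon

    def forward(self, x):
        residual = x
        forecast = x.new_zeros(x.size(0), self.horizon)
        for block in self.blocks:
            backcast, block_forecast = block(residual)
            residual = residual - backcast          # residual link 1: remove what this block explained
            forecast = forecast + block_forecast    # residual link 2: accumulate partial forecasts
        return forecast


# Usage: forecast 96 future steps from an input window of 336 past steps.
model = DoublyResidualStack(input_len=336, horizon=96)
y_hat = model(torch.randn(32, 336))                 # shape (32, 96)
```

The two running quantities give the structure its name: one residual stream strips away what each block has already explained, while the other accumulates partial forecasts.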
Related papers
- DRPruning: Efficient Large Language Model Pruning through Distributionally Robust Optimization [61.492590008258986]
Large language models (LLMs) deliver impressive results but face challenges from increasing model sizes and computational costs.
We propose DRPruning, which incorporates distributionally robust optimization to restore balanced performance across domains.
arXiv Detail & Related papers (2024-11-21T12:02:39Z)
- Analysing the Behaviour of Tree-Based Neural Networks in Regression Tasks [3.912345988363511]
This paper endeavours to decode the behaviour of tree-based neural network models in the context of regression challenges.
We extend the application of established models--tree-based CNNs, Code2Vec, and Transformer-based methods--to predict the execution time of source code by parsing it to an AST.
Our proposed dual transformer demonstrates remarkable adaptability and robust performance across diverse datasets.
arXiv Detail & Related papers (2024-06-17T11:47:14Z)
- Forecasting with Hyper-Trees [50.72190208487953]
Hyper-Trees are designed to learn the parameters of time series models.
By relating the parameters of a target time series model to features, Hyper-Trees also address the issue of parameter non-stationarity.
In this novel approach, the trees first generate informative representations from the input features, which a shallow network then maps to the target model parameters.
arXiv Detail & Related papers (2024-05-13T15:22:15Z)
- Explainable Adaptive Tree-based Model Selection for Time Series Forecasting [1.0515439489916734]
Tree-based models have been successfully applied to a wide variety of tasks, including time series forecasting.
Many of them suffer from the overfitting problem, which limits their application in real-world decision-making.
We propose a novel method for the online selection of tree-based models using the TreeSHAP explainability method in the task of time series forecasting.
arXiv Detail & Related papers (2024-01-02T09:40:02Z)
- SETAR-Tree: A Novel and Accurate Tree Algorithm for Global Time Series Forecasting [7.206754802573034]
In this paper, we explore the close connections between TAR models and regression trees.
We introduce a new forecasting-specific tree algorithm that trains global Pooled Regression (PR) models in the leaves.
In our evaluation, the proposed tree and forest models are able to achieve significantly higher accuracy than a set of state-of-the-art tree-based algorithms.
arXiv Detail & Related papers (2022-11-16T04:30:42Z)
- Social Interpretable Tree for Pedestrian Trajectory Prediction [75.81745697967608]
We propose a tree-based method, termed as Social Interpretable Tree (SIT), to address this multi-modal prediction task.
A path in the tree from the root to leaf represents an individual possible future trajectory.
Despite the hand-crafted tree, the experimental results on ETH-UCY and Stanford Drone datasets demonstrate that our method is capable of matching or exceeding the performance of state-of-the-art methods.
arXiv Detail & Related papers (2022-05-26T12:18:44Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Deep Generative model with Hierarchical Latent Factors for Time Series Anomaly Detection [40.21502451136054]
This work presents DGHL, a new family of generative models for time series anomaly detection.
A top-down Convolution Network maps a novel hierarchical latent space to time series windows, exploiting temporal dynamics to encode information efficiently.
Our method outperformed current state-of-the-art models on four popular benchmark datasets.
arXiv Detail & Related papers (2022-02-15T17:19:44Z)
- Complex Event Forecasting with Prediction Suffix Trees: Extended Technical Report [70.7321040534471]
Complex Event Recognition (CER) systems have become popular in the past two decades due to their ability to "instantly" detect patterns on real-time streams of events.
There is a lack of methods for forecasting when a pattern might occur before such an occurrence is actually detected by a CER engine.
We present a formal framework that attempts to address the issue of Complex Event Forecasting.
arXiv Detail & Related papers (2021-09-01T09:52:31Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. soft routing, rather than hard binary decisions (a minimal sketch of soft routing follows after this list).
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1], [3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
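The soft-routing idea in the "Growing Deep Forests" entry above can likewise be sketched briefly. Below is a minimal PyTorch illustration of probabilistic routing in a single soft decision tree: each internal node outputs a sigmoid gate, and a sample's prediction is the probability-weighted mixture of all leaf values. The linear routers, fixed depth, and leaf parameterization are assumptions for illustration, not that paper's exact formulation.

```python
# Minimal sketch of soft routing in a single decision tree (illustrative only).
import torch
import torch.nn as nn


class SoftDecisionTree(nn.Module):
    def __init__(self, n_features: int, depth: int = 3, n_outputs: int = 1):
        super().__init__()
        self.depth = depth
        n_inner, n_leaves = 2 ** depth - 1, 2 ** depth
        # One sigmoid gate per internal node, all computed by a single linear layer.
        self.routers = nn.Linear(n_features, n_inner)
        self.leaves = nn.Parameter(0.1 * torch.randn(n_leaves, n_outputs))

    def forward(self, x):
        gates = torch.sigmoid(self.routers(x))            # (batch, n_inner), nodes in level order
        path_prob = x.new_ones(x.size(0), 1)              # probability of reaching each node at the current level
        node = 0
        for _ in range(self.depth):
            g = gates[:, node:node + path_prob.size(1)]   # gates for this level's nodes
            # Left child keeps prob g, right child keeps (1 - g); children are interleaved.
            path_prob = torch.stack([path_prob * g, path_prob * (1 - g)], dim=2)
            path_prob = path_prob.reshape(x.size(0), -1)
            node += g.size(1)
        return path_prob @ self.leaves                    # (batch, n_outputs), a convex mix of leaf values


# Usage: regression over 8 input features with a depth-3 soft tree.
tree = SoftDecisionTree(n_features=8, depth=3)
pred = tree(torch.randn(16, 8))                           # shape (16, 1)
```

Because every leaf contributes in proportion to its routing probability, the whole tree stays differentiable and can be trained end to end with gradient descent, unlike a tree with hard binary splits.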