Frequency Adaptive Normalization For Non-stationary Time Series Forecasting
- URL: http://arxiv.org/abs/2409.20371v1
- Date: Mon, 30 Sep 2024 15:07:16 GMT
- Title: Frequency Adaptive Normalization For Non-stationary Time Series Forecasting
- Authors: Weiwei Ye, Songgaojun Deng, Qiaosha Zou, Ning Gui
- Abstract summary: Time series forecasting needs to address non-stationary data with evolving trend and seasonal patterns.
To address the non-stationarity, instance normalization has recently been proposed to alleviate the impact of trend with certain statistical measures.
This paper proposes a new instance normalization solution, called frequency adaptive normalization (FAN), which extends instance normalization to handle both dynamic trend and seasonal patterns.
- Score: 7.881136718623066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series forecasting typically needs to address non-stationary data with evolving trend and seasonal patterns. To address the non-stationarity, reversible instance normalization has recently been proposed to alleviate the impact of trend with certain statistical measures, e.g., mean and variance. Although such methods demonstrate improved predictive accuracy, they are limited to expressing basic trends and are incapable of handling seasonal patterns. To address this limitation, this paper proposes a new instance normalization solution, called frequency adaptive normalization (FAN), which extends instance normalization to handle both dynamic trend and seasonal patterns. Specifically, we employ the Fourier transform to identify instance-wise predominant frequency components that cover most non-stationary factors. Furthermore, the discrepancy of those frequency components between inputs and outputs is explicitly modeled as a prediction task with a simple MLP model. FAN is a model-agnostic method that can be applied to arbitrary predictive backbones. We instantiate FAN on four widely used forecasting models as backbones and evaluate their prediction performance on eight benchmark datasets. FAN demonstrates significant gains, achieving 7.76% to 37.90% average improvements in MSE.
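Reading the abstract alone (this is not the authors' reference code), the FAN recipe can be sketched as: remove the top-k instance-wise frequency components from the input window, let the backbone forecast the remaining (roughly stationary) residual, predict the evolution of the removed components with a small MLP, and sum the two. Shapes, the top-k selection rule, and the MLP head below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FANWrapper(nn.Module):
    """Wraps any point-forecast backbone with FAN-style normalization (sketch)."""

    def __init__(self, backbone, in_len, out_len, k=4):
        super().__init__()
        self.backbone = backbone  # assumed to map (batch, in_len) -> (batch, out_len)
        self.k = k                # number of dominant frequencies to remove (assumption)
        # Hypothetical head: predicts the future non-stationary component
        # from the component removed from the input window.
        self.freq_mlp = nn.Sequential(
            nn.Linear(in_len, 128), nn.ReLU(), nn.Linear(128, out_len)
        )

    def _dominant_component(self, x):
        """Keep only the k largest-magnitude frequencies of each instance."""
        spec = torch.fft.rfft(x, dim=-1)                 # (batch, in_len // 2 + 1)
        topk = spec.abs().topk(self.k, dim=-1).indices   # instance-wise dominant bins
        mask = torch.zeros_like(spec, dtype=torch.bool)
        mask.scatter_(-1, topk, True)
        filtered = torch.where(mask, spec, torch.zeros_like(spec))
        return torch.fft.irfft(filtered, n=x.size(-1), dim=-1)

    def forward(self, x):                      # x: (batch, in_len)
        nonstat = self._dominant_component(x)  # instance-wise trend + season
        residual = x - nonstat                 # roughly stationary remainder
        y_res = self.backbone(residual)        # backbone forecasts the residual
        y_nonstat = self.freq_mlp(nonstat)     # MLP forecasts the removed part
        return y_res + y_nonstat               # reassemble the final forecast
```

Any point forecaster mapping a length-in_len window to a length-out_len horizon can stand in for `backbone`, which is what makes the scheme model-agnostic.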
Related papers
- GAS-Norm: Score-Driven Adaptive Normalization for Non-Stationary Time Series Forecasting in Deep Learning [1.642449952957482]
We show how changes in the mean and variance of the input data can disrupt the predictive capability of a deep neural network (DNN).
We introduce GAS-Norm, a novel methodology for adaptive time series normalization and forecasting based on the combination of a Generalized Autoregressive Score (GAS) model and a Deep Neural Network.
Results show that deep forecasting models improve their performance in 21 out of 25 settings when combined with GAS-Norm compared to other normalization methods.
arXiv Detail & Related papers (2024-10-04T21:26:12Z)
- Evolving Multi-Scale Normalization for Time Series Forecasting under Distribution Shifts [20.02869280775877]
We propose a novel model-agnostic Evolving Multi-Scale Normalization (EvoMSN) framework to tackle the distribution shift problem.
We evaluate the effectiveness of EvoMSN in improving the performance of five mainstream forecasting methods on benchmark datasets.
arXiv Detail & Related papers (2024-09-29T14:26:22Z)
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- Post-Hoc Reversal: Are We Selecting Models Prematurely? [13.910702424593797]
We show a phenomenon that we call post-hoc reversal, where performance trends are reversed after applying post-hoc transforms.
Preliminary analyses suggest that these transforms induce reversal by suppressing the influence of mislabeled examples.
We propose post-hoc selection, a simple technique whereby post-hoc metrics inform model development decisions.
arXiv Detail & Related papers (2024-04-11T14:58:19Z)
- Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on a quantized 8-bit ViT, outperforms gradient-based TENT on a full-precision 32-bit ViT, and achieves up to a 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z)
- FreDF: Learning to Forecast in Frequency Domain [56.24773675942897]
Time series modeling is uniquely challenged by the presence of autocorrelation in both historical and label sequences.
We introduce the Frequency-enhanced Direct Forecast (FreDF), which bypasses the complexity of label autocorrelation by learning to forecast in the frequency domain; a minimal loss sketch follows this entry.
arXiv Detail & Related papers (2024-02-04T08:23:41Z)
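A plausible minimal form of FreDF's objective (an assumption on my part; the paper's exact loss and weighting may differ) compares the spectra of the predicted and ground-truth horizons, mixed with a temporal term via an assumed weight `alpha`:

```python
import torch

def frequency_domain_loss(pred, target, alpha=0.5):
    """Mix a spectral error with a plain temporal error (sketch; alpha assumed)."""
    # pred, target: (batch, horizon) forecast and ground-truth sequences
    freq_err = (torch.fft.rfft(pred, dim=-1)
                - torch.fft.rfft(target, dim=-1)).abs().mean()
    time_err = (pred - target).abs().mean()
    return alpha * freq_err + (1 - alpha) * time_err
```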
- Addressing Distribution Shift in Time Series Forecasting with Instance Normalization Flows [36.956983415564274]
We propose a general decoupled formulation for time series forecasting.
We formalize this decoupled formulation as a bi-level optimization problem.
Our method consistently outperforms state-of-the-art baselines on both synthetic and real-world data.
arXiv Detail & Related papers (2024-01-30T06:35:52Z)
- Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps rather than the instantaneous input-output relationships of earlier settings.
We present Diffusion-TracIn that incorporates this temporal dynamics and observe that samples' loss gradient norms are highly dependent on timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift; a minimal sketch follows this entry.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
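Going by the summary above, prediction-time batch normalization amounts to normalizing with statistics computed on the test batch itself rather than the running averages stored during training. A minimal PyTorch sketch (the helper name is mine, not from the paper):

```python
import torch

@torch.no_grad()
def predict_with_batch_stats(model, x):
    """Forward pass that uses the test batch's own BN statistics."""
    model.eval()  # keep dropout and other layers in inference mode
    for m in model.modules():
        # In train mode, BatchNorm normalizes with the current batch's
        # mean/variance instead of the stored running averages. Note this
        # sketch also lets the running stats update; a careful version
        # would snapshot and restore them.
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.train()
    return model(x)
```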