Extended Deep Adaptive Input Normalization for Preprocessing Time Series
Data for Neural Networks
- URL: http://arxiv.org/abs/2310.14720v2
- Date: Thu, 29 Feb 2024 08:30:03 GMT
- Title: Extended Deep Adaptive Input Normalization for Preprocessing Time Series
Data for Neural Networks
- Authors: Marcus A. K. September, Francesco Sanna Passino, Leonie Goldmann,
Anton Hinel
- Abstract summary: We propose the EDAIN layer, a novel adaptive neural layer that learns how to appropriately normalize irregular time series data for a given task in an end-to-end fashion.
Our experiments, conducted using synthetic data, a credit default prediction dataset, and a large-scale limit order book benchmark dataset, demonstrate the superior performance of the EDAIN layer.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data preprocessing is a crucial part of any machine learning pipeline, and it
can have a significant impact on both performance and training efficiency. This
is especially evident when using deep neural networks for time series
prediction and classification: real-world time series data often exhibit
irregularities such as multi-modality, skewness and outliers, and the model
performance can degrade rapidly if these characteristics are not adequately
addressed. In this work, we propose the EDAIN (Extended Deep Adaptive Input
Normalization) layer, a novel adaptive neural layer that learns how to
appropriately normalize irregular time series data for a given task in an
end-to-end fashion, instead of using a fixed normalization scheme. This is
achieved by optimizing its unknown parameters simultaneously with the deep
neural network using back-propagation. Our experiments, conducted using
synthetic data, a credit default prediction dataset, and a large-scale limit
order book benchmark dataset, demonstrate the superior performance of the EDAIN
layer when compared to conventional normalization methods and existing adaptive
time series preprocessing layers.
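As a concrete illustration of the approach, the sketch below implements a simplified adaptive input-normalization layer in PyTorch: learnable per-feature outlier compression, shift, and scale parameters sit in front of the downstream network and are optimized with the task loss via back-propagation. The sublayer structure and parameter names are illustrative assumptions, not the authors' exact EDAIN implementation.

```python
import torch
import torch.nn as nn

class AdaptiveInputNorm(nn.Module):
    """Illustrative adaptive input-normalization layer (EDAIN-style sketch)."""

    def __init__(self, num_features: int):
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(num_features))   # learnable location
        self.scale = nn.Parameter(torch.ones(num_features))    # learnable spread
        self.winsor = nn.Parameter(torch.ones(num_features))   # outlier-compression strength

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features)
        w = self.winsor.abs() + 1e-3        # keep compression strength positive
        x = w * torch.tanh(x / w)           # smooth winsorization of extreme values
        return (x - self.shift) / (self.scale.abs() + 1e-8)

# The layer is prepended to any sequence model and trained end-to-end,
# so the normalization is fit to the task rather than fixed in advance.
model = nn.Sequential(
    AdaptiveInputNorm(num_features=4),
    nn.Flatten(),
    nn.Linear(4 * 10, 1),                   # toy downstream network, length-10 window
)
x = torch.randn(32, 10, 4) * 50.0 + 20.0    # badly scaled, irregular input
loss = model(x).pow(2).mean()
loss.backward()                             # gradients reach shift/scale/winsor
```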
Related papers
- Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Online Evolutionary Neural Architecture Search for Multivariate
Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- MAD: Self-Supervised Masked Anomaly Detection Task for Multivariate Time
Series [14.236092062538653]
Masked Anomaly Detection (MAD) is a general self-supervised learning task for multivariate time series anomaly detection.
MAD randomly masks a portion of the inputs and trains a model to estimate them, improving on the traditional left-to-right next step prediction (NSP) task.
Our experimental results demonstrate that MAD can achieve better anomaly detection rates over traditional NSP approaches.
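A hedged sketch of that masking objective follows; the mask rate, the zero-fill convention, and the requirement that the model return a full-shape reconstruction are placeholder choices rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

def masked_reconstruction_loss(model: nn.Module, x: torch.Tensor,
                               mask_rate: float = 0.15) -> torch.Tensor:
    """Randomly mask entries of a multivariate series and score the model
    on how well it estimates the masked values."""
    mask = torch.rand_like(x) < mask_rate   # Bernoulli mask over all entries
    x_masked = x.masked_fill(mask, 0.0)     # hide the selected positions
    x_hat = model(x_masked)                 # model reconstructs the full series
    return ((x_hat - x)[mask] ** 2).mean()  # penalize errors on masked entries only

# At inference, unusually large reconstruction errors flag anomalous steps.
```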
arXiv Detail & Related papers (2022-05-04T14:55:42Z)
- Bilinear Input Normalization for Neural Networks in Financial
Forecasting [101.89872650510074]
We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-art neural networks and high-frequency data, show significant improvements over other normalization techniques.
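One plausible reading of the bilinear scheme, offered as an assumption rather than the paper's verified formulation, is to standardize along both the temporal and the feature axis and combine the two views with learnable weights:

```python
import torch
import torch.nn as nn

class BilinearNormSketch(nn.Module):
    """Hypothetical reading: standardize along both the time axis and the
    feature axis, then combine the two views with learnable weights."""

    def __init__(self, seq_len: int, num_features: int):
        super().__init__()
        self.w_time = nn.Parameter(torch.ones(seq_len, 1))
        self.w_feat = nn.Parameter(torch.ones(1, num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features)
        z_time = (x - x.mean(dim=1, keepdim=True)) / (x.std(dim=1, keepdim=True) + 1e-8)
        z_feat = (x - x.mean(dim=2, keepdim=True)) / (x.std(dim=2, keepdim=True) + 1e-8)
        return self.w_time * z_time + self.w_feat * z_feat
```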
arXiv Detail & Related papers (2021-09-01T07:52:03Z)
- Time Series is a Special Sequence: Forecasting with Sample Convolution
and Interaction [9.449017120452675]
Time series is a special type of sequence data, a set of observations collected at even intervals of time and ordered chronologically.
Existing deep learning techniques use generic sequence models for time series analysis, which ignore some of its unique properties.
We propose a novel neural network architecture and apply it for the time series forecasting problem, wherein we conduct sample convolution and interaction at multiple resolutions for temporal modeling.
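A minimal sketch of the downsample-and-interact idea appears below; the layer names and the form of the interaction are illustrative simplifications of the proposed architecture.

```python
import torch
import torch.nn as nn

class SplitInteractSketch(nn.Module):
    """Split a series into even/odd sub-samples (two half resolutions),
    let small convolutions exchange information, then interleave back."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv_even = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv_odd = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); time assumed even for simplicity
        even, odd = x[..., ::2], x[..., 1::2]
        even = even + torch.tanh(self.conv_odd(odd))    # odd half informs even half
        odd = odd + torch.tanh(self.conv_even(even))    # even half informs odd half
        out = torch.zeros_like(x)
        out[..., ::2], out[..., 1::2] = even, odd       # interleave to full length
        return out
```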
arXiv Detail & Related papers (2021-06-17T08:15:04Z)
- Adjusting for Autocorrelated Errors in Neural Networks for Time Series
Regression and Forecasting [10.659189276058948]
We learn the autocorrelation coefficient jointly with the model parameters in order to adjust for autocorrelated errors.
For time series regression, large-scale experiments indicate that our method outperforms the Prais-Winsten method.
Results across a wide range of real-world datasets show that our method enhances performance in almost all cases.
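A minimal sketch of the joint-learning idea, assuming squared-error regression over consecutive targets (names and the AR(1) form are placeholders):

```python
import torch
import torch.nn as nn

# Learnable AR(1) autocorrelation coefficient, optimized with the model.
rho = nn.Parameter(torch.tensor(0.0))

def ar1_adjusted_mse(y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Quasi-difference consecutive terms, in the spirit of Prais-Winsten:
    # e_t = (y_t - rho * y_{t-1}) - (y_hat_t - rho * y_hat_{t-1})
    e = (y[1:] - rho * y[:-1]) - (y_hat[1:] - rho * y_hat[:-1])
    return (e ** 2).mean()

# Train model weights and rho together, e.g.:
# optimizer = torch.optim.Adam(list(model.parameters()) + [rho], lr=1e-3)
```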
arXiv Detail & Related papers (2021-01-28T04:25:51Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Improved Predictive Deep Temporal Neural Networks with Trend Filtering [22.352437268596674]
We propose a new prediction framework based on deep neural networks and trend filtering.
We reveal that the predictive performance of deep temporal neural networks improves when the training data is temporally processed by trend filtering.
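As a hedged stand-in for that preprocessing step, the sketch below uses an L2 (Hodrick-Prescott-style) trend filter with a closed-form solution; the paper's exact filter and the penalty weight lam are assumptions here.

```python
import numpy as np

def l2_trend_filter(y: np.ndarray, lam: float = 50.0) -> np.ndarray:
    """Closed-form smoother: minimize ||y - x||^2 + lam * ||D2 @ x||^2,
    where D2 is the second-difference operator (Hodrick-Prescott form)."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)    # (n-2, n) second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Example: extract a smooth trend before training the forecasting network.
y = np.cumsum(np.random.randn(200))
trend = l2_trend_filter(y)
residual = y - trend    # trend and residual can be modeled separately
```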
arXiv Detail & Related papers (2020-10-16T08:29:36Z)
- Optimization Theory for ReLU Neural Networks Trained with Normalization
Layers [82.61117235807606]
The success of deep neural networks is in part due to the use of normalization layers.
Our analysis shows how the introduction of normalization changes the optimization landscape and can enable faster convergence.
arXiv Detail & Related papers (2020-06-11T23:55:54Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural
Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.