Distribution estimation and change-point detection for time series via
DNN-based GANs
- URL: http://arxiv.org/abs/2211.14577v1
- Date: Sat, 26 Nov 2022 14:33:34 GMT
- Title: Distribution estimation and change-point detection for time series via
DNN-based GANs
- Authors: Jianya Lu, Yingjun Mo, Zhijie Xiao, Lihu Xu, Qiuran Yao
- Abstract summary: Generative adversarial networks (GANs) have recently been applied to estimating the distribution of independent and identically distributed data.
In this paper, we use the blocking technique to demonstrate the effectiveness of GANs for estimating the distribution of stationary time series.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have recently been applied to
estimating the distribution of independent and identically distributed data,
with excellent performance. In this paper, we use the blocking technique to
demonstrate the effectiveness of GANs for estimating the distribution of
stationary time series. Theoretically, we obtain a non-asymptotic error bound
for the deep neural network (DNN)-based GAN estimator of the stationary
distribution of the time series. Based on this theoretical analysis, we put
forward an algorithm for detecting change-points in time series. In our first
experiment, we simulate a stationary time series from a multivariate
autoregressive model to test our GAN estimator; in the second, we use the
proposed algorithm to detect the change-point in a time series, and both
perform very well. In the third experiment, we apply the GAN estimator to a
real financial time series, which is not stationary; the results show that the
estimator cannot match the distribution of this series closely, but it does
capture the correct trend.
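The abstract only names the ingredients; as a rough, hypothetical sketch of how the blocking idea and a change-point scan could fit together (PyTorch, a plain BCE-loss GAN rather than the authors' exact estimator and loss, all function names invented), consider:

```python
# Hypothetical sketch (not the paper's exact construction): cut a stationary
# series into length-d blocks, treat the blocks as approximate samples from
# the d-dimensional stationary distribution, and fit a plain BCE-loss GAN.
import torch
import torch.nn as nn

def make_blocks(x: torch.Tensor, d: int) -> torch.Tensor:
    """Split a 1-D series into non-overlapping blocks of length d."""
    n = (len(x) // d) * d
    return x[:n].reshape(-1, d)

class MLP(nn.Sequential):
    def __init__(self, d_in, d_out, width=64):
        super().__init__(nn.Linear(d_in, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, d_out))

def train_gan(blocks, z_dim=8, steps=1000, batch=64, lr=1e-3):
    d = blocks.shape[1]
    G, D = MLP(z_dim, d), MLP(d, 1)
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    for _ in range(steps):
        real = blocks[torch.randint(len(blocks), (batch,))]
        fake = G(torch.randn(batch, z_dim))
        # discriminator step: tell real blocks from generated ones
        loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # generator step: make generated blocks look real
        loss_g = bce(D(fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G, D

def change_point_scan(x, d=5):
    """Crude illustrative scan: fit a GAN to the left segment and flag the
    split where the discriminator separates left and right blocks most."""
    best = (None, -1.0)
    for t in range(len(x) // 4, 3 * len(x) // 4, 10 * d):
        left, right = make_blocks(x[:t], d), make_blocks(x[t:], d)
        _, D = train_gan(left, steps=300)
        with torch.no_grad():
            gap = (torch.sigmoid(D(left)).mean()
                   - torch.sigmoid(D(right)).mean()).abs().item()
        if gap > best[1]:
            best = (t, gap)
    return best  # (estimated change-point, dissimilarity score)
```

Retraining a discriminator at every candidate split, as above, is wasteful; the sketch is only meant to show where a GAN-based distribution estimate can slot into a change-point scan.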
Related papers
- Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis [56.442307356162864]
We study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework.
We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points.
Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function.
arXiv Detail & Related papers (2024-10-03T09:07:13Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- Favour: FAst Variance Operator for Uncertainty Rating [0.034530027457862]
Bayesian Neural Networks (BNN) have emerged as a crucial approach for interpreting ML predictions.
By sampling from the posterior distribution, data scientists may estimate the uncertainty of an inference.
Previous work proposed propagating the first and second moments of the posterior directly through the network.
This method is even slower than sampling, so the propagated variance needs to be approximated.
Our contribution is a more principled variance propagation framework.
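For context on what "propagating the first and second moments" involves, here is a minimal NumPy sketch of the baseline idea the paper improves on (not Favour's operator); it propagates only input uncertainty through one fixed weight sample, under a diagonal-Gaussian, independent-components assumption:

```python
# Minimal illustration of moment propagation (the baseline idea, not
# Favour's operator): push a diagonal Gaussian's mean and variance
# through Linear -> ReLU layers in closed form.
import numpy as np
from scipy.stats import norm

def linear_moments(mu, var, W, b):
    """Affine layer: exact mean/variance under independent inputs."""
    return W @ mu + b, (W ** 2) @ var

def relu_moments(mu, var):
    """Closed-form ReLU moments for x ~ N(mu, var), elementwise."""
    sd = np.sqrt(var) + 1e-12
    a = mu / sd
    mean = mu * norm.cdf(a) + sd * norm.pdf(a)
    second = (mu ** 2 + var) * norm.cdf(a) + mu * sd * norm.pdf(a)
    return mean, np.maximum(second - mean ** 2, 0.0)

rng = np.random.default_rng(0)
mu, var = rng.normal(size=4), np.full(4, 0.5)        # input distribution
W, b = rng.normal(size=(3, 4)) * 0.5, np.zeros(3)    # one weight sample
mu, var = linear_moments(mu, var, W, b)
mu, var = relu_moments(mu, var)
print("output mean:", mu, "output var:", var)
```

A full BNN variant would also propagate the variance of the weights themselves; that is omitted here for brevity.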
arXiv Detail & Related papers (2023-11-21T22:53:20Z)
- Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series [26.77596449192451]
Irregularly sampled time series are ubiquitous in several application domains, leading to sparse, not fully-observed and non-aligned observations.
Standard sequential neural networks (RNNs) and convolutional neural networks (CNNs) assume regular spacing between observation times, posing significant challenges for irregular time series modeling.
We parameterize convolutional layers with kernels that are explicit functions of the irregular observation times, as in the sketch below.
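One illustrative way to realize such kernels (a guess at the general idea, not necessarily the paper's exact layer; all names invented) is to generate the kernel weights from the local observation-time offsets with a small network:

```python
# Illustrative time-parameterized convolution: the kernel weights are
# produced by an MLP from the actual time offsets of the observations
# in the receptive field, so irregular spacing changes the kernel
# instead of being ignored.
import torch
import torch.nn as nn

class TimeParameterizedConv(nn.Module):
    def __init__(self, kernel_size: int, hidden: int = 32):
        super().__init__()
        self.k = kernel_size
        # maps k time offsets -> k kernel weights
        self.weight_net = nn.Sequential(
            nn.Linear(kernel_size, hidden), nn.Tanh(),
            nn.Linear(hidden, kernel_size))

    def forward(self, values: torch.Tensor, times: torch.Tensor) -> torch.Tensor:
        # values, times: (n,) irregularly sampled scalar series
        out = []
        for i in range(len(values) - self.k + 1):
            offsets = times[i:i + self.k] - times[i]   # local time geometry
            w = self.weight_net(offsets)               # data-dependent kernel
            out.append((w * values[i:i + self.k]).sum())
        return torch.stack(out)

layer = TimeParameterizedConv(kernel_size=3)
t = torch.tensor([0.0, 0.4, 1.1, 1.2, 2.7])            # irregular timestamps
x = torch.sin(t)
print(layer(x, t))                                     # (3,) outputs
```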
arXiv Detail & Related papers (2023-08-06T21:10:30Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over a short period, leaving a large gap between what deep models need and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Time Series Anomaly Detection by Cumulative Radon Features [32.36217153362305]
In this work, we argue that shallow features suffice when combined with distribution distance measures.
Our approach models each time series as a high dimensional empirical distribution of features, where each time-point constitutes a single sample.
We show that by parameterizing each time series using cumulative Radon features, we are able to efficiently and effectively model the distribution of normal time series.
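Cumulative Radon features are closely related to sliced quantile embeddings; here is a rough NumPy sketch of how such an embedding can score anomalies (projection count and quantile grid are arbitrary choices, not the paper's settings):

```python
# Rough sketch of cumulative-Radon-style features: each series becomes a
# fixed-size vector of per-projection quantile functions, so Euclidean
# distances between embeddings approximate a sliced distribution distance.
import numpy as np

def cumulative_radon_embedding(samples, thetas, n_quantiles=32):
    """samples: (T, dim) -- one feature vector per time point."""
    proj = samples @ thetas.T                 # (T, n_proj) 1-D projections
    qs = np.linspace(0.0, 1.0, n_quantiles)
    # per-projection quantiles = cumulative view of each 1-D distribution
    return np.quantile(proj, qs, axis=0).ravel()

rng = np.random.default_rng(0)
dim, n_proj = 8, 64
thetas = rng.normal(size=(n_proj, dim))
thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # unit directions

normal = [rng.normal(size=(200, dim)) for _ in range(20)]        # train set
ref = np.mean([cumulative_radon_embedding(s, thetas) for s in normal], axis=0)

test = rng.normal(size=(200, dim)) + 2.0          # shifted -> anomalous
score = np.linalg.norm(cumulative_radon_embedding(test, thetas) - ref)
print("anomaly score:", score)
```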
arXiv Detail & Related papers (2022-02-08T18:58:53Z)
- AdaRNN: Adaptive Learning and Forecasting of Time Series [39.63457842611036]
Time series has wide applications in the real world and is known to be difficult to forecast.
This paper proposes Adaptive RNNs (AdaRNN) to tackle the problem by building an adaptive model that generalizes well on unseen test data.
Experiments on human activity recognition, air quality prediction, and financial analysis show that AdaRNN outperforms the latest methods, improving classification accuracy by 2.6% and significantly reducing RMSE by 9.0%.
arXiv Detail & Related papers (2021-08-10T04:32:04Z)
- Optimization Variance: Exploring Generalization Properties of DNNs [83.78477167211315]
The test error of a deep neural network (DNN) often demonstrates double descent.
We propose a novel metric, optimization variance (OV), to measure the diversity of model updates.
arXiv Detail & Related papers (2021-06-03T09:34:17Z)
- Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
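A toy sketch of the GLM predictive on a scalar-output network: linearize f in the parameters at the MAP estimate and push the parameter Gaussian through the linear model (the posterior covariance below is a made-up diagonal placeholder; a real Laplace approximation would supply it):

```python
# Toy GLM (linearized) predictive: with a Gaussian posterior N(theta*, Sigma)
# over the weights, f(x; theta) ~ f(x; theta*) + J(x)(theta - theta*) gives
# predictive mean f(x; theta*) and variance J(x) Sigma J(x)^T.
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector

model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
n_params = parameters_to_vector(model.parameters()).numel()
sigma = 0.05 * torch.eye(n_params)   # placeholder for a Laplace covariance

def glm_predictive(x):
    """Return (mean, variance) of the linearized predictive at x."""
    model.zero_grad()
    out = model(x).squeeze()
    out.backward()                    # parameter gradients = J(x)
    J = torch.cat([p.grad.flatten() for p in model.parameters()])
    return out.item(), (J @ sigma @ J).item()

mean, var = glm_predictive(torch.tensor([[0.3, -1.2]]))
print(f"GLM predictive: {mean:.3f} +/- {var ** 0.5:.3f}")
```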
arXiv Detail & Related papers (2020-08-19T12:35:55Z)
- Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering [6.445605125467574]
We introduce the Neural Jump ODE (NJ-ODE), which provides a data-driven approach to learning, continuously in time, the conditional expectation of a stochastic process.
We show that our model converges to the $L^2$-optimal online prediction.
We experimentally show that our model outperforms the baselines in more complex learning tasks.
arXiv Detail & Related papers (2020-06-08T16:34:51Z)
- Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
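As a generic flavor of combining learned factors with sum-product message passing (an HMM-style chain sketch, not the paper's architecture; the learned transition table and neural emission score are stand-ins):

```python
# Generic learned factor graph on a chain: the transition factor is a
# table (learned in practice) and the observation factor is a small
# neural net; inference is the standard sum-product forward recursion.
import torch
import torch.nn as nn

S = 3                                                   # latent states
transition = torch.softmax(torch.randn(S, S), dim=1)    # p(x_t | x_{t-1})
emission_net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, S))

def sum_product_forward(obs):
    """Forward sum-product on the chain: returns p(x_t | y_1..y_t)."""
    belief = torch.full((S,), 1.0 / S)
    beliefs = []
    for y in obs:
        # normalized neural score standing in for p(y_t | x_t)
        like = torch.softmax(emission_net(y.view(1, 1)).squeeze(), dim=0)
        belief = like * (transition.T @ belief)   # message passing step
        belief = belief / belief.sum()            # normalize
        beliefs.append(belief)
    return torch.stack(beliefs)

obs = torch.randn(10)
print(sum_product_forward(obs))
```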
arXiv Detail & Related papers (2020-06-05T07:06:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.