Efficient CNN-LSTM based Parameter Estimation of Levy Driven Stochastic
Differential Equations
- URL: http://arxiv.org/abs/2403.04246v1
- Date: Thu, 7 Mar 2024 06:07:31 GMT
- Title: Efficient CNN-LSTM based Parameter Estimation of Levy Driven Stochastic
Differential Equations
- Authors: Shuaiyu Li, Yang Ruan, Changzhou Long, Yuzhong Cheng
- Abstract summary: This study addresses the challenges of parameter estimation for stochastic differential equations driven by non-Gaussian noise.
Previous research highlighted the potential of LSTM networks in estimating parameters of α-stable Lévy-driven SDEs.
We introduce PEnet, a novel CNN-LSTM-based three-stage model that offers an end-to-end approach with superior accuracy and adaptability to varying data structures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study addresses the challenges of parameter estimation for
stochastic differential equations driven by non-Gaussian noise, which are critical to
understanding dynamic phenomena such as price fluctuations and the spread of
infectious diseases. Previous research highlighted the potential of LSTM
networks for estimating parameters of α-stable Lévy-driven SDEs, but faced
limitations including high time complexity and the constraints of the LSTM
chaining property. To mitigate these issues, we introduce PEnet, a novel
CNN-LSTM-based three-stage model that offers an end-to-end approach with
superior accuracy and adaptability to varying data structures, enhanced
inference speed for long sequence observations through initial data feature
condensation by CNN, and high generalization capability, allowing its
application to various complex SDE scenarios. Experiments on synthetic datasets
confirm PEnet's significant advantage in estimating SDE parameters associated
with noise characteristics, establishing it as a competitive method for SDE
parameter estimation in the presence of Lévy noise.
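For context, the estimation setting above can be illustrated with a minimal simulation sketch. Assumptions: a one-dimensional SDE dX = θ(μ − X) dt + σ dL with symmetric α-stable increments drawn via the Chambers-Mallows-Stuck method; the drift form, parameter names, and Euler discretization are illustrative, not the paper's exact setup.

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Draw standard symmetric alpha-stable variates (Chambers-Mallows-Stuck)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def simulate_levy_ou(theta, mu, sigma, alpha, x0=0.0, n=1000, dt=0.01, seed=0):
    """Euler scheme for dX = theta*(mu - X) dt + sigma dL, L alpha-stable."""
    rng = np.random.default_rng(seed)
    # Stable increments scale as dt**(1/alpha), not sqrt(dt) as in the Gaussian case.
    jumps = sigma * dt ** (1 / alpha) * symmetric_stable(alpha, n, rng)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + jumps[i]
    return x

path = simulate_levy_ou(theta=1.0, mu=0.0, sigma=0.5, alpha=1.7)
```

An estimator such as the one described would take paths like `path` as input and regress the parameters (θ, μ, σ, α) from them.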
Related papers
- Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
arXiv Detail & Related papers (2024-10-28T15:54:50Z)
- Neural McKean-Vlasov Processes: Distributional Dependence in Diffusion Processes [24.24785205800212]
McKean-Vlasov stochastic differential equations (MV-SDEs) provide a mathematical description of the behavior of an infinite number of interacting particles.
We study the influence of explicitly including distributional information in the parameterization of the SDE.
arXiv Detail & Related papers (2024-04-15T01:28:16Z)
- Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data [3.686808512438363]
Irregular sampling intervals and missing values in real-world time series data present challenges for conventional methods.
We propose three stable classes of Neural SDEs: Langevin-type SDE, Linear Noise SDE, and Geometric SDE.
Our results demonstrate the efficacy of the proposed method in handling real-world irregular time series data.
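The Langevin-type class mentioned above can be sketched with a plain Euler-Maruyama step. This is a minimal illustration assuming a quadratic potential U(x) = x²/2 and constant diffusion, which is not the authors' actual parameterization:

```python
import numpy as np

def euler_maruyama_langevin(grad_u, sigma, x0, n_steps, dt, n_paths, seed=0):
    """Simulate dX = -grad_u(X) dt + sigma dW with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x - grad_u(x) * dt + sigma * dw
    return x

# Quadratic potential U(x) = x^2 / 2 gives an Ornstein-Uhlenbeck process,
# whose stationary variance is sigma^2 / 2.
samples = euler_maruyama_langevin(lambda x: x, sigma=1.0, x0=0.0,
                                  n_steps=2000, dt=0.01, n_paths=4000)
```

The gradient-drift structure is what gives this SDE class its stability: the process is pulled back toward the minimum of the potential.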
arXiv Detail & Related papers (2024-02-22T22:00:03Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Switching Autoregressive Low-rank Tensor Models [12.461139675114818]
We propose switching autoregressive low-rank tensor (SALT) models.
SALT parameterizes the tensor of an ARHMM with a low-rank factorization to control the number of parameters.
We prove theoretical connections and discuss practical ones between SALT, linear dynamical systems, and SLDSs.
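The low-rank idea in SALT can be sketched as follows. This is a minimal illustration of factorizing a per-state autoregressive tensor; the shapes and names are assumptions, not the paper's notation:

```python
import numpy as np

# Dimensions: S discrete states, D observation dim, L autoregressive lags, rank R.
S, D, L, R = 5, 20, 4, 3
rng = np.random.default_rng(0)

# Full ARHMM tensor: one (D x D*L) regression matrix per discrete state.
full_params = S * D * (D * L)

# SALT-style factorization: each state's matrix is a product of low-rank factors.
U = rng.normal(size=(S, D, R))
V = rng.normal(size=(S, R, D * L))
A = U @ V                          # batched matmul -> shape (S, D, D*L)
lowrank_params = S * (D * R + R * (D * L))

# One-step AR prediction in state s from the last L stacked observations.
s = 0
history = rng.normal(size=D * L)   # stacked x_{t-1}, ..., x_{t-L}
x_pred = A[s] @ history
```

With these dimensions the factorization needs 1,500 parameters instead of 8,000, which is the control over parameter count the summary refers to.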
arXiv Detail & Related papers (2023-06-05T22:25:28Z)
- Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general availability and effectiveness of the proposed method in terms of both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z)
- LSTM based models stability in the context of Sentiment Analysis for social media [0.0]
We present various LSTM models and their key parameters.
We perform experiments to test the stability of these models in the context of Sentiment Analysis.
arXiv Detail & Related papers (2022-11-21T08:31:30Z)
- Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions [0.0]
This work presents a data-driven, non-intrusive framework which combines ROM construction with reduced dynamics identification.
The proposed approach leverages autoencoder neural networks with parametric sparse identification of nonlinear dynamics (SINDy) to construct a low-dimensional dynamical model.
These aim at tracking the evolution of periodic steady-state responses as functions of system parameters, avoiding computation of the transient phase and allowing instabilities and bifurcations to be detected.
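The SINDy step referenced above can be sketched with a tiny sequentially thresholded least-squares (STLSQ) loop. This is a minimal illustration on a known linear system, not the authors' implementation:

```python
import numpy as np

# Data from the known dynamics x' = -2 x, so SINDy should recover -2 on the x term.
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t)
dx = -2.0 * x                      # exact derivative, for the illustration

# Candidate function library: [1, x, x^2]
theta = np.column_stack([np.ones_like(x), x, x ** 2])

def stlsq(theta, dx, threshold=0.1, n_iters=10):
    """Sequentially thresholded least squares: the core SINDy regression."""
    xi = np.linalg.lstsq(theta, dx, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            # Refit only on the surviving library terms.
            xi[big] = np.linalg.lstsq(theta[:, big], dx, rcond=None)[0]
    return xi

xi = stlsq(theta, dx)
```

The thresholding is what produces a sparse, interpretable dynamical model: only the library terms that genuinely contribute survive the refit loop.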
arXiv Detail & Related papers (2022-11-13T01:57:18Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Accurate and Reliable Forecasting using Stochastic Differential Equations [48.21369419647511]
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
This paper develops SDE-HNN to characterize the interaction between the predictive mean and variance of HNNs for accurate and reliable regression.
Experiments on the challenging datasets show that our method significantly outperforms the state-of-the-art baselines in terms of both predictive performance and uncertainty quantification.
arXiv Detail & Related papers (2021-03-28T04:18:11Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting and both compression and improved stability training in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.