Sig-Splines: universal approximation and convex calibration of time
series generative models
- URL: http://arxiv.org/abs/2307.09767v1
- Date: Wed, 19 Jul 2023 05:58:21 GMT
- Title: Sig-Splines: universal approximation and convex calibration of time
series generative models
- Authors: Magnus Wiese, Phillip Murray, Ralf Korn
- Abstract summary: Our algorithm incorporates linear transformations and the signature transform as a seamless substitution for traditional neural networks.
This approach not only achieves the universality property inherent in neural networks but also introduces convexity in the model's parameters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel generative model for multivariate discrete-time time
series data. Drawing inspiration from the construction of neural spline flows,
our algorithm incorporates linear transformations and the signature transform
as a seamless substitution for traditional neural networks. This approach
not only achieves the universality property inherent in neural networks but
also introduces convexity in the model's parameters.
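The abstract is terse about the construction itself, so here is a minimal sketch, in NumPy, of its central ingredient: a depth-2 truncated signature of a discrete path, accumulated with Chen's identity, followed by a linear readout. All names and sizes below are ours for illustration; the point is that the signature features are a fixed function of the path, so with a convex loss the model is convex in the readout parameters.

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 truncated signature of a piecewise-linear path.

    path: (T, d) array of observations. Returns the level-1 term (d,)
    and the level-2 term (d, d), accumulated via Chen's identity.
    """
    d = path.shape[1]
    s1 = np.zeros(d)        # level 1: total increment of the path
    s2 = np.zeros((d, d))   # level 2: iterated integrals
    for dx in np.diff(path, axis=0):
        # Chen's identity for appending one linear segment.
        s2 += np.outer(s1, dx) + 0.5 * np.outer(dx, dx)
        s1 += dx
    return s1, s2

rng = np.random.default_rng(0)
x = rng.standard_normal((20, 2)).cumsum(axis=0)      # toy 2-d path
s1, s2 = signature_depth2(x)
features = np.concatenate([[1.0], s1, s2.ravel()])   # fixed features
theta = rng.standard_normal(features.size)           # trainable, linear
output = theta @ features
```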
Related papers
- A Temporal Linear Network for Time Series Forecasting [0.0]
We introduce the Temporal Linear Net (TLN), which extends the capabilities of linear models while maintaining interpretability and computational efficiency.
Our approach is a variant of TSMixer that maintains strict linearity throughout its architecture.
A key innovation of TLN is its ability to compute an equivalent linear model, offering a level of interpretability not found in more complex architectures such as TSMixer.
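The claim about computing an equivalent linear model follows directly from strict linearity: a composition of linear maps is itself a linear map. A minimal sketch with our own toy sizes, not the TLN code:

```python
import numpy as np

rng = np.random.default_rng(1)
# Three purely linear layers, standing in for a strictly linear net.
W1, W2, W3 = (rng.standard_normal((8, 8)) for _ in range(3))

def forward(x):
    return W3 @ (W2 @ (W1 @ x))

# With no nonlinearities the whole stack collapses into one matrix,
# which can then be read off and interpreted directly.
W_equiv = W3 @ W2 @ W1
x = rng.standard_normal(8)
assert np.allclose(forward(x), W_equiv @ x)
```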
arXiv Detail & Related papers (2024-10-28T18:51:19Z)
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
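For latent energy-based models, maximum likelihood training with MCMC typically means drawing latents via short-run Langevin dynamics. A hedged sketch with a toy energy standing in for a learned network:

```python
import numpy as np

rng = np.random.default_rng(2)

def langevin_sample(z, grad_energy, step=0.01, n_steps=100):
    """Short-run Langevin dynamics targeting p(z) ~ exp(-E(z)) N(z; 0, I)."""
    for _ in range(n_steps):
        grad_log_p = -grad_energy(z) - z   # gradient of the log-density
        z = z + 0.5 * step * grad_log_p \
              + np.sqrt(step) * rng.standard_normal(z.shape)
    return z

# Toy quadratic energy standing in for a learned network E_theta.
grad_energy = lambda z: 0.1 * z
z = langevin_sample(rng.standard_normal(4), grad_energy)
```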
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Training Deep Neural Network (DNN) models is a non-convex optimization problem.
In this paper we examine convex reformulations of neural network training via Lasso models.
We show that all stationary points of the non-convex training objective can be characterized as the global optimum of a subsampled convex program.
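To make "subsampled convex program" concrete, here is a hedged sketch of the sampled convex reformulation of a two-layer ReLU network, in the spirit of the convex-reformulation literature; the data, sizes, and regularization weight are all illustrative and this is not the paper's exact program:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
X = rng.standard_normal((40, 3))
y = rng.standard_normal(40)

# Sample candidate ReLU activation patterns from random hyperplanes.
G = rng.standard_normal((3, 16))
D = (X @ G >= 0).astype(float)                 # (40, 16) activation masks

U = cp.Variable((3, 16))                       # "positive" branch weights
V = cp.Variable((3, 16))                       # "negative" branch weights
pred = cp.sum(cp.multiply(D, X @ U - X @ V), axis=1)
reg = cp.sum(cp.norm(U, 2, axis=0)) + cp.sum(cp.norm(V, 2, axis=0))
# Constraints keep each branch consistent with its sampled pattern.
cons = [cp.multiply(2 * D - 1, X @ U) >= 0,
        cp.multiply(2 * D - 1, X @ V) >= 0]
prob = cp.Problem(cp.Minimize(cp.sum_squares(pred - y) + 0.1 * reg), cons)
prob.solve()
```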
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
In this paper we propose to approximate the joint posterior over both the structure and the parameters of a Bayesian network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
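The two-phase policy can be pictured as: first build the graph, then, conditionally on it, emit the parameters of each local probability model. The toy below shows just that factorization; it is not a GFlowNet and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_structure_then_params(n_nodes, p_edge=0.3):
    # Phase 1: sample a DAG by drawing edges above a fixed topological order.
    adj = np.triu(rng.random((n_nodes, n_nodes)) < p_edge, k=1)
    # Phase 2: given the structure, sample one weight per parent plus a bias
    # for each node's local (say, linear-Gaussian) probability model.
    params = [rng.standard_normal(int(adj[:, j].sum()) + 1)
              for j in range(n_nodes)]
    return adj, params

adj, params = sample_structure_then_params(5)
```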
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that accurate geodesic computation can substantially improve the performance of deep generative models.
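A common way to make latent-space geodesics concrete is to minimize the discrete path energy of the decoded curve; its minimizers are geodesics under the metric pulled back through the decoder. A hedged PyTorch sketch with a stand-in decoder, not the VTAE model:

```python
import torch

torch.manual_seed(0)
# Hypothetical decoder standing in for a trained generator g(z).
W1, W2 = torch.randn(2, 16), torch.randn(16, 5)
def decoder(z):
    return torch.tanh(z @ W1) @ W2

# Endpoints in latent space; interior points start on the chord.
z0, z1 = torch.randn(2), torch.randn(2)
t = torch.linspace(0, 1, 12)[1:-1].unsqueeze(1)
zs = ((1 - t) * z0 + t * z1).clone().requires_grad_(True)

opt = torch.optim.Adam([zs], lr=1e-2)
for _ in range(300):
    path = torch.cat([z0.unsqueeze(0), zs, z1.unsqueeze(0)], dim=0)
    x = decoder(path)
    # Discrete path energy in data space; minimizing it bends the
    # latent curve toward a geodesic of the pulled-back metric.
    energy = ((x[1:] - x[:-1]) ** 2).sum()
    opt.zero_grad(); energy.backward(); opt.step()
```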
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Stochastic Recurrent Neural Network for Multistep Time Series Forecasting [0.0]
We leverage advances in deep generative models and the concept of state space models to propose an adaptation of the recurrent neural network for time series forecasting.
Our model preserves the architectural workings of a recurrent neural network for which all relevant information is encapsulated in its hidden states, and this flexibility allows our model to be easily integrated into any deep architecture for sequential modelling.
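Concretely, the adaptation keeps a deterministic recurrent state and injects a per-step Gaussian latent, so any sequence model that exposes a hidden state can host it. A NumPy sketch with illustrative shapes and names of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(5)

d_h, d_z, d_x = 8, 3, 2     # hidden, latent, observation sizes (toy)
Wh, Wz, Wx = (rng.standard_normal(s) * 0.1
              for s in [(d_h, d_h + d_x), (2 * d_z, d_h), (d_x, d_h + d_z)])

def step(h, x_prev):
    # Deterministic recurrence: all history lives in the hidden state.
    h = np.tanh(Wh @ np.concatenate([h, x_prev]))
    # Stochastic latent drawn from a state-dependent Gaussian.
    mu, log_sig = np.split(Wz @ h, 2)
    z = mu + np.exp(log_sig) * rng.standard_normal(d_z)
    # Emission for the current time step.
    x = Wx @ np.concatenate([h, z])
    return h, x

h, x = np.zeros(d_h), np.zeros(d_x)
sample_path = []
for _ in range(10):
    h, x = step(h, x)
    sample_path.append(x)
```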
arXiv Detail & Related papers (2021-04-26T01:43:43Z)
- Dynamic Gaussian Mixture based Deep Generative Model For Robust Forecasting on Sparse Multivariate Time Series [43.86737761236125]
We propose a novel generative model, which tracks the transition of latent clusters, instead of isolated feature representations.
It is characterized by a newly designed dynamic Gaussian mixture distribution, which captures the dynamics of clustering structures.
A structured inference network is also designed for enabling inductive analysis.
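As a cartoon of "tracking the transition of latent clusters": a Markov chain over mixture components with Gaussian emissions. The paper's mixture is dynamic and learned; everything below is a fixed toy:

```python
import numpy as np

rng = np.random.default_rng(6)

K, d = 3, 2
trans = np.array([[0.8, 0.1, 0.1],     # cluster transition matrix
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
means = rng.standard_normal((K, d)) * 3

def sample_series(T):
    k = rng.integers(K)
    xs = []
    for _ in range(T):
        k = rng.choice(K, p=trans[k])                  # cluster transition
        xs.append(means[k] + rng.standard_normal(d))   # Gaussian emission
    return np.array(xs)

series = sample_series(50)
```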
arXiv Detail & Related papers (2021-03-03T04:10:07Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
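One plausible form of a smoothness-inducing term (our reading, not necessarily the paper's exact regularizer) penalizes the divergence between the Gaussians fitted at consecutive time-stamps:

```python
import numpy as np

def gauss_kl(mu1, var1, mu2, var2):
    """KL between two diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(np.log(var2 / var1)
                        + (var1 + (mu1 - mu2) ** 2) / var2 - 1)

def smoothness_penalty(mu, var):
    """Sum of KLs between the distributions at consecutive time-stamps.
    mu, var: (T, d) per-time-stamp Gaussian parameters (illustrative)."""
    return sum(gauss_kl(mu[t], var[t], mu[t + 1], var[t + 1])
               for t in range(len(mu) - 1))

rng = np.random.default_rng(7)
mu = rng.standard_normal((20, 4)).cumsum(axis=0)
var = np.ones((20, 4))
loss_smooth = smoothness_penalty(mu, var)  # added to the usual VAE ELBO
```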
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Physical invariance in neural networks for subgrid-scale scalar flux modeling [5.333802479607541]
We present a new strategy to model the subgrid-scale scalar flux in a three-dimensional turbulent incompressible flow using physics-informed neural networks (NNs).
We show that the proposed transformation-invariant NN model outperforms both purely data-driven ones and parametric state-of-the-art subgrid-scale models.
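A generic way to obtain transformation invariance, shown as a sketch rather than the paper's architecture: feed the network scalar invariants of the fields (norms and inner products) instead of raw components, so predictions are unchanged under orthogonal transformations:

```python
import numpy as np

rng = np.random.default_rng(8)

def random_orthogonal(d=3):
    """Random orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

# Scalar invariants of two vector fields (our toy inputs, not the
# paper's exact feature set).
def invariant_features(u, g):
    return np.array([u @ u, g @ g, u @ g])

mlp = lambda f: np.tanh(f).sum()          # stand-in for a trained NN
u, g = rng.standard_normal(3), rng.standard_normal(3)
R = random_orthogonal()
assert np.isclose(mlp(invariant_features(u, g)),
                  mlp(invariant_features(R @ u, R @ g)))
```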
arXiv Detail & Related papers (2020-10-09T16:09:54Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
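The stated stability and boundedness come from the form of the dynamics: the state leaks through an input-dependent time constant. A hedged explicit-Euler sketch of the liquid time-constant update (the paper itself uses a fused ODE solver; sizes and weights below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)

n, m = 8, 3                                  # state and input sizes
W = rng.standard_normal((n, n)) * 0.3
U = rng.standard_normal((n, m)) * 0.3
b = rng.standard_normal(n)
tau, A = 1.0, rng.standard_normal(n)

def f(x, I):
    """Bounded nonlinearity gating the dynamics."""
    return 1.0 / (1.0 + np.exp(-(W @ x + U @ I + b)))

def ltc_step(x, I, dt=0.05):
    """One Euler step of dx/dt = -(1/tau + f) * x + f * A.
    The effective time constant 1/(1/tau + f(x, I)) varies with the
    input (the 'liquid' part) and keeps the state bounded."""
    fx = f(x, I)
    return x + dt * (-(1.0 / tau + fx) * x + fx * A)

x = np.zeros(n)
for t in range(100):
    I = np.sin(np.array([0.1, 0.2, 0.3]) * t)
    x = ltc_step(x, I)
```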
arXiv Detail & Related papers (2020-06-08T09:53:35Z) - Research on a New Convolutional Neural Network Model Combined with
Random Edges Adding [10.519799195357209]
A random edge adding algorithm is proposed to improve the performance of convolutional neural network models.
The simulation results show that recognition accuracy and training convergence speed are greatly improved in models reconstructed with randomly added edges.
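The summary leaves the mechanism abstract; one simple reading, sketched below with illustrative shapes, treats the layer chain as a graph and adds shortcut edges at random, summing the features they carry into later layers (our toy, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(10)

L, d = 6, 16
weights = [rng.standard_normal((d, d)) * 0.2 for _ in range(L)]
# Randomly add skip edges (i -> j) between non-adjacent layers.
p = 0.3
skips = [(i, j) for i in range(L) for j in range(i + 2, L + 1)
         if rng.random() < p]

def forward(x):
    outs = [x]
    for l in range(L):
        h = np.tanh(weights[l] @ outs[l])
        # Sum in features arriving over shortcut edges ending at l + 1.
        h = h + sum(outs[i] for i, j in skips if j == l + 1)
        outs.append(h)
    return outs[-1]

y = forward(rng.standard_normal(d))
```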
arXiv Detail & Related papers (2020-03-17T16:17:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.