Maximum Likelihood With a Time Varying Parameter
- URL: http://arxiv.org/abs/2302.14529v1
- Date: Tue, 28 Feb 2023 12:40:42 GMT
- Title: Maximum Likelihood With a Time Varying Parameter
- Authors: Alberto Lanconelli and Christopher S. A. Lauria
- Abstract summary: We consider the problem of tracking an unknown time varying parameter that characterizes the evolution of a sequence of independent observations.
We propose a gradient descent-based scheme in which the log-likelihood of the observations acts as a time-varying gain function.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of tracking an unknown time varying parameter that
characterizes the probabilistic evolution of a sequence of independent
observations. To this aim, we propose a stochastic gradient descent-based
recursive scheme in which the log-likelihood of the observations acts as a
time-varying gain function. We prove convergence in mean-square error in a suitable
neighbourhood of the unknown time varying parameter and illustrate the details
of our findings in the case where data are generated from distributions
belonging to the exponential family.
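The recursive scheme can be illustrated with a minimal sketch for the simplest exponential-family case, a unit-variance Gaussian with a drifting mean: each new observation contributes a stochastic gradient ascent step on its log-likelihood. The step size `gamma`, the sinusoidal drift, and the horizon `T` below are assumptions made for this illustration, not choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (not the authors' exact scheme): track a slowly
# drifting Gaussian mean from one observation per step, using a stochastic
# gradient ascent step on the log-likelihood of each new sample.
T = 2000
gamma = 0.05                                                   # fixed gain / step size
theta_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / T)  # drifting target
theta_hat = np.zeros(T)
theta = 0.0                                                    # initial estimate

for t in range(T):
    x = rng.normal(theta_true[t], 1.0)    # one observation per step
    grad_loglik = x - theta               # d/dtheta log N(x; theta, 1)
    theta = theta + gamma * grad_loglik   # recursive update
    theta_hat[t] = theta

# Mean-square tracking error over the second half, after the warm-up phase.
mse_tail = np.mean((theta_hat[T // 2:] - theta_true[T // 2:]) ** 2)
print(mse_tail)
```

With a fixed gain the estimate settles into a neighbourhood of the moving target rather than converging exactly, which is the flavour of the mean-square result stated in the abstract.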
Related papers
- Convergence of the denoising diffusion probabilistic models [0.0]
We analyze the original version of the denoising diffusion probabilistic models (DDPMs) presented by Ho, J., Jain, A., and Abbeel, P. (2020).
Our main theorem states that the sequence constructed by the original DDPM sampling algorithm weakly converges to a given data distribution as the number of time steps goes to infinity.
arXiv Detail & Related papers (2024-06-03T13:38:18Z)
- Ultimate limit on learning non-Markovian behavior: Fisher information rate and excess information [0.0]
We address the fundamental limits of learning unknown parameters of any process from time-series data.
We discover exact closed-form expressions for how optimal inference scales with observation length.
arXiv Detail & Related papers (2023-10-06T01:53:42Z)
- Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent Observation Framework [6.404122934568861]
We introduce a new loss function, which allows us to deal with noisy observations and explain why the previously used loss function did not lead to a consistent estimator.
arXiv Detail & Related papers (2023-07-24T22:01:22Z)
- Causal inference for the expected number of recurrent events in the presence of a terminal event [0.0]
We study causal inference and efficient estimation for the expected number of recurrent events in the presence of a terminal event.
No absolute continuity assumption is made on the underlying probability distributions of failure, censoring, or the observed data.
arXiv Detail & Related papers (2023-06-28T21:31:25Z)
- Probabilistic Learning of Multivariate Time Series with Temporal Irregularity [25.91078012394032]
We address temporal irregularities, including nonuniform time intervals and component misalignment.
We develop a conditional flow representation to non-parametrically represent the data distribution, which is typically non-Gaussian.
The broad applicability and superiority of the proposed solution are confirmed by comparing it with existing approaches through ablation studies and testing on real-world datasets.
arXiv Detail & Related papers (2023-06-15T14:08:48Z)
- Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of future values based on their history.
We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining.
This result explains why combining differencing schemes for derivatives reconstruction and local-in-time inference approaches does not work for time series analysis of second or higher order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Batch Stationary Distribution Estimation [98.18201132095066]
We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
We propose a consistent estimator that is based on recovering a correction ratio function over the given data.
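As a point of comparison, the problem this entry describes admits a naive plug-in baseline (not the correction-ratio estimator proposed in that paper): form the empirical transition matrix from the batch of sampled transitions and power-iterate it. The 3-state chain `P` and the sample size below are assumptions made for this demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative plug-in baseline (not the paper's correction-ratio estimator):
# estimate the stationary distribution of a small ergodic chain from a batch
# of sampled transitions.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
n = P.shape[0]

# Batch of transitions sampled from uniformly chosen start states.
pairs = []
for _ in range(20000):
    s = rng.integers(n)
    s_next = rng.choice(n, p=P[s])
    pairs.append((s, s_next))

# Empirical transition matrix with add-one smoothing.
counts = np.ones((n, n))
for s, s_next in pairs:
    counts[s, s_next] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Power iteration: mu <- mu @ P_hat until convergence.
mu = np.full(n, 1.0 / n)
for _ in range(500):
    mu = mu @ P_hat
mu /= mu.sum()
print(mu)
```

This baseline only works when the state space is small and enumerable; the batch setting studied in the paper targets the general case where such a matrix cannot be formed.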
arXiv Detail & Related papers (2020-03-02T09:10:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.