Adaptive stable distribution and Hurst exponent by method of moments moving estimator for nonstationary time series
- URL: http://arxiv.org/abs/2506.05354v1
- Date: Tue, 20 May 2025 08:36:49 GMT
- Title: Adaptive stable distribution and Hurst exponent by method of moments moving estimator for nonstationary time series
- Authors: Jarek Duda
- Abstract summary: We focus on a novel, more agnostic approach: the moving estimator, which estimates parameters separately for every time $t$. We show its application to the alpha-Stable distribution, which also influences the Hurst exponent and hence can be used for its adaptive estimation.
- Score: 0.49728186750345144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonstationarity of real-life time series requires model adaptation. Classical approaches like ARMA-ARCH assume some arbitrarily chosen dependence type. To avoid their bias, we focus on a novel, more agnostic approach: the moving estimator, which estimates parameters separately for every time $t$ by optimizing the local log-likelihood $F_t=\sum_{\tau<t} (1-\eta)^{t-\tau} \ln(\rho_\theta (x_\tau))$ with exponentially weakening weights of the old values. In practice such moving estimates can be found with an EMA (exponential moving average) of some parameters, like the absolute central moments $m_p=E[|x-\mu|^p]$, updated by $m_{p,t+1} = m_{p,t} + \eta (|x_t-\mu_t|^p-m_{p,t})$. We focus here on its application to the alpha-Stable distribution, which also influences the Hurst exponent and hence can be used for its adaptive estimation. The application is shown on financial data, the DJIA time series: besides the standard estimation of the evolution of the center $\mu$ and scale parameter $\sigma$, the evolution of the $\alpha$ parameter is also estimated, allowing continuous evaluation of market stability, as the tails have $\rho(x) \sim 1/|x|^{\alpha+1}$ behavior, controlling the probability of potentially dangerous extreme events.
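The EMA moving-moment update quoted in the abstract is concrete enough to sketch in code. Below is a minimal illustrative sketch, not the paper's implementation: it tracks the center $\mu_t$ and the $p$-th absolute central moment $m_{p,t}$ with the stated update rule; the value of $\eta$, the EMA update of $\mu$, the initialization, and the synthetic stand-in data are assumptions made here.

```python
import numpy as np

def moving_moment_estimates(x, eta=0.05, p=1.0):
    """Moving estimates of the center mu_t and the p-th absolute central
    moment m_p = E[|x - mu|^p], re-estimated at every time step via EMA.

    Illustrative sketch only: it implements the update
    m_{p,t+1} = m_{p,t} + eta * (|x_t - mu_t|^p - m_{p,t}) from the abstract;
    the value of eta, the EMA update of mu, and the initialization are
    arbitrary choices made here, not taken from the paper.
    """
    x = np.asarray(x, dtype=float)
    mu = x[0]            # initial center estimate
    m_p = 1e-8           # small positive initial moment to avoid a zero scale
    mus = np.empty_like(x)
    moments = np.empty_like(x)
    for t, x_t in enumerate(x):
        mu = mu + eta * (x_t - mu)                    # EMA of the center
        m_p = m_p + eta * (abs(x_t - mu) ** p - m_p)  # EMA of |x - mu|^p
        mus[t] = mu
        moments[t] = m_p
    return mus, moments

# Usage on synthetic heavy-tailed "log-returns" (a stand-in for real DJIA data):
returns = 0.01 * np.random.standard_t(df=3, size=1000)
mu_t, m1_t = moving_moment_estimates(returns, eta=0.05, p=1.0)  # m1_t tracks a scale proxy
```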
Related papers
- Unbiased least squares regression via averaged stochastic gradient descent [0.0]
We consider an on-line least squares regression problem with optimal solution $\theta^*$ and Hessian matrix $H$.
For $k \ge 2$, we provide an unbiased estimator of $\theta^*$ that is a modification of the time-average estimator.
arXiv Detail & Related papers (2024-06-26T11:39:22Z) - $L^1$ Estimation: On the Optimality of Linear Estimators [64.76492306585168]
This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian.
In particular, it is demonstrated that if the conditional distribution $P_{X|Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution.
arXiv Detail & Related papers (2023-09-17T01:45:13Z) - Adaptive Student's t-distribution with method of moments moving estimator for nonstationary time series [0.49728186750345144]
We will focus on the recently proposed philosophy of the moving estimator. It allows, for example, to estimate parameters using inexpensive exponential moving averages. It also provides the evolution of $\nu$ describing the $\rho(x) \sim |x|^{-\nu-1}$ tail shape, the probability of extreme events.
arXiv Detail & Related papers (2023-04-06T13:37:27Z) - Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization [52.25843977506935]
We propose an adaptive variance-reduction method, called AdaSpider, for $L$-smooth, non-convex functions with a finite-sum structure.
In doing so, we are able to compute an $\epsilon$-stationary point with $\tilde{O}\left(n + \sqrt{n}/\epsilon^2\right)$ calls.
arXiv Detail & Related papers (2022-11-03T14:41:46Z) - Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming [53.63469275932989]
We consider online statistical inference of constrained nonlinear optimization problems. We apply the Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems.
arXiv Detail & Related papers (2022-05-27T00:34:03Z) - $p$-Generalized Probit Regression and Scalable Maximum Likelihood Estimation via Sketching and Coresets [74.37849422071206]
We study the $p$-generalized probit regression model, which is a generalized linear model for binary responses.
We show how the maximum likelihood estimator for $p$-generalized probit regression can be approximated efficiently up to a factor of $(1+\varepsilon)$ on large data.
arXiv Detail & Related papers (2022-03-25T10:54:41Z) - Optimal and instance-dependent guarantees for Markovian linear stochastic approximation [47.912511426974376]
We show a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme.
We derive corollaries of these results for policy evaluation with Markov noise.
arXiv Detail & Related papers (2021-12-23T18:47:50Z) - Asynchronous Stochastic Optimization Robust to Arbitrary Delays [54.61797739710608]
We consider optimization with delayed gradients where, at each time step $t$, the algorithm makes an update using a stale stochastic gradient from step $t - d_t$ for an arbitrary delay $d_t$.
Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
arXiv Detail & Related papers (2021-06-22T15:50:45Z) - SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression [68.66245730450915]
We develop an improved method for debiasing predictions and estimating frequentist uncertainty for practical datasets.
Our main contribution is SLOE, an estimator of the signal strength with convergence guarantees that reduces the computation time of estimation and inference by orders of magnitude.
arXiv Detail & Related papers (2021-03-23T17:48:56Z) - Optimal Mean Estimation without a Variance [103.26777953032537]
We study the problem of heavy-tailed mean estimation in settings where the variance of the data-generating distribution does not exist.
We design an estimator which attains the smallest possible confidence interval as a function of $n$, $d$, $\delta$.
arXiv Detail & Related papers (2020-11-24T22:39:21Z) - Estimation in Tensor Ising Models [5.161531917413708]
We consider the problem of estimating the natural parameter of the $p$-tensor Ising model given a single sample from the distribution on $N$ nodes.
In particular, we show the $\sqrt{N}$-consistency of the MPL estimate in the $p$-spin Sherrington-Kirkpatrick (SK) model.
We derive the precise fluctuations of the MPL estimate in the special case of the $p$-tensor Curie-Weiss model.
arXiv Detail & Related papers (2020-08-29T00:06:58Z) - Inference on the change point in high dimensional time series models via plug in least squares [2.7718973516070684]
We study a plug in least squares estimator for the change point parameter where change is in the mean of a high dimensional random vector.
We obtain sufficient conditions under which this estimator possesses sufficient adaptivity against plug in estimates of mean parameters.
arXiv Detail & Related papers (2020-07-03T18:08:12Z) - Adaptive exponential power distribution with moving estimator for nonstationary time series [0.8702432681310399]
We will focus on maximum likelihood (ML) adaptive estimation for nonstationary time series.
We focus on one such example: the $\rho(x) \propto \exp(-|(x-\mu)/\sigma|^\kappa/\kappa)$ exponential power distribution (EPD) family.
It is tested on daily log-return series for DJIA companies, leading to essentially better log-likelihoods than standard (static) estimation.
arXiv Detail & Related papers (2020-03-04T15:56:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.