Adaptive Student's t-distribution with method of moments moving
estimator for nonstationary time series
- URL: http://arxiv.org/abs/2304.03069v2
- Date: Wed, 12 Apr 2023 14:12:45 GMT
- Title: Adaptive Student's t-distribution with method of moments moving
estimator for nonstationary time series
- Authors: Jarek Duda
- Abstract summary: We focus on the recently proposed philosophy of the moving estimator:
the moving log-likelihood $F_t=\sum_{\tau<t} (1-\eta)^{t-\tau} \ln(\rho_\theta (x_\tau))$, evolving in time.
Student's t-distribution, popular especially in economic applications, is here applied to log-returns of DJIA companies.
- Score: 0.8702432681310399
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-life time series are usually nonstationary, raising the difficult
question of model adaptation. Classical approaches like ARMA-ARCH assume an
arbitrary type of dependence. To avoid such bias, we focus on the recently
proposed agnostic philosophy of the moving estimator: at time $t$, finding
parameters that optimize e.g. the moving log-likelihood $F_t=\sum_{\tau<t}
(1-\eta)^{t-\tau} \ln(\rho_\theta (x_\tau))$, evolving in time. It allows, for
example, estimating parameters using inexpensive exponential moving averages
(EMA), like absolute central moments $E[|x-\mu|^p]$ evolving for one or
multiple powers $p\in\mathbb{R}^+$ via $m_{p,t+1} = m_{p,t} + \eta (|x_t-\mu_t|^p-m_{p,t})$.
The application of such general adaptive methods of moments is presented for
Student's t-distribution, popular especially in economic applications, here
applied to log-returns of DJIA companies. While standard ARMA-ARCH approaches
provide the evolution of $\mu$ and $\sigma$, here we also get the evolution of
$\nu$, which describes the $\rho(x)\sim |x|^{-\nu-1}$ tail shape: the probability
of extreme events, which might turn out catastrophic, destabilizing the market.
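As a concrete illustration of the adaptive method of moments described above, here is a minimal Python sketch, assuming the standard Student's t moment relations ($E[(x-\mu)^2]=\sigma^2\nu/(\nu-2)$ for $\nu>2$; kurtosis $3(\nu-2)/(\nu-4)$ for $\nu>4$). It tracks EMAs of the second and fourth central moments with the $m_{p,t+1}$ update from the abstract; the function name, warm starts, and the choice of the moment pair $p\in\{2,4\}$ are illustrative assumptions, since the paper works with general powers $p\in\mathbb{R}^+$.

```python
import numpy as np

def adaptive_student_t(x, eta=0.05):
    """Moving method-of-moments estimates (mu_t, sigma_t, nu_t) for a
    nonstationary series, via the EMA update from the abstract:
    m_{p,t+1} = m_{p,t} + eta * (|x_t - mu_t|^p - m_{p,t}), here for p = 2, 4.
    The nu/sigma recovery is the standard 2nd/4th-moment method of moments
    for Student's t (an assumption, not necessarily the paper's exact variant)."""
    mu = float(np.mean(x[:20]))                 # warm starts (assumptions)
    m2 = max(float(np.var(x[:20])), 1e-12)
    m4 = 3.0 * m2**2                            # Gaussian kurtosis to start
    estimates = []
    for xt in x:
        d = xt - mu
        mu += eta * d                           # EMA of the center
        m2 += eta * (abs(d)**2 - m2)            # EMA of E[|x - mu|^2]
        m4 += eta * (abs(d)**4 - m4)            # EMA of E[|x - mu|^4]
        K = m4 / m2**2                          # kurtosis = 3(nu-2)/(nu-4)
        if K > 3.0:
            nu = (4.0 * K - 6.0) / (K - 3.0)    # always lands in (4, inf)
            var = m2 * (nu - 2.0) / nu          # E[|x-mu|^2] = sigma^2 nu/(nu-2)
        else:
            nu, var = float("inf"), m2          # Gaussian limit
        estimates.append((mu, var**0.5, nu))
    return estimates

# Usage: synthetic log-returns whose tails fatten halfway through;
# the tracked nu should drift downward after the regime change.
rng = np.random.default_rng(0)
x = np.concatenate([0.01 * rng.standard_t(12, size=2000),
                    0.01 * rng.standard_t(5, size=2000)])
est = adaptive_student_t(x)
print(est[1999], est[-1])
```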
Related papers
- Using Linearized Optimal Transport to Predict the Evolution of Stochastic Particle Systems [42.49693678817552]
We develop an algorithm to approximate the time evolution of a probability measure without explicitly learning an operator that governs the evolution.
A particular application of interest is discrete measures $\mu_t^N$ that arise from particle systems.
arXiv Detail & Related papers (2024-08-03T20:00:36Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - A Fast Algorithm for Adaptive Private Mean Estimation [5.090363690988394]
We design an $(\varepsilon, \delta)$-differentially private algorithm that is adaptive to $\Sigma$.
The estimator achieves optimal rates of convergence with respect to the induced Mahalanobis norm $\|\cdot\|_\Sigma$.
arXiv Detail & Related papers (2023-01-17T18:44:41Z) - Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum
Minimization [52.25843977506935]
We propose an adaptive variance-reduction method, called AdaSpider, for $L$-smooth, non-convex functions with a finite-sum structure.
In doing so, we are able to compute an $\epsilon$-stationary point with $\tilde{O}\left(n + \sqrt{n}/\epsilon^{2}\right)$ calls to stochastic oracles.
arXiv Detail & Related papers (2022-11-03T14:41:46Z) - Reward-Mixing MDPs with a Few Latent Contexts are Learnable [75.17357040707347]
We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs).
Our goal is to learn a near-optimal policy that nearly maximizes the $H$ time-step cumulative rewards in such a model.
arXiv Detail & Related papers (2022-10-05T22:52:00Z) - $p$-Generalized Probit Regression and Scalable Maximum Likelihood
Estimation via Sketching and Coresets [74.37849422071206]
We study the $p$-generalized probit regression model, which is a generalized linear model for binary responses.
We show how the maximum likelihood estimator for $p$-generalized probit regression can be approximated efficiently up to a factor of $(1+\varepsilon)$ on large data.
arXiv Detail & Related papers (2022-03-25T10:54:41Z) - Iterative Feature Matching: Toward Provable Domain Generalization with
Logarithmic Environments [55.24895403089543]
Domain generalization aims at performing well on unseen test environments with data from a limited number of training environments.
We present a new algorithm based on performing iterative feature matching that is guaranteed with high probability to yield a predictor that generalizes after seeing only $O(\log d_s)$ environments.
arXiv Detail & Related papers (2021-06-18T04:39:19Z) - Total Stability of SVMs and Localized SVMs [0.0]
Regularized kernel-based methods such as support vector machines (SVMs) depend on the underlying probability measure $\mathrm{P}$.
The present paper investigates the influence of simultaneous slight variations in the whole triple $(\mathrm{P},\lambda,k)$ on the resulting predictor.
arXiv Detail & Related papers (2021-01-29T16:44:14Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
arXiv Detail & Related papers (2020-06-12T10:21:40Z) - Adaptive exponential power distribution with moving estimator for
nonstationary time series [0.8702432681310399]
We focus on adaptive maximum likelihood (ML) estimation for nonstationary time series.
The example studied is the exponential power distribution (EPD) family $\rho(x)\propto \exp(-|(x-\mu)/\sigma|^{\kappa}/\kappa)$; a sketch of its moving scale estimate follows this list.
It is tested on daily log-return series for DJIA companies, leading to substantially better log-likelihoods than standard (static) estimation.
arXiv Detail & Related papers (2020-03-04T15:56:44Z) - Does generalization performance of $l^q$ regularization learning depend
on $q$? A negative example [19.945160684285003]
$l^q$-regularization has been demonstrated to be an attractive technique in machine learning and statistical modeling.
We show that all $l^q$ estimators for $0 < q < \infty$ attain similar generalization error bounds.
This finding tentatively reveals that, in some modeling contexts, the choice of $q$ might not have a strong impact in terms of the generalization capability.
arXiv Detail & Related papers (2013-07-25T00:48:04Z)
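As promised at the EPD entry above, here is a minimal sketch of its moving ML scale estimate, assuming $\mu$ and $\kappa$ are held fixed (the function names and default values are illustrative): setting the $\sigma$-derivative of the EPD log-likelihood to zero gives $\sigma^\kappa = E[|x-\mu|^\kappa]$, and the moving estimator replaces that expectation with an EMA.

```python
import numpy as np
from scipy.special import gammaln

def epd_logpdf(x, mu, sigma, kappa):
    """log rho(x) for the EPD rho(x) = exp(-|(x-mu)/sigma|^kappa / kappa) / Z,
    with normalizer Z = 2 * sigma * kappa^(1/kappa) * Gamma(1 + 1/kappa)."""
    z = np.abs((x - mu) / sigma)
    log_Z = np.log(2.0 * sigma) + np.log(kappa) / kappa + gammaln(1.0 + 1.0 / kappa)
    return -(z**kappa) / kappa - log_Z

def adaptive_epd_scale(x, mu=0.0, kappa=1.5, eta=0.03):
    """Moving ML estimate of sigma for fixed kappa: the static ML condition
    sigma^kappa = mean(|x - mu|^kappa) becomes the EMA
    m <- m + eta * (|x_t - mu|^kappa - m), with sigma_t = m^(1/kappa)."""
    m = float(np.mean(np.abs(x[:20] - mu)**kappa))  # warm start (assumption)
    sigmas = []
    for xt in x:
        m += eta * (abs(xt - mu)**kappa - m)
        sigmas.append(m**(1.0 / kappa))
    return np.array(sigmas)
```

Evaluating epd_logpdf(x[t], mu, sigmas[t], kappa) along the series then gives the moving log-likelihood on which adaptive and static estimation can be compared.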