Positive definite nonparametric regression using an evolutionary
algorithm with application to covariance function estimation
- URL: http://arxiv.org/abs/2304.13168v1
- Date: Tue, 25 Apr 2023 22:01:14 GMT
- Title: Positive definite nonparametric regression using an evolutionary
algorithm with application to covariance function estimation
- Authors: Myeongjong Kang
- Abstract summary: We propose a novel nonparametric regression framework for estimating covariance functions of stationary processes.
Our method can impose positive definiteness, as well as isotropy and monotonicity, on the estimators.
Our method provides more reliable estimates for long-range dependence.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel nonparametric regression framework subject to the positive
definiteness constraint. It offers a highly modular approach for estimating
covariance functions of stationary processes. Our method can impose positive
definiteness, as well as isotropy and monotonicity, on the estimators, and its
hyperparameters can be decided using cross validation. We define our estimators
by taking integral transforms of kernel-based distribution surrogates. We then
use the iterated density estimation evolutionary algorithm, a variant of
estimation of distribution algorithms, to fit the estimators. We also extend
our method to estimate covariance functions for point-referenced data. Compared
to alternative approaches, our method provides more reliable estimates for
long-range dependence. Several numerical studies are performed to demonstrate
the efficacy and performance of our method. Also, we illustrate our method
using precipitation data from the Spatial Interpolation Comparison 97 project.
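As a rough illustration of the pipeline in the abstract, the sketch below fits a covariance estimator by an evolutionary search. Everything here is a simplifying assumption rather than the paper's actual construction: the estimator is a nonnegative mixture of Gaussian basis covariances (any such mixture is automatically positive definite, isotropic, and monotone decreasing), and the fit uses a toy Gaussian estimation-of-distribution loop in place of the iterated density estimation evolutionary algorithm.

```python
import numpy as np

def basis_cov(h, scale):
    # Each Gaussian basis exp(-(h/scale)^2) is itself a valid isotropic
    # covariance, so any nonnegative mixture of them is positive definite
    # and monotonically decreasing in the distance h.
    return np.exp(-(h / scale) ** 2)

def mixture_cov(h, weights, scales):
    return sum(w * basis_cov(h, s) for w, s in zip(weights, scales))

def fit_eda(h, c_emp, scales, n_iter=100, pop=200, elite=20, seed=0):
    """Toy estimation-of-distribution algorithm: sample nonnegative weight
    vectors from a Gaussian, score them against the empirical covariance,
    and refit the sampling distribution to the elite candidates."""
    rng = np.random.default_rng(seed)
    mu = np.full(len(scales), 0.5)
    sigma = np.full(len(scales), 0.5)
    for _ in range(n_iter):
        cand = np.abs(rng.normal(mu, sigma, size=(pop, len(scales))))
        losses = [np.mean((mixture_cov(h, w, scales) - c_emp) ** 2) for w in cand]
        elites = cand[np.argsort(losses)[:elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mu

# Hypothetical target: recover a known mixture from noiseless "empirical" values.
h = np.linspace(0.0, 3.0, 40)
scales = np.array([0.5, 1.0, 2.0])
c_emp = mixture_cov(h, np.array([0.2, 0.5, 0.3]), scales)
w_hat = fit_eda(h, c_emp, scales)
c_fit = mixture_cov(h, w_hat, scales)
```

Here positive definiteness is enforced by construction (nonnegative weights), a far cruder device than the integral transforms of kernel-based distribution surrogates described in the abstract.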
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
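For context, a plain one-nearest-neighbor matching estimator of the average treatment effect on the treated can be sketched as follows. This is a textbook baseline on hypothetical data, not the modified root-n-consistent estimator the paper develops:

```python
import numpy as np

def matching_att(X, y, treated):
    """One-nearest-neighbor matching estimate of the average treatment
    effect on the treated: pair each treated unit with the control whose
    covariates are closest in Euclidean distance."""
    Xt, yt = X[treated], y[treated]
    Xc, yc = X[~treated], y[~treated]
    # Distance matrix between treated and control units.
    d = np.linalg.norm(Xt[:, None, :] - Xc[None, :, :], axis=2)
    matched = yc[d.argmin(axis=1)]
    return np.mean(yt - matched)

# Hypothetical data: outcome = x1 + x2 + 2 * treatment + noise, so the ATT is 2.
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-1, 1, size=(n, 2))
treated = rng.random(n) < 0.5
y = X.sum(axis=1) + 2.0 * treated + 0.05 * rng.normal(size=n)
att = matching_att(X, y, treated)
```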
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- A variational Bayes approach to debiased inference for low-dimensional parameters in high-dimensional linear regression [2.7498981662768536]
We propose a scalable variational Bayes method for statistical inference in sparse linear regression.
Our approach relies on assigning a mean-field approximation to the nuisance coordinates.
This requires only a preprocessing step and preserves the computational advantages of mean-field variational Bayes.
arXiv Detail & Related papers (2024-06-18T14:27:44Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
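A basic single-site inverse propensity score (Horvitz-Thompson) estimator, the building block such a collaborative method pools across sites, can be sketched on hypothetical data with known propensities:

```python
import numpy as np

def ipw_ate(y, treated, propensity):
    """Inverse propensity score estimate of the average treatment effect:
    reweight treated and control outcomes by the inverse probability of
    receiving their observed treatment."""
    t = treated.astype(float)
    return np.mean(t * y / propensity - (1 - t) * y / (1 - propensity))

# Hypothetical data with known propensities and a true ATE of 3.
rng = np.random.default_rng(1)
n = 20000
p = rng.uniform(0.2, 0.8, n)
treated = rng.random(n) < p
y = 1.0 + 3.0 * treated + rng.normal(size=n)
ate = ipw_ate(y, treated, p)
```

The paper's contribution lies in how such estimates are combined when the sites' data distributions differ; that pooling step is not shown here.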
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- Nonparametric Automatic Differentiation Variational Inference with Spline Approximation [7.5620760132717795]
We develop a nonparametric approximation approach that enables flexible posterior approximation for distributions with complicated structures.
Compared with widely used nonparametric inference methods, the proposed method is easy to implement and adaptive to various data structures.
Experiments demonstrate the efficiency of the proposed method in approximating complex posterior distributions and improving the performance of generative models with incomplete data.
arXiv Detail & Related papers (2024-03-10T20:22:06Z)
- On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates [5.13323375365494]
We provide theoretical guarantees for the convergence behaviour of diffusion-based generative models under strongly log-concave data.
Our class of functions used for score estimation consists of Lipschitz continuous functions, avoiding any Lipschitzness assumption on the score function itself.
This approach yields the best known convergence rate for our sampling algorithm.
arXiv Detail & Related papers (2023-11-22T18:40:45Z)
- Approximate Bayesian Computation Based on Maxima Weighted Isolation Kernel Mapping [0.0]
The application of the branching process model to cancer cell evolution faces many difficulties, such as high dimensionality and the rarity of outcomes of interest.
This work addresses the problem of precisely evaluating parameters for this type of model.
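The vanilla ABC rejection scheme that the paper builds on (replacing the plain distance with a maxima-weighted isolation kernel) can be sketched on a hypothetical toy model where the summary statistic is the sample mean:

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, eps, n_draws=20000, seed=0):
    """Vanilla ABC rejection sampling: draw parameters from the prior,
    simulate a summary statistic, and keep draws whose summary falls
    within eps of the observed summary."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Hypothetical toy model: the parameter is the mean of a Gaussian,
# and the summary statistic is the mean of 50 simulated samples.
def simulate(theta, rng):
    return rng.normal(theta, 1.0, size=50).mean()

obs = 2.0  # observed summary (assume the data mean equals the true mean of 2)
post = abc_rejection(obs, simulate, lambda rng: rng.uniform(-5, 5), eps=0.1)
```

The accepted draws approximate the posterior; the paper's contribution is a kernel-based acceptance criterion better suited to high-dimensional, rare-event settings than this scalar distance.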
arXiv Detail & Related papers (2022-01-30T07:11:57Z)
- A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
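The core communication pattern, with each machine contributing local gradient and Hessian information about the same population objective, can be sketched for a ridge-regularized least-squares problem. This is a simplified deterministic stand-in, not the paper's stochastic algorithm:

```python
import numpy as np

def distributed_newton_step(w, shards, lam=0.0):
    """One Newton step for (optionally ridge-regularized) least squares,
    averaging gradients and Hessians computed independently on each
    machine's data shard before solving the Newton system centrally."""
    grads, hessians = [], []
    for X, y in shards:
        r = X @ w - y
        grads.append(X.T @ r / len(y) + lam * w)
        hessians.append(X.T @ X / len(y) + lam * np.eye(len(w)))
    g = np.mean(grads, axis=0)
    H = np.mean(hessians, axis=0)
    return w - np.linalg.solve(H, g)

# Hypothetical homogeneous setting: four machines, noiseless linear data,
# so a single exact Newton step recovers the minimizer.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(4):
    X = rng.normal(size=(500, 3))
    shards.append((X, X @ w_true))
w = distributed_newton_step(np.zeros(3), shards)
```

Each round costs one broadcast of the iterate and one aggregation of local statistics; reducing the number and frequency of such rounds is exactly what the paper analyzes.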
arXiv Detail & Related papers (2021-10-07T17:51:10Z)
- Statistical Inference after Kernel Ridge Regression Imputation under item nonresponse [0.76146285961466]
We consider a nonparametric approach to imputation using the kernel ridge regression technique and propose consistent variance estimation.
The proposed variance estimator is based on a linearization approach which employs the entropy method to estimate the density ratio.
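The imputation step itself, kernel ridge regression fitted on respondents and evaluated at nonrespondents, can be sketched as follows. The RBF kernel, bandwidth, and ridge penalty below are illustrative choices, and the paper's variance estimation via the entropy method is not shown:

```python
import numpy as np

def krr_fit_predict(X_obs, y_obs, X_new, lam=1e-3, gamma=1.0):
    """Kernel ridge regression with an RBF kernel: fit on the observed
    (respondent) pairs, then impute responses at the missing points."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-gamma * d2)
    K = rbf(X_obs, X_obs)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_obs)), y_obs)
    return rbf(X_new, X_obs) @ alpha

# Hypothetical item nonresponse: ~30% of the responses are missing.
rng = np.random.default_rng(0)
n = 400
X = rng.uniform(-2, 2, size=(n, 1))
y = np.sin(X[:, 0])
observed = rng.random(n) < 0.7
y_hat = krr_fit_predict(X[observed], y[observed], X[~observed])
```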
arXiv Detail & Related papers (2021-01-29T20:46:33Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
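As a minimal illustration of such an update rule (not the paper's distributional analysis), here is tabular TD(0) with a constant step size on a hypothetical two-state chain:

```python
import numpy as np

def td0(transitions, n_states, alpha=0.5, gamma=0.9, n_sweeps=200):
    """Tabular TD(0) with constant step size: each update moves V[s]
    toward the bootstrapped target r + gamma * V[s'], the kind of
    exponentially-averaging rule the paper shows is contractive."""
    V = np.zeros(n_states)
    for _ in range(n_sweeps):
        for s, r, s_next in transitions:
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Hypothetical chain: state 0 -> 1 with reward 1, state 1 -> 0 with reward 0.
# Bellman equations give V[0] = 100/19 and V[1] = 90/19.
V = td0([(0, 1.0, 1), (1, 0.0, 0)], n_states=2)
```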
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
- Batch Stationary Distribution Estimation [98.18201132095066]
We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
We propose a consistent estimator that is based on recovering a correction ratio function over the given data.
arXiv Detail & Related papers (2020-03-02T09:10:01Z)
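A naive baseline for this problem, estimating the transition matrix from the sampled transitions and extracting its stationary distribution by power iteration, can be sketched as follows (the paper's correction-ratio estimator is a more sophisticated alternative to this plug-in approach):

```python
import numpy as np

def stationary_from_transitions(transitions, n_states, smoothing=1e-6):
    """Plug-in estimate of the stationary distribution of an ergodic chain
    from sampled (state, next_state) pairs: build the empirical transition
    matrix and iterate pi <- pi @ P to its leading left eigenvector."""
    counts = np.full((n_states, n_states), smoothing)
    for s, s_next in transitions:
        counts[s, s_next] += 1.0
    P = counts / counts.sum(axis=1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    for _ in range(1000):
        pi = pi @ P
    return pi / pi.sum()

# Hypothetical two-state chain whose stationary distribution is (2/3, 1/3).
rng = np.random.default_rng(0)
P_true = np.array([[0.9, 0.1], [0.2, 0.8]])
s = 0
transitions = []
for _ in range(20000):
    s_next = rng.choice(2, p=P_true[s])
    transitions.append((s, s_next))
    s = s_next
pi_hat = stationary_from_transitions(transitions, 2)
```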
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.