Estimating Basis Functions in Massive Fields under the Spatial Mixed
Effects Model
- URL: http://arxiv.org/abs/2003.05990v1
- Date: Thu, 12 Mar 2020 19:36:40 GMT
- Title: Estimating Basis Functions in Massive Fields under the Spatial Mixed
Effects Model
- Authors: Karl T. Pazdernik and Ranjan Maitra
- Abstract summary: For massive datasets, fixed rank kriging using the Expectation-Maximization (EM) algorithm for estimation has been proposed as an alternative to the usual but computationally prohibitive kriging method.
We develop an alternative method that utilizes the Spatial Mixed Effects (SME) model, but allows for additional flexibility by estimating the range of the spatial dependence between the observations and the knots via an Alternating Expectation Conditional Maximization (AECM) algorithm.
Experiments show that our methodology improves estimation without sacrificing prediction accuracy while also minimizing the additional computational burden of extra parameter estimation.
- Score: 8.528384027684194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatial prediction is commonly achieved under the assumption of a Gaussian
random field (GRF) by obtaining maximum likelihood estimates of parameters, and
then using the kriging equations to arrive at predicted values. For massive
datasets, fixed rank kriging using the Expectation-Maximization (EM) algorithm
for estimation has been proposed as an alternative to the usual but
computationally prohibitive kriging method. The method reduces the computational
cost of estimation by redefining the spatial process as a linear combination of
basis functions and spatial random effects. A disadvantage of this method is
that it imposes constraints on the relationship between the observed locations
and the knots. We develop an alternative method that utilizes the Spatial Mixed
Effects (SME) model, but allows for additional flexibility by estimating the
range of the spatial dependence between the observations and the knots via an
Alternating Expectation Conditional Maximization (AECM) algorithm. Experiments
show that our methodology improves estimation without sacrificing prediction
accuracy while also minimizing the additional computational burden of extra
parameter estimation. The methodology is applied to a temperature data set
archived by the United States National Climate Data Center, with improved
results over previous methodology.
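For concreteness, the SME model underlying fixed rank kriging can be written in standard fixed-rank-kriging notation (illustrative, not copied verbatim from the paper): with r basis functions S(s) tied to r knots, r much smaller than the number of observations n,

    Z(\mathbf{s}_i) = \mathbf{x}(\mathbf{s}_i)^\top \boldsymbol\beta
        + \mathbf{S}(\mathbf{s}_i)^\top \boldsymbol\eta
        + \xi(\mathbf{s}_i) + \varepsilon(\mathbf{s}_i),
    \qquad \boldsymbol\eta \sim \mathcal{N}_r(\mathbf{0}, \mathbf{K}),

so that \mathrm{Var}(\mathbf{Z}) = \mathbf{S}\mathbf{K}\mathbf{S}^\top + \mathbf{D} with \mathbf{D} diagonal (fine-scale plus measurement-error variance), and the Sherman-Morrison-Woodbury identity

    \boldsymbol\Sigma^{-1} = \mathbf{D}^{-1} - \mathbf{D}^{-1}\mathbf{S}
        \left(\mathbf{K}^{-1} + \mathbf{S}^\top\mathbf{D}^{-1}\mathbf{S}\right)^{-1}
        \mathbf{S}^\top\mathbf{D}^{-1}

brings each likelihood evaluation and kriging solve down from O(n^3) to O(n r^2).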
Related papers
- Eliminating Ratio Bias for Gradient-based Simulated Parameter Estimation [0.7673339435080445]
This article addresses the challenge of parameter calibration in models where the likelihood function is not analytically available.
We propose a gradient-based simulated parameter estimation framework, leveraging a multi-time-scale approach that tackles the issue of ratio bias in both maximum likelihood estimation and posterior density estimation problems.
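As a toy illustration of the ratio-bias problem and the multi-time-scale remedy (my sketch, not the paper's algorithm): when a gradient is a ratio g = E[A]/E[B] of two quantities that can only be sampled, the plug-in ratio of sample means is biased at any finite sample size, whereas a two-time-scale recursion that tracks the denominator on a faster schedule converges to the true ratio.

    import numpy as np

    rng = np.random.default_rng(0)
    true_A, true_B = 1.0, 2.0                   # target ratio g = E[A]/E[B] = 0.5

    b_avg, g = 1.0, 0.0
    for k in range(1, 200001):
        A = true_A + rng.normal()               # noisy draw of the numerator
        B = true_B + rng.normal()               # noisy draw of the denominator
        b_avg += k ** -0.6 * (B - b_avg)        # fast time scale: denominator tracker
        b_avg = max(b_avg, 0.1)                 # guard against division blow-ups
        g += k ** -1.0 * (A / b_avg - g)        # slow time scale: ratio iterate
    print(g)                                    # approaches 0.5

Because the slow iterate is a running average of A_k / b_avg_k, and b_avg converges to E[B], the bias of the naive ratio estimator washes out in the limit.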
arXiv Detail & Related papers (2024-11-20T02:46:15Z)
- Amortized Bayesian Local Interpolation NetworK: Fast covariance parameter estimation for Gaussian Processes [0.04660328753262073]
We propose an Amortized Bayesian Local Interpolation NetworK for fast covariance parameter estimation.
The fast prediction time of these networks allows us to bypass the matrix inversion step, creating large computational speedups.
We show significant increases in computational efficiency over comparable scalable GP methodology.
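The amortization idea in miniature (a deliberately crude sketch; the actual network architecture, features, and Bayesian treatment in the paper differ): simulate fields under known covariance parameters, train a regression network once, then estimate parameters for new data with a single forward pass instead of a likelihood optimization.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n = 100
    pts = rng.uniform(0, 1, size=(n, 2))                   # fixed spatial design
    d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    bins = np.digitize(d, np.linspace(0.0, 0.5, 6))

    def simulate(rho):                                     # exponential covariance
        C = np.exp(-d / rho) + 1e-8 * np.eye(n)
        return np.linalg.cholesky(C) @ rng.normal(size=n)

    def features(z):                                       # crude variogram summaries
        sq = (z[:, None] - z[None]) ** 2
        return np.array([sq[bins == b].mean() for b in range(1, 6)])

    rhos = rng.uniform(0.05, 0.5, size=2000)               # known training ranges
    X = np.stack([features(simulate(r)) for r in rhos])
    net = MLPRegressor((64, 64), max_iter=1000).fit(X, rhos)

    z_new = simulate(0.2)                                  # "observed" field
    print(net.predict(features(z_new)[None]))              # rough estimate of 0.2

The training cost is paid once, up front; afterwards every new field costs only a feature computation and a forward pass, which is where the speedups over iterative likelihood maximization come from.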
arXiv Detail & Related papers (2024-11-10T01:26:16Z)
- Iterative Methods for Full-Scale Gaussian Process Approximations for Large Spatial Data [9.913418444556486]
We show how iterative methods can be used to reduce the computational costs for calculating likelihoods, gradients, and predictive distributions with FSAs.
We also present a novel, accurate, and fast way to calculate predictive variances relying on stochastic estimation and iterative methods.
All methods are implemented in a free C++ software library with high-level Python and R packages.
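A generic sketch of the iterative idea (not the paper's specific FSA preconditioners): the expensive quadratic form y' Sigma^{-1} y in the Gaussian log-likelihood can be computed with conjugate gradients, which needs only matrix-vector products with Sigma and so pays off whenever those products are cheap, e.g. for low-rank-plus-sparse full-scale approximations.

    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    rng = np.random.default_rng(2)
    n = 2000
    pts = np.sort(rng.uniform(0, 10, n))
    K = np.exp(-np.abs(pts[:, None] - pts[None])) + 0.1 * np.eye(n)
    y = rng.normal(size=n)

    # For an FSA, this matvec would combine a low-rank part and a sparse
    # (tapered) residual part instead of a dense matrix product.
    Sigma = LinearOperator((n, n), matvec=lambda v: K @ v)
    alpha, info = cg(Sigma, y)            # iteratively solves Sigma @ alpha = y
    print(info, y @ alpha)                # info == 0: converged; y' Sigma^{-1} y

    # The log-determinant term is typically handled separately, e.g. with
    # stochastic Lanczos quadrature or Hutchinson-type trace estimators.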
arXiv Detail & Related papers (2024-05-23T12:25:22Z)
- Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
- Interacting Particle Langevin Algorithm for Maximum Marginal Likelihood Estimation [2.53740603524637]
We develop a class of interacting particle systems for implementing a maximum marginal likelihood estimation procedure.
In particular, we prove that the parameter marginal of the stationary measure of this diffusion has the form of a Gibbs measure.
Using a particular rescaling, we then prove geometric ergodicity of this system and bound the discretisation error in a manner that is uniform in time and does not increase with the number of particles.
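A toy discretisation of an interacting particle system of this flavour (my generic sketch for a conjugate Gaussian latent variable model, not the authors' exact scheme): particles X target the latent posterior while the parameter moves along the particle-averaged gradient, with its injected noise damped by 1/N.

    import numpy as np

    rng = np.random.default_rng(3)
    m, N, gam, steps = 200, 50, 0.002, 5000
    # Model: x_j ~ N(theta, 1), y_j | x_j ~ N(x_j, 1); marginal MLE is y.mean().
    y = rng.normal(1.5, np.sqrt(2.0), size=m)

    theta = 0.0
    X = rng.normal(size=(N, m))                   # N particle copies of the latents
    for _ in range(steps):
        g_theta = (X - theta).sum(axis=1).mean()  # particle-averaged grad in theta
        g_X = (theta - X) + (y - X)               # grad of joint log-density in x
        theta += gam * g_theta + np.sqrt(2 * gam / N) * rng.normal()
        X += gam * g_X + np.sqrt(2 * gam) * rng.normal(size=X.shape)

    print(theta, y.mean())                        # theta fluctuates near the MLE

As N grows, the 1/N damping concentrates the parameter marginal around the maximum marginal likelihood estimate, which is what the Gibbs-measure result formalizes.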
arXiv Detail & Related papers (2023-03-23T16:50:08Z)
- Gaussian process regression and conditional Karhunen-Loève models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are imposed through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Adaptive LASSO estimation for functional hidden dynamic geostatistical model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HDGM).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (LASSO) penalty function, wherein the weights are obtained from the unpenalised f-HDGM maximum-likelihood estimators.
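In generic form (standard adaptive-LASSO notation, not taken from the paper), with unpenalised maximum-likelihood estimates \tilde\beta_j used to build the weights:

    \hat{\boldsymbol\beta} = \arg\min_{\boldsymbol\beta}
        \Big\{ -\ell(\boldsymbol\beta)
        + \lambda \sum_{j} \frac{|\beta_j|}{|\tilde{\beta}_j|^{\gamma}} \Big\},
        \qquad \gamma > 0,

so coefficients whose unpenalised estimates are large receive little shrinkage, which is what gives the adaptive LASSO its oracle selection properties relative to the plain LASSO.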
arXiv Detail & Related papers (2022-08-10T19:17:45Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
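One way such a bias constraint can enter a training loss (my illustration of the general idea, not the paper's exact objective): group the training samples by their true underlying parameter and penalise the squared empirical bias of the network's estimates within each group.

    import torch

    def bias_constrained_loss(theta_hat, theta, lam=1.0):
        # theta_hat: (G, M) tensor, M estimates for each of G true parameter values
        # theta:     (G,)   tensor of the true parameter in each group
        mse = ((theta_hat - theta[:, None]) ** 2).mean()
        bias = theta_hat.mean(dim=1) - theta          # empirical bias per group
        return mse + lam * (bias ** 2).mean()

    theta = torch.randn(4)                                # 4 parameter values
    theta_hat = theta[:, None] + 0.1 * torch.randn(4, 8)  # 8 noisy estimates each
    print(bias_constrained_loss(theta_hat, theta))

Driving the within-group bias toward zero is what makes subsequent averaging of multiple estimates of the same unknown actually reduce error.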
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- The Shooting Regressor; Randomized Gradient-Based Ensembles [0.0]
An ensemble method is introduced that utilizes randomization and loss function gradients to compute a prediction.
Multiple weakly-correlated estimators approximate the gradient at randomly sampled points on the error surface and are aggregated into a final solution.
arXiv Detail & Related papers (2020-09-14T03:20:59Z)