A parsimonious, computationally efficient machine learning method for
spatial regression
- URL: http://arxiv.org/abs/2309.16448v1
- Date: Thu, 28 Sep 2023 13:57:36 GMT
- Title: A parsimonious, computationally efficient machine learning method for
spatial regression
- Authors: Milan \v{Z}ukovi\v{c} and Dionissios T. Hristopulos
- Abstract summary: We introduce the modified planar rotator method (MPRS), a physically inspired machine learning method for spatial/temporal regression.
MPRS is a non-parametric model which incorporates spatial or temporal correlations via short-range, distance-dependent "interactions" without assuming a specific form for the underlying probability distribution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the modified planar rotator method (MPRS), a physically inspired
machine learning method for spatial/temporal regression. MPRS is a
non-parametric model which incorporates spatial or temporal correlations via
short-range, distance-dependent ``interactions'' without assuming a specific
form for the underlying probability distribution. Predictions are obtained by
means of a fully autonomous learning algorithm which employs equilibrium
conditional Monte Carlo simulations. MPRS is able to handle scattered data and
arbitrary spatial dimensions. We report tests on various synthetic and
real-world data in one, two and three dimensions which demonstrate that the MPRS
prediction performance (without parameter tuning) is competitive with standard
interpolation methods such as ordinary kriging and inverse distance weighting.
In particular, MPRS is an effective gap-filling method for rough
and non-Gaussian data (e.g., daily precipitation time series). MPRS shows
superior computational efficiency and scalability for large samples. Massive
data sets involving millions of nodes can be processed in a few seconds on a
standard personal computer.
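The abstract's core idea (mapping data values to planar-rotator angles, coupling neighbouring nodes, and filling gaps by equilibrium conditional Monte Carlo) can be sketched in a minimal 1-D toy. Everything below is an illustrative assumption of this sketch, not the authors' implementation: nearest-neighbour cosine couplings stand in for the paper's distance-dependent interactions, the temperature `T` and sweep counts are arbitrary, and the angle mapping is a simple linear rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D series with a gap; values and sizes here are illustrative.
x = np.linspace(0.0, 1.0, 200)
data = np.sin(2 * np.pi * 3 * x) + 0.1 * rng.standard_normal(x.size)
known = np.ones(x.size, dtype=bool)
known[80:120] = False  # 40-node gap to fill

# Map known data linearly onto planar-rotator angles in [0, 2*pi);
# unknown nodes start from random angles.
lo, hi = data[known].min(), data[known].max()
theta = 2 * np.pi * (data - lo) / (hi - lo)
theta[~known] = rng.uniform(0.0, 2 * np.pi, (~known).sum())

def energy_change(theta, i, new_angle, J=1.0):
    """Local energy change for rotating site i, using nearest-neighbour
    cosine couplings (a simplified stand-in for the paper's
    distance-dependent interactions)."""
    dE = 0.0
    for j in (i - 1, i + 1):
        if 0 <= j < theta.size:
            dE -= J * (np.cos(new_angle - theta[j]) - np.cos(theta[i] - theta[j]))
    return dE

# Conditional Metropolis Monte Carlo: only unknown sites are updated;
# known sites stay fixed and act as the conditioning data.
T = 0.1  # fictitious temperature, an assumption of this sketch
gap = np.flatnonzero(~known)
samples = []
for sweep in range(400):
    for i in gap:
        proposal = (theta[i] + rng.uniform(-0.5, 0.5)) % (2 * np.pi)
        dE = energy_change(theta, i, proposal)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            theta[i] = proposal
    if sweep >= 200:  # crude burn-in, then collect equilibrium states
        samples.append(theta[gap].copy())
samples = np.array(samples)

# Predictions: circular mean of the equilibrium angles at each gap node,
# mapped back to the data scale.
mean_angle = np.arctan2(np.sin(samples).mean(axis=0),
                        np.cos(samples).mean(axis=0)) % (2 * np.pi)
pred = lo + (hi - lo) * mean_angle / (2 * np.pi)
```

Because each update touches only a node's immediate neighbours, the cost per sweep scales linearly with the number of unknown nodes, which is consistent with the scalability claim in the abstract.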
Related papers
- Efficient Real-Time Adaptation of ROMs for Unsteady Flows Using Data Assimilation [7.958594167693376]
We propose an efficient retraining strategy for a parameterized Reduced Order Model (ROM). The strategy attains accuracy comparable to full retraining while requiring only a fraction of the computational time. We show that, for the dynamical system considered, the dominant source of error in out-of-sample forecasts stems from distortions of the latent manifold.
arXiv Detail & Related papers (2026-02-26T16:43:28Z)
- Disordered Dynamics in High Dimensions: Connections to Random Matrices and Machine Learning [52.26396748560348]
We provide an overview of high dimensional dynamical systems driven by random matrices. We focus on applications to simple models of learning and generalization in machine learning theory.
arXiv Detail & Related papers (2026-01-03T00:12:32Z)
- Federated Learning with Reservoir State Analysis for Time Series Anomaly Detection [1.1557852082644076]
In federated learning, local model training by multiple clients and model integration by a server are repeated only through model parameter sharing.
We propose federated learning methods with reservoir state analysis to seek computational efficiency and data privacy protection simultaneously.
We evaluate the performance of IncFed MD-RS using benchmark datasets for time series anomaly detection.
arXiv Detail & Related papers (2025-02-08T20:00:23Z)
- Parallel Simulation for Log-concave Sampling and Score-based Diffusion Models [55.07411490538404]
We propose a novel parallel sampling method that improves adaptive complexity dependence on dimension $d$. Our approach builds on parallel simulation techniques from scientific computing.
arXiv Detail & Related papers (2024-12-10T11:50:46Z)
- Amortized Bayesian Local Interpolation NetworK: Fast covariance parameter estimation for Gaussian Processes [0.04660328753262073]
We propose an Amortized Bayesian Local Interpolation NetworK for fast covariance parameter estimation.
The fast prediction time of these networks allows us to bypass the matrix inversion step, creating large computational speedups.
We show significant increases in computational efficiency over comparable scalable GP methodology.
arXiv Detail & Related papers (2024-11-10T01:26:16Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC efficiently performs both parameter estimation and particle proposal adaptation entirely on the fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Learning Radio Environments by Differentiable Ray Tracing [56.40113938833999]
We introduce a novel gradient-based calibration method, complemented by differentiable parametrizations of material properties, scattering and antenna patterns.
We have validated our method using both synthetic data and real-world indoor channel measurements, employing a distributed multiple-input multiple-output (MIMO) channel sounder.
arXiv Detail & Related papers (2023-11-30T13:50:21Z)
- One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from Electromagnetic Solvers [57.441926088870325]
Deep Image Prior (DIP) is a technique that optimizes the weights of a randomly initialized convolutional neural network to fit a signal from noisy or under-determined measurements.
Relative to publicly available implementations of Vector Fitting (VF), our method shows superior performance on nearly all test examples.
arXiv Detail & Related papers (2023-06-06T20:28:37Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- StreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of KRR require that all the data is stored in the main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- Local approximate Gaussian process regression for data-driven constitutive laws: Development and comparison with neural networks [0.0]
We show how to use local approximate Gaussian process regression (laGPR) to predict stress outputs at particular strain space locations.
A modified Newton-Raphson approach is proposed to accommodate the local nature of the laGPR approximation when solving the global structural problem in an FE setting.
arXiv Detail & Related papers (2021-05-07T14:49:28Z)
- Fast covariance parameter estimation of spatial Gaussian process models using neural networks [0.0]
We train NNs to take moderate size spatial fields or variograms as input and return the range and noise-to-signal covariance parameters.
Once trained, the NNs provide estimates with a similar accuracy compared to ML estimation and at a speedup by a factor of 100 or more.
This work can be easily extended to other, more complex, spatial problems and provides a proof-of-concept for this use of machine learning in computational statistics.
arXiv Detail & Related papers (2020-12-30T22:06:26Z)
- DeepGMR: Learning Latent Gaussian Mixture Models for Registration [113.74060941036664]
Point cloud registration is a fundamental problem in 3D computer vision, graphics and robotics.
In this paper, we introduce Deep Gaussian Mixture Registration (DeepGMR), the first learning-based registration method.
Our proposed method shows favorable performance when compared with state-of-the-art geometry-based and learning-based registration methods.
arXiv Detail & Related papers (2020-08-20T17:25:16Z)
- Scalable Hybrid HMM with Gaussian Process Emission for Sequential Time-series Data Clustering [13.845932997326571]
Hidden Markov Model (HMM) combined with Gaussian Process (GP) emission can be effectively used to estimate the hidden state.
This paper proposes a scalable learning method for HMM-GPSM.
arXiv Detail & Related papers (2020-01-07T07:28:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.