Mitigation of Spatial Nonstationarity with Vision Transformers
- URL: http://arxiv.org/abs/2212.04633v1
- Date: Fri, 9 Dec 2022 02:16:05 GMT
- Title: Mitigation of Spatial Nonstationarity with Vision Transformers
- Authors: Lei Liu, Javier E. Santos, Maša Prodanović, and Michael J. Pyrcz
- Abstract summary: We show the impact of two common types of geostatistical spatial nonstationarity on deep learning model prediction performance.
We propose the mitigation of such impacts using self-attention (vision transformer) models.
- Score: 1.690637178959708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spatial nonstationarity, the location variance of features' statistical
distributions, is ubiquitous in many natural settings. For example, in
geological reservoirs rock matrix porosity varies vertically due to
geomechanical compaction trends, in mineral deposits grades vary due to
sedimentation and concentration processes, in hydrology rainfall varies due to
the atmosphere and topography interactions, and in metallurgy crystalline
structures vary due to differential cooling. Conventional geostatistical
modeling workflows rely on the assumption of stationarity to model
spatial features for geostatistical inference. Nevertheless, this is often
not a realistic assumption when dealing with nonstationary spatial data and
this has motivated a variety of nonstationary spatial modeling workflows such
as trend and residual decomposition, cosimulation with secondary features, and
spatial segmentation and independent modeling over stationary subdomains. The
advent of deep learning technologies has enabled new workflows for modeling
spatial relationships. However, there is a paucity of demonstrated best
practice and general guidance on mitigation of spatial nonstationarity with
deep learning in the geospatial context. We demonstrate the impact of two
common types of geostatistical spatial nonstationarity on deep learning model
prediction performance and propose the mitigation of such impacts using
self-attention (vision transformer) models. We demonstrate the utility of
vision transformers for the mitigation of nonstationarity with relative errors
as low as 10%, exceeding the performance of alternative deep learning methods
such as convolutional neural networks. We establish best practice by
demonstrating the ability of self-attention networks for modeling large-scale
spatial relationships in the presence of commonly observed geospatial
nonstationarity.
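The advantage the abstract attributes to self-attention is its global receptive field: every spatial patch attends to every other patch, so a trend spanning the whole domain is visible in a single layer, unlike the local kernels of a CNN. The following is a minimal NumPy sketch of that mechanism, not the paper's architecture; the patch size, projection dimensions, and random weights are all illustrative assumptions.

```python
import numpy as np

def patchify(field, patch=8):
    """Split a 2D spatial field (e.g., a porosity map) into flattened patches."""
    h, w = field.shape
    rows, cols = h // patch, w // patch
    return (field[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch)
            .swapaxes(1, 2)
            .reshape(rows * cols, patch * patch))

def self_attention(tokens, d_k=16, seed=0):
    """Single-head scaled dot-product attention over patch tokens.

    Every patch attends to every other patch, so each output token mixes
    information from the whole field regardless of distance; this global
    mixing is the property exploited to handle nonstationarity.
    """
    rng = np.random.default_rng(seed)
    d_in = tokens.shape[1]
    Wq = rng.normal(0, d_in ** -0.5, (d_in, d_k))  # illustrative random
    Wk = rng.normal(0, d_in ** -0.5, (d_in, d_k))  # weights standing in
    Wv = rng.normal(0, d_in ** -0.5, (d_in, d_k))  # for learned ones
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# A synthetic 64x64 field with a vertical trend (a simple nonstationarity).
rng = np.random.default_rng(1)
field = np.linspace(0, 1, 64)[:, None] + 0.1 * rng.normal(size=(64, 64))
tokens = patchify(field)             # 64 tokens, each a flattened 8x8 patch
out, attn = self_attention(tokens)
print(out.shape, attn.shape)         # (64, 16) (64, 64)
```

Because the attention weights are computed per input, distant patches that share a trend can be weighted heavily even when the local statistics differ, which is the intuition behind using this mechanism for nonstationary fields.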
Related papers
- A class of modular and flexible covariate-based covariance functions for nonstationary spatial modeling [0.0]
We present a class of covariance functions that relies on fixed, observable spatial information.
This model allows for separate structures for different sources of nonstationarity, such as marginal standard deviation, geometric anisotropy, and smoothness.
We analyze the capabilities of the presented model through simulation studies and an application to Swiss precipitation data.
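The specific covariance class from that paper is not reproduced here, but the general idea of covariate-driven nonstationarity can be sketched with a toy kernel whose marginal standard deviation varies with an observed covariate (elevation is an assumed example); the functional form below is a simplification chosen for illustration, not the authors' model.

```python
import numpy as np

def nonstationary_cov(X, covariate, length_scale=1.0):
    """Toy covariate-driven nonstationary covariance:
    C(x_i, x_j) = sigma(x_i) * sigma(x_j) * exp(-||x_i - x_j|| / l),
    i.e. a stationary exponential kernel rescaled by a spatially
    varying standard deviation read off an observed covariate.
    """
    sigma = 0.5 + covariate                      # std grows with the covariate
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # D K D with D diagonal keeps the matrix positive semidefinite.
    return np.outer(sigma, sigma) * np.exp(-d / length_scale)

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # three site locations
elev = np.array([0.0, 0.5, 1.0])                    # covariate at each site
C = nonstationary_cov(X, elev)
print(np.round(C, 3))
```

The diagonal of `C` is the site-dependent variance `sigma**2`, so the model departs from stationarity in its marginal spread while still decaying with distance.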
arXiv Detail & Related papers (2024-10-22T05:53:25Z) - Synthetic location trajectory generation using categorical diffusion
models [50.809683239937584]
Diffusion probabilistic models (DPMs) have rapidly evolved to become one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z) - Deep autoregressive modeling for land use land cover [0.0]
Land use / land cover (LULC) modeling is a challenging task due to long-range dependencies between geographic features and distinct spatial patterns related to topography, ecology, and human development.
We identify a close connection between modeling of spatial patterns of land use and the task of image inpainting from computer vision and conduct a study of a modified PixelCNN architecture with approximately 19 million parameters for modeling LULC.
arXiv Detail & Related papers (2024-01-02T18:03:57Z) - Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image
Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets to capture valid information in the content-conditioned range to help transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Strategic Geosteering Workflow with Uncertainty Quantification and Deep
Learning: A Case Study on the Goliat Field [0.0]
This paper presents a practical workflow consisting of offline and online phases.
The offline phase includes training and building of an uncertain prior near-well geo-model.
The online phase uses the flexible iterative ensemble smoother (FlexIES) to perform real-time assimilation of extra-deep electromagnetic data.
arXiv Detail & Related papers (2022-10-27T15:38:26Z) - Few Shot Generative Model Adaption via Relaxed Spatial Structural
Alignment [130.84010267004803]
Training a generative adversarial network (GAN) with limited data has been a challenging task.
A feasible solution is to start with a GAN well-trained on a large scale source domain and adapt it to the target domain with a few samples, termed as few shot generative model adaption.
We propose a relaxed spatial structural alignment method to calibrate the target generative models during the adaption.
arXiv Detail & Related papers (2022-03-06T14:26:25Z) - Deep-learning-based coupled flow-geomechanics surrogate model for CO$_2$
sequestration [4.635171370680939]
The 3D recurrent R-U-Net model combines deep convolutional and recurrent neural networks to capture the spatial distribution and temporal evolution of saturation, pressure and surface displacement fields.
The surrogate model is trained to predict the 3D CO2 saturation and pressure fields in the storage aquifer, and 2D displacement maps at the Earth's surface.
arXiv Detail & Related papers (2021-05-04T07:34:15Z) - Evidential Sparsification of Multimodal Latent Spaces in Conditional
Variational Autoencoders [63.46738617561255]
We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder.
We use evidential theory to identify the latent classes that receive direct evidence from a particular input condition and filter out those that do not.
Experiments on diverse tasks, such as image generation and human behavior prediction, demonstrate the effectiveness of our proposed technique.
arXiv Detail & Related papers (2020-10-19T01:27:21Z) - Semiparametric Bayesian Forecasting of Spatial Earthquake Occurrences [77.68028443709338]
We propose a fully Bayesian formulation of the Epidemic Type Aftershock Sequence (ETAS) model.
The occurrence of the mainshock earthquakes in a geographical region is assumed to follow an inhomogeneous spatial point process.
arXiv Detail & Related papers (2020-02-05T10:11:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.