Generation of non-stationary stochastic fields using Generative
Adversarial Networks with limited training data
- URL: http://arxiv.org/abs/2205.05469v1
- Date: Wed, 11 May 2022 13:09:47 GMT
- Title: Generation of non-stationary stochastic fields using Generative
Adversarial Networks with limited training data
- Authors: Alhasan Abdellatif, Ahmed H. Elsheikh, Daniel Busby, Philippe Berthet
- Abstract summary: In this work, we investigate the problem of training Generative Adversarial Network (GAN) models on a dataset of geological channelized patterns.
The developed training method allowed for effective learning of the correlation between the spatial conditions and the generated realizations.
Our models were able to generate geologically plausible realizations beyond the training samples, with a strong correlation to the target maps.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of generating geological facies conditioned on observed data,
samples corresponding to all possible conditions are not generally available in
the training set, and hence the generation of these realizations depends primarily
on the generalization capability of the trained generative model. The problem
becomes more complex when applied to non-stationary fields. In this work, we
investigate the problem of training Generative Adversarial Network (GAN)
models on a dataset of geological channelized patterns that has a few
non-stationary spatial modes and examine the training and self-conditioning
settings that improve the generalization capability at new spatial modes that
were never seen in the given training set. The developed training method
allowed for effective learning of the correlation between the spatial
conditions (i.e. non-stationary maps) and the realizations implicitly without
using additional loss terms or solving a costly optimization problem at the
realization generation phase. Our models, trained on real and artificial
datasets, were able to generate geologically plausible realizations beyond the
training samples, with a strong correlation to the target maps.
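The conditioning mechanism described in the abstract — the generator receives the non-stationary spatial map alongside its noise input, and the discriminator judges (realization, condition) pairs — can be sketched as follows. This is a minimal, framework-free illustration with hypothetical shapes (`MAP_SIZE`, `NOISE_DIM`) and single linear layers, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

MAP_SIZE = 16 * 16   # flattened non-stationary condition map (hypothetical size)
NOISE_DIM = 32       # latent noise dimension (hypothetical)
OUT_SIZE = 16 * 16   # flattened generated realization

# Hypothetical single-layer "networks"; real models would be deep CNNs.
W_gen = rng.normal(0.0, 0.1, (MAP_SIZE + NOISE_DIM, OUT_SIZE))
W_disc = rng.normal(0.0, 0.1, (OUT_SIZE + MAP_SIZE, 1))

def generator(condition_map, noise):
    """Condition by concatenation: the generator sees the spatial map
    together with the noise, so the map-to-realization correlation can
    be learned implicitly, without an extra loss term."""
    x = np.concatenate([condition_map, noise], axis=-1)
    return np.tanh(x @ W_gen)

def discriminator(realization, condition_map):
    """Score (realization, condition) pairs, so a realization that
    ignores its condition map can be rejected as fake."""
    x = np.concatenate([realization, condition_map], axis=-1)
    return 1.0 / (1.0 + np.exp(-(x @ W_disc)))

batch = 4
cond = rng.normal(size=(batch, MAP_SIZE))
z = rng.normal(size=(batch, NOISE_DIM))
fake = generator(cond, z)
score = discriminator(fake, cond)
print(fake.shape, score.shape)  # (4, 256) (4, 1)
```

In a trained model both networks would be optimized adversarially; the sketch only shows the data flow that lets conditioning happen by concatenation rather than through an additional loss term or a costly optimization at generation time.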
Related papers
- Distance-Preserving Spatial Representations in Genomic Data [0.0]
The spatial context of single-cell gene expression data is crucial for many downstream analyses, yet often remains inaccessible due to practical and technical limitations.
We propose a generic representation learning and transfer learning framework dp-VAE, capable of reconstructing the spatial coordinates associated with the provided gene expression data.
arXiv Detail & Related papers (2024-08-01T21:04:27Z)
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive to domain shifts.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- A Temporally Disentangled Contrastive Diffusion Model for Spatiotemporal Imputation [35.46631415365955]
We introduce a conditional diffusion framework called C$2$TSD, which incorporates disentangled temporal (trend and seasonality) representations as conditional information.
Our experiments on three real-world datasets demonstrate the superior performance of our approach compared to a number of state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-18T11:59:04Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
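The Koopman-inspired design summarized above represents the latent prior dynamics with a linear map. The sketch below shows how such a map can be recovered from a noiseless latent trajectory by least squares (a DMD-style fit); the matrix `A_true` and the dimensions are hypothetical, not taken from KoVAE:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 3  # latent dimension (hypothetical)
# Ground-truth linear latent dynamics z_{t+1} = A @ z_t
# (a stable rotation-plus-decay map, chosen for illustration).
A_true = np.array([[0.9, -0.2, 0.0],
                   [0.2,  0.9, 0.0],
                   [0.0,  0.0, 0.8]])

# Roll out a latent trajectory.
T = 50
Z = np.empty((T, d))
Z[0] = rng.normal(size=d)
for t in range(T - 1):
    Z[t + 1] = A_true @ Z[t]

# Recover the linear map by least squares: fit Z[1:] ~ Z[:-1] @ A^T.
A_fit, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
A_fit = A_fit.T
print(np.allclose(A_fit, A_true, atol=1e-6))
```

In a VAE this linear map would parameterize the conditional prior over latent states, with an encoder and decoder trained jointly around it; the fit above only illustrates why a linear latent model is easy to identify and analyze.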
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- Generating unrepresented proportions of geological facies using Generative Adversarial Networks [0.0]
We investigate the capacity of Generative Adversarial Networks (GANs) in interpolating and extrapolating facies proportions in a geological dataset.
Specifically, we design a conditional GANs model that can drive the generated facies toward new proportions not found in the training set.
The presented numerical experiments on images of binary and multiple facies showed good geological consistency as well as strong correlation with the target conditions.
arXiv Detail & Related papers (2022-03-17T22:38:45Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
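The expression-generation task mentioned here can be illustrated with a naive rejection-sampling baseline over a tiny arithmetic grammar — a hypothetical point of comparison, not the paper's reinforcement-learning method:

```python
import random

random.seed(0)

def random_expr(depth=0):
    """Sample a small arithmetic expression over the integers 1-9."""
    if depth >= 2 or random.random() < 0.4:
        return str(random.randint(1, 9))
    op = random.choice(["+", "-", "*"])
    return f"({random_expr(depth + 1)}{op}{random_expr(depth + 1)})"

def search(target, tries=10_000):
    """Goal-directed generation reduced to rejection sampling: keep the
    first sampled expression whose value equals the target."""
    for _ in range(tries):
        e = random_expr()
        if eval(e) == target:
            return e
    return None

expr = search(24)
print(expr, "=", eval(expr))
```

The paper's contribution is precisely to replace this blind sampling with a policy trained to maximize an expected reward, so that high-value expressions are generated directly rather than found by chance.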
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Partially Conditioned Generative Adversarial Networks [75.08725392017698]
Generative Adversarial Networks (GANs) let one synthesise artificial datasets by implicitly modelling the underlying probability distribution of a real-world training dataset.
With the introduction of Conditional GANs and their variants, these methods were extended to generating samples conditioned on ancillary information available for each sample within the dataset.
In this work, we argue that standard Conditional GANs are not suitable for generation when only part of the conditioning information is available, and propose a new Adversarial Network architecture and training strategy.
arXiv Detail & Related papers (2020-07-06T15:59:28Z) - Data-driven learning of robust nonlocal physics from high-fidelity
synthetic data [3.9181541460605116]
A key challenge for nonlocal models is the analytical complexity of deriving them from first principles; frequently their use is justified a posteriori.
In this work we extract nonlocal models from data, circumventing these challenges and providing data-driven justification for the resulting model form.
arXiv Detail & Related papers (2020-05-17T22:53:14Z) - Domain segmentation and adjustment for generalized zero-shot learning [22.933463036413624]
In zero-shot learning, synthesizing unseen data with generative models has been the most popular method to address the imbalance of training data between seen and unseen classes.
We argue that synthesizing unseen data may not be an ideal approach for addressing the domain shift caused by the imbalance of the training data.
In this paper, we propose to realize the generalized zero-shot recognition in different domains.
arXiv Detail & Related papers (2020-02-01T15:00:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.