Scaling through abstractions -- high-performance vectorial wave
simulations for seismic inversion with Devito
- URL: http://arxiv.org/abs/2004.10519v1
- Date: Wed, 22 Apr 2020 12:20:07 GMT
- Title: Scaling through abstractions -- high-performance vectorial wave
simulations for seismic inversion with Devito
- Authors: Mathias Louboutin, Fabio Luporini, Philipp Witte, Rhodri Nelson,
George Bisbas, Jan Thorbecke, Felix J. Herrmann, and Gerard Gorman
- Abstract summary: Devito is an open-source Python project based on domain-specific language and compiler technology.
This article presents the generation and simulation of MPI-parallel propagators for the pseudo-acoustic wave-equation in tilted transverse isotropic media.
- Score: 0.6745502291821955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Devito is an open-source Python project based on domain-specific language
and compiler technology. Driven by the requirements of rapid HPC application
development in exploration seismology, the language and compiler have evolved
significantly since inception. Sophisticated boundary conditions, tensor
contractions, sparse operations and features such as staggered grids and
sub-domains are all supported; operators of essentially arbitrary complexity
can be generated. To accommodate this flexibility whilst ensuring performance,
data-dependency analysis is used to schedule loops and detect
computational properties such as parallelism. In this article, the generation
and simulation of MPI-parallel propagators (along with their adjoints) for the
pseudo-acoustic wave-equation in tilted transverse isotropic media and the
elastic wave-equation are presented. Simulations are carried out on
industry-scale synthetic models in an HPC cloud system and reach a performance
of 28 TFLOP/s, demonstrating Devito's suitability for production-grade
seismic inversion problems.
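As a small illustration of the abstraction the abstract describes, the sketch below builds a second-order-in-time acoustic propagator with Devito's symbolic API. The grid size, wave speed, and time step are arbitrary example values, not settings from the paper.

```python
from devito import Grid, TimeFunction, Eq, Operator, solve

# 1 km x 1 km domain on a 101 x 101 grid
grid = Grid(shape=(101, 101), extent=(1000., 1000.))

# Wavefield with 2nd-order time and 4th-order space discretization
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=4)
u.data[0, 50, 50] = 1.0  # point disturbance as a stand-in for a source term

c = 1500.0  # homogeneous wave speed in m/s (example value)

# Symbolic PDE: u_tt = c^2 * laplacian(u); Devito derives the update stencil
pde = u.dt2 - c**2 * u.laplace
update = Eq(u.forward, solve(pde, u.forward))

# The compiler generates optimized (optionally MPI-parallel) C code here
op = Operator([update])
op.apply(time_M=500, dt=1e-3)  # dt chosen to satisfy the CFL condition
```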
Related papers
- AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer [54.713778961605115]
Vision Transformer (ViT) has become one of the most prevailing fundamental backbone networks in the computer vision community.
We propose a novel non-uniform quantizer, dubbed the Adaptive Logarithm (AdaLog) quantizer.
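The adaptive-base quantizer itself is not reproduced here, but a minimal fixed-base logarithmic quantizer conveys the non-uniform idea: values in (0, 1] (e.g., post-softmax attention) get fine resolution near zero. The bit width and base below are illustrative choices; AdaLog additionally adapts the base, which this sketch does not model.

```python
import numpy as np

def log_quantize(x, n_bits=4, base=2.0, eps=1e-8):
    """Fixed-base log quantizer for values in (0, 1] (schematic only)."""
    n_levels = 2 ** n_bits
    x = np.clip(x, eps, 1.0)
    q = np.round(-np.log(x) / np.log(base))   # index on a logarithmic grid
    q = np.clip(q, 0, n_levels - 1)
    return base ** (-q)                       # dequantized approximation

attn = np.array([0.70, 0.20, 0.07, 0.03])    # toy post-softmax row
print(log_quantize(attn))
```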
arXiv Detail & Related papers (2024-07-17T18:38:48Z)
- Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators [12.165876595927452]
Universal Physics Transformers (UPTs) are an efficient and unified learning paradigm for a wide range of problems.
UPTs operate without grid- or particle-based latent meshes, enabling flexibility across structures and particles.
We demonstrate the diverse applicability and efficacy of UPTs in mesh-based fluid simulations and steady-state Reynolds-averaged Navier-Stokes simulations.
arXiv Detail & Related papers (2024-02-19T18:52:13Z)
- InVAErt networks: a data-driven framework for model synthesis and identifiability analysis [0.0]
inVAErt is a framework for data-driven analysis and synthesis of physical systems.
It uses a deterministic decoder to represent the forward and inverse maps, a normalizing flow to capture the probabilistic distribution of system outputs, and a variational encoder to learn a compact latent representation that accounts for the lack of bijectivity between inputs and outputs.
arXiv Detail & Related papers (2023-07-24T07:58:18Z)
- Learning in latent spaces improves the predictive accuracy of deep neural operators [0.0]
L-DeepONet is an extension of standard DeepONet, which leverages latent representations of high-dimensional PDE input and output functions identified with suitable autoencoders.
We show that L-DeepONet outperforms the standard approach in terms of both accuracy and computational efficiency across diverse time-dependent PDEs.
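A schematic of the latent-space idea, assuming PyTorch: autoencoders compress the high-dimensional fields, and a DeepONet (branch times trunk) maps between latent spaces. The layer widths, latent size, and the way the trunk consumes the query time are illustrative guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LatentDeepONet(nn.Module):
    """Schematic L-DeepONet: encode field -> DeepONet in latent space -> decode."""

    def __init__(self, field_dim=1024, latent=16, p=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(field_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, field_dim))
        # Branch net: latent input function -> p coefficients per latent dim
        self.branch = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                    nn.Linear(64, latent * p))
        # Trunk net: query time t -> p basis values
        self.trunk = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                                   nn.Linear(64, p))
        self.latent, self.p = latent, p

    def forward(self, f0, t):
        z0 = self.encoder(f0)                          # compress input field
        b = self.branch(z0).view(-1, self.latent, self.p)
        phi = self.trunk(t)                            # (batch, p)
        zt = torch.einsum('blp,bp->bl', b, phi)        # latent solution at t
        return self.decoder(zt)                        # decode to full field

model = LatentDeepONet()
u_t = model(torch.randn(8, 1024), torch.rand(8, 1))   # batch of 8 queries
print(u_t.shape)                                       # torch.Size([8, 1024])
```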
arXiv Detail & Related papers (2023-04-15T17:13:09Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained
Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent
Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as a structural prior and reveal the underlying signal interdependencies.
Deep-unrolling and deep-equilibrium based algorithms are developed, yielding highly interpretable and concise deep-learning-based architectures.
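As a hedged illustration of deep unrolling (generic ISTA for sparse coding, not the authors' block-sparse, deep-equilibrium architecture): each "layer" below is one iteration for min_x 0.5*||Ax - y||^2 + lam*||x||_1. In a learned version the step sizes and thresholds would become trainable parameters.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unrolled_ista(A, y, n_layers=100, lam=0.1):
    """Deep-unrolling skeleton: a fixed number of ISTA iterations; the
    quantities a trained network would learn are fixed constants here."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256); x_true[[3, 42, 100]] = [1.0, -2.0, 0.5]
x_hat = unrolled_ista(A, A @ x_true)     # recovers a sparse approximation
```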
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Basic cross-platform tensor frameworks and script-language engines do not by themselves supply the procedures and pipelines needed to deploy machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all such requirements while using only those basic engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for accelerating stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- A Doubly Stochastic Simulator with Applications in Arrivals Modeling and Simulation [8.808993671472349]
We propose a framework that integrates classical Monte Carlo simulators and Wasserstein generative adversarial networks to model, estimate, and simulate a broad class of arrival processes.
Classical Monte Carlo simulators have advantages at capturing interpretable "physics" of a Poisson object, whereas neural-network-based simulators have advantages at capturing less-interpretable complicated dependence within a high-dimensional distribution.
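This hybrid can be read as a doubly stochastic (Cox) Poisson process: a generator supplies a random intensity path, and classical thinning simulates the arrivals. In the sketch below, a hand-written sinusoidal intensity stands in for the paper's WGAN generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_intensity():
    """Stand-in for a neural generator: returns a random intensity path."""
    a, b = rng.uniform(10, 30), rng.uniform(0, 10)
    return lambda t: a + b * np.sin(2 * np.pi * t)

def cox_arrivals(horizon=1.0, lam_max=50.0):
    """Doubly stochastic simulator: draw an intensity, then thin a
    homogeneous Poisson process with rate lam_max down to it."""
    lam = sample_intensity()                 # the 'doubly stochastic' draw
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)  # candidate arrival time
        if t > horizon:
            return np.array(arrivals)
        if rng.uniform() < lam(t) / lam_max: # accept with prob lam(t)/lam_max
            arrivals.append(t)

print(len(cox_arrivals()))                   # number of simulated arrivals
```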
arXiv Detail & Related papers (2020-12-27T13:32:16Z)
- Data Augmentation at the LHC through Analysis-specific Fast Simulation with Deep Learning [4.666011151359189]
We present a fast simulation application based on a Deep Neural Network, designed to create large analysis-specific datasets.
We propose a novel fast-simulation workflow that starts from a large amount of generator-level events to deliver large analysis-specific samples.
arXiv Detail & Related papers (2020-10-05T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.