SVInvNet: A Densely Connected Encoder-Decoder Architecture for Seismic Velocity Inversion
- URL: http://arxiv.org/abs/2312.08194v2
- Date: Tue, 01 Apr 2025 12:44:26 GMT
- Title: SVInvNet: A Densely Connected Encoder-Decoder Architecture for Seismic Velocity Inversion
- Authors: Mojtaba Najafi Khatounabad, Hacer Yalim Keles, Selma Kadioglu
- Abstract summary: This study presents a deep learning-based approach to the seismic velocity inversion problem, focusing on both noisy and noiseless training datasets of varying sizes. Our Seismic Velocity Inversion Network (SVInvNet) introduces a novel architecture that contains a multi-connection encoder-decoder structure enhanced with dense blocks. For training and testing, we created diverse seismic velocity models, including multi-layered, faulty, and salt dome categories.
- Score: 1.8024397171920885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents a deep learning-based approach to the seismic velocity inversion problem, focusing on both noisy and noiseless training datasets of varying sizes. Our Seismic Velocity Inversion Network (SVInvNet) introduces a novel architecture that contains a multi-connection encoder-decoder structure enhanced with dense blocks. This design is specifically tuned to effectively process time series data, which is essential for addressing the challenges of non-linear seismic velocity inversion. For training and testing, we created diverse seismic velocity models, including multi-layered, faulty, and salt dome categories. We also investigated how different kinds of ambient noise, both coherent and stochastic, and the size of the training dataset affect learning outcomes. SVInvNet is trained on datasets ranging from 750 to 6,000 samples and is tested using a large benchmark dataset of 12,000 samples. Despite having fewer parameters than the baseline model, SVInvNet achieves superior performance with this dataset. The performance of SVInvNet was further evaluated using the OpenFWI dataset and Marmousi-derived velocity models. The comparative analysis clearly reveals the effectiveness of the proposed model.
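The paper does not include code, but the dense connectivity the abstract describes (each layer in a dense block receiving the concatenation of the block input and all preceding layer outputs) can be sketched in a minimal, framework-free form. The toy layer function and channel counts below are illustrative assumptions, not SVInvNet's actual configuration.

```python
import numpy as np

def dense_block(x, layer_fns):
    """Toy dense block: every layer receives the channel-wise concatenation
    of the block input and all previous layer outputs, and the block emits
    the concatenation of everything (DenseNet-style connectivity)."""
    features = [x]
    for fn in layer_fns:
        stacked = np.concatenate(features, axis=0)  # dense connections
        features.append(fn(stacked))
    return np.concatenate(features, axis=0)

def toy_layer(z):
    # Stand-in for a convolutional layer: collapse all incoming channels
    # into 2 new feature channels of the same length.
    m = z.mean(axis=0, keepdims=True)
    return np.vstack([m, 0.5 * m])

x = np.ones((4, 8))  # 4 input channels, 8 time samples
out = dense_block(x, [toy_layer, toy_layer, toy_layer])
print(out.shape)  # (10, 8): channel count grows 4 -> 6 -> 8 -> 10
```

The feature reuse shown here, where each layer sees every earlier feature map, is what lets densely connected encoder-decoders stay compact in parameter count.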
Related papers
- InJecteD: Analyzing Trajectories and Drift Dynamics in Denoising Diffusion Probabilistic Models for 2D Point Cloud Generation [48.55037712252843]
InJecteD is a framework for interpreting Denoising Diffusion Probabilistic Models (DDPMs).
We apply this framework to three datasets from the Datasaurus Dozen: bullseye, dino, and circle.
Our approach quantifies trajectory properties, including displacement, velocity, clustering, and drift field dynamics.
arXiv Detail & Related papers (2025-09-09T14:53:19Z)
- Seismic Velocity Inversion from Multi-Source Shot Gathers Using Deep Segmentation Networks: Benchmarking U-Net Variants and SeismoLabV3+ [0.0]
This research benchmarks three advanced encoder-decoder architectures -- U-Net, U-Net++, and DeepLabV3+ -- together with SeismoLabV3+.
Results show that SeismoLabV3+ achieves the best performance, with MAPE values of 0.03025 on the internal validation split and 0.031246 on the hidden test set.
arXiv Detail & Related papers (2025-09-07T14:41:39Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - A convolutional neural network approach to deblending seismic data [1.5488464287814563]
We present a data-driven deep learning-based method for fast and efficient seismic deblending.
A convolutional neural network (CNN) is designed according to the special character of seismic data.
After training and validation of the network, seismic deblending can be performed in near real time.
arXiv Detail & Related papers (2024-09-12T10:54:35Z) - A Lightweight Measure of Classification Difficulty from Application Dataset Characteristics [4.220363193932374]
We propose an efficient cosine similarity-based classification difficulty measure S.
It is calculated from the number of classes and intra- and inter-class similarity metrics of the dataset.
We show how a practitioner can use this measure to help select an efficient model 6 to 29x faster than through repeated training and testing.
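The abstract does not give the exact formula for S, so the sketch below is a hypothetical instantiation of the stated ingredients (number of classes, intra-class and inter-class cosine similarity); the ratio form and the class-count scaling are assumptions, not the paper's definition.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def difficulty(class_vectors):
    """Hypothetical difficulty score: number of classes times the ratio of
    mean inter-class to mean intra-class cosine similarity."""
    labels = list(class_vectors)
    intra, inter = [], []
    for c in labels:
        vecs = class_vectors[c]
        intra += [cosine(vecs[i], vecs[j])
                  for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            inter += [cosine(a, b)
                      for a in class_vectors[labels[i]]
                      for b in class_vectors[labels[j]]]
    return len(labels) * np.mean(inter) / np.mean(intra)

def make_class(direction, n, rng, spread):
    return [direction + rng.normal(0.0, spread, direction.size) for _ in range(n)]

rng = np.random.default_rng(0)
e1, e2 = np.eye(8)[0], np.eye(8)[1]
# Well-separated class directions vs. heavily overlapping ones.
easy = {1: make_class(e1, 5, rng, 0.05), 2: make_class(e2, 5, rng, 0.05)}
hard = {1: make_class(e1, 5, rng, 0.5), 2: make_class(e1, 5, rng, 0.5)}
print(difficulty(easy) < difficulty(hard))  # True: separated classes score lower
```

The point of such a measure is that it needs only one pass over the dataset's features, which is what makes model selection far cheaper than repeated training and testing.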
arXiv Detail & Related papers (2024-04-09T03:27:09Z) - Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution [62.71425232332837]
We show that training amortized models with noisy labels is inexpensive and surprisingly effective.
This approach significantly accelerates several feature attribution and data valuation methods, often yielding an order of magnitude speedup over existing approaches.
arXiv Detail & Related papers (2024-01-29T03:42:37Z) - Data Augmentations in Deep Weight Spaces [89.45272760013928]
We introduce a novel augmentation scheme based on the Mixup method.
We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate.
arXiv Detail & Related papers (2023-11-15T10:43:13Z) - Towards Robust Dataset Learning [90.2590325441068]
We propose a principled, tri-level optimization to formulate the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z) - Hybridization of Capsule and LSTM Networks for unsupervised anomaly
detection on multivariate data [0.0]
This paper introduces a novel NN architecture which hybridises the Long-Short-Term-Memory (LSTM) and Capsule Networks into a single network.
The proposed method uses an unsupervised learning technique to overcome the issues with finding large volumes of labelled training data.
arXiv Detail & Related papers (2022-02-11T10:33:53Z) - Unsupervised Learning of Full-Waveform Inversion: Connecting CNN and
Partial Differential Equation in a Loop [13.1144828613672]
Full-Waveform Inversion (FWI) has been widely used in geophysics to estimate subsurface velocity maps from seismic data.
We introduce a new large-scale dataset OpenFWI, to establish a more challenging benchmark for the community.
Experiment results show that our model (using seismic data alone) yields comparable accuracy to the supervised counterpart.
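The paper's loop couples a CNN with a wave-equation PDE solver. As a toy stand-in, the sketch below replaces both with a linear forward operator and plain gradient descent, keeping the key unsupervised property: the velocity profile is recovered from the data misfit alone, with no velocity labels. The operator, dimensions, and step size are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the CNN+PDE loop: a linear "forward modeling" operator A
# maps a velocity profile v to synthetic data d = A @ v. Inversion is driven
# purely by the data misfit ||A v - d_obs||^2 -- no velocity labels are used,
# mirroring the unsupervised setup. The real method replaces this linear
# operator with a wave-equation solver and the parametric model with a CNN.
rng = np.random.default_rng(42)
n = 16
A = rng.normal(size=(32, n))           # hypothetical forward operator
v_true = np.linspace(1.5, 4.5, n)      # "true" velocity profile (km/s)
d_obs = A @ v_true                     # observed seismic data

v = np.full(n, 3.0)                    # initial velocity guess
lr = 3e-3
for _ in range(5000):                  # gradient descent on the data misfit
    residual = A @ v - d_obs
    v -= lr * 2.0 * A.T @ residual     # gradient of ||A v - d_obs||^2

print(np.max(np.abs(v - v_true)) < 1e-3)
```

The supervised baselines in this area instead minimize a loss against known velocity maps; the point of the loop is that the physics supplies the training signal.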
arXiv Detail & Related papers (2021-10-14T17:47:22Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- An Experimental Review on Deep Learning Architectures for Time Series Forecasting [0.0]
We provide the most extensive deep learning study for time series forecasting.
Among all studied models, the results show that long short-term memory (LSTM) networks and convolutional neural networks (CNNs) are the best alternatives.
CNNs achieve comparable performance with less variability of results under different parameter configurations, while also being more efficient.
arXiv Detail & Related papers (2021-03-22T17:58:36Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially less trainable parameters when compared to comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Statistical model-based evaluation of neural networks [74.10854783437351]
We develop an experimental setup for the evaluation of neural networks (NNs).
The setup helps to benchmark a set of NNs vis-a-vis minimum-mean-square-error (MMSE) performance bounds.
This allows us to test the effects of training data size, data dimension, data geometry, noise, and mismatch between training and testing conditions.
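The general idea, benchmarking a learned estimator against a closed-form MMSE bound, can be illustrated with a linear Gaussian toy model. The dimensions, noise level, and the pseudo-inverse stand-in for a trained network below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy benchmark: estimate x ~ N(0, I) from y = H x + noise. The linear MMSE
# estimator is known in closed form here, so any learned estimator's
# empirical MSE can be compared against the MMSE bound.
rng = np.random.default_rng(1)
d, m, n_samples = 4, 6, 20000
H = rng.normal(size=(m, d))
sigma2 = 0.5

x = rng.normal(size=(n_samples, d))
y = x @ H.T + rng.normal(scale=np.sqrt(sigma2), size=(n_samples, m))

# Closed-form linear MMSE estimator: x_hat = H^T (H H^T + sigma2 I)^-1 y
W = H.T @ np.linalg.inv(H @ H.T + sigma2 * np.eye(m))
mse_mmse = np.mean(np.sum((y @ W.T - x) ** 2, axis=1))

# A cruder estimator (pseudo-inverse, ignoring the Gaussian prior) as a
# stand-in for a trained network being evaluated against the bound.
W_ls = np.linalg.pinv(H)
mse_ls = np.mean(np.sum((y @ W_ls.T - x) ** 2, axis=1))

print(mse_mmse <= mse_ls)  # the MMSE estimator lower-bounds the alternative
```

Sweeping the noise level, data dimension, or a train/test mismatch in such a setup is what lets the gap to the MMSE bound be attributed to specific conditions.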
arXiv Detail & Related papers (2020-11-18T00:33:24Z)
- Prediction of Object Geometry from Acoustic Scattering Using Convolutional Neural Networks [8.067201256886733]
The present work proposes to infer object geometry from scattering features by training convolutional neural networks.
The robustness of our approach in response to data degradation is evaluated by comparing the performance of networks trained using the datasets.
arXiv Detail & Related papers (2020-10-21T00:51:14Z)
- On the performance of deep learning models for time series classification in streaming [0.0]
This work assesses the performance of different types of deep architectures for data streaming classification.
We evaluate models such as multi-layer perceptrons, recurrent, convolutional and temporal convolutional neural networks over several time-series datasets.
arXiv Detail & Related papers (2020-03-05T11:41:29Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
- SeismiQB -- a novel framework for deep learning with seismic data [62.997667081978825]
We have developed an open-source Python framework with an emphasis on working with neural networks.
It provides convenient tools for fast loading seismic cubes in multiple data formats.
It also generates crops of the desired shape and augments them with various transformations.
arXiv Detail & Related papers (2020-01-10T10:45:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.