Semi-supervised Impedance Inversion by Bayesian Neural Network Based on
2-d CNN Pre-training
- URL: http://arxiv.org/abs/2111.10596v1
- Date: Sat, 20 Nov 2021 14:12:05 GMT
- Title: Semi-supervised Impedance Inversion by Bayesian Neural Network Based on
2-d CNN Pre-training
- Authors: Muyang Ge, Wenlong Wang and Wangxiangming Zheng
- Abstract summary: We improve the semi-supervised learning from two aspects.
First, by replacing 1-d convolutional neural network layers in the deep learning structure with 2-d CNN layers and 2-d maxpooling layers, the prediction accuracy is improved.
Second, prediction uncertainty can also be estimated by embedding the network into a Bayesian inference framework.
- Score: 0.966840768820136
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Seismic impedance inversion can be performed with a semi-supervised learning
algorithm, which only needs a few logs as labels and is less likely to get
overfitted. However, classical semi-supervised learning algorithms usually lead
to artifacts on the predicted impedance image. In this article, we improve the
semi-supervised learning from two aspects. First, by replacing 1-d
convolutional neural network (CNN) layers in the deep learning structure with 2-d
CNN layers and 2-d maxpooling layers, the prediction accuracy is improved.
Second, prediction uncertainty can also be estimated by embedding the network
into a Bayesian inference framework. The local reparameterization trick is used
during forward propagation of the network to reduce sampling cost. Tests with
the Marmousi2 and SEAM models validate the feasibility of the proposed
strategy.
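The second improvement above can be made concrete. The snippet below is a minimal numpy sketch of the local reparameterization trick for a mean-field Bayesian linear layer, not the authors' actual network: instead of sampling a weight matrix and multiplying, the pre-activations are drawn directly from their induced Gaussian N(x·μ_W, x²·σ²_W), so one noise draw per activation replaces one draw per weight, which is what reduces the sampling cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_linear_lrt(x, w_mu, w_logvar, rng):
    """Forward pass of a mean-field Bayesian linear layer using the
    local reparameterization trick: sample the pre-activations from
    N(x @ w_mu, x**2 @ exp(w_logvar)) rather than sampling the
    weights themselves."""
    act_mu = x @ w_mu                      # mean of the pre-activations
    act_var = (x ** 2) @ np.exp(w_logvar)  # variance of the pre-activations
    eps = rng.standard_normal(act_mu.shape)
    return act_mu + np.sqrt(act_var) * eps

# Toy usage: batch of 4 inputs, 8 features -> 3 outputs
x = rng.standard_normal((4, 8))
w_mu = rng.standard_normal((8, 3)) * 0.1
w_logvar = np.full((8, 3), -4.0)  # small posterior variance
out = bayesian_linear_lrt(x, w_mu, w_logvar, rng)
print(out.shape)  # (4, 3)
```

At prediction time, repeating this stochastic forward pass and looking at the spread of the outputs is one standard way to obtain the uncertainty estimates the abstract refers to.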
Related papers
- Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks [0.5827521884806072]
Large neural networks trained on large datasets have become the dominant paradigm in machine learning.
This thesis develops scalable methods to equip neural networks with model uncertainty.
arXiv Detail & Related papers (2024-04-29T23:38:58Z)
- Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for XOR Data [24.86314525762012]
We show that a ReLU CNN trained by gradient descent can achieve near Bayes-optimal accuracy.
Our result demonstrates that CNNs have a remarkable capacity to efficiently learn XOR problems, even in the presence of highly correlated features.
arXiv Detail & Related papers (2023-10-03T11:31:37Z)
- NODDLE: Node2vec based deep learning model for link prediction [0.0]
We propose NODDLE (integration of NOde2vec anD Deep Learning mEthod), a deep learning model which incorporates the features extracted by node2vec and feeds them into a hidden neural network.
Experimental results show that this method yields better results than the traditional methods on various social network datasets.
arXiv Detail & Related papers (2023-05-25T18:43:52Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness [33.09831377640498]
We study approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation.
We propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs.
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection.
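For context, the "spectral-normalized" part of SNGP bounds each layer's Lipschitz constant by dividing its weights by their largest singular value, which is typically estimated with power iteration. The sketch below is an illustrative numpy version under that assumption, not the paper's implementation:

```python
import numpy as np

def spectral_norm(w, n_iter=50):
    """Estimate the largest singular value of a weight matrix via
    power iteration, the quantity spectral normalization divides by
    to keep a layer's Lipschitz constant near 1 (one ingredient of
    distance-aware models such as SNGP)."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u
        v /= np.linalg.norm(v)
        u = w @ v
        u /= np.linalg.norm(u)
    return u @ w @ v  # Rayleigh-quotient estimate of the top singular value

w = np.diag([3.0, 1.0, 0.5])   # toy "weight matrix" with known spectrum
sigma = spectral_norm(w)       # converges to 3.0
w_sn = w / sigma               # normalized layer has spectral norm ~1
```

Keeping the spectral norm near 1 preserves distances from input to representation space, which is what lets a distance-aware output head flag out-of-domain inputs.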
arXiv Detail & Related papers (2022-05-01T05:46:13Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- EagerNet: Early Predictions of Neural Networks for Computationally Efficient Intrusion Detection [2.223733768286313]
We propose a new architecture to detect network attacks with minimal resources.
The architecture is able to deal with either binary or multiclass classification problems and trades prediction speed for the accuracy of the network.
arXiv Detail & Related papers (2020-07-27T11:31:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.