Efficient selective attention LSTM for well log curve synthesis
- URL: http://arxiv.org/abs/2307.10253v3
- Date: Wed, 3 Jan 2024 04:51:27 GMT
- Title: Efficient selective attention LSTM for well log curve synthesis
- Authors: Yuankai Zhou, Huanyu Li
- Abstract summary: This paper proposes a machine learning method that utilizes existing data to predict missing data.
The proposed method builds on the traditional Long Short-Term Memory (LSTM) neural network by incorporating a self-attention mechanism.
Experimental results demonstrate that the proposed method achieves higher accuracy compared to traditional curve synthesis methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-core drilling has gradually become the primary exploration method in
geological exploration engineering, and well logging curves have increasingly
gained importance as the main carriers of geological information. However,
factors such as geological environment, logging equipment, borehole quality,
and unexpected events can all impact the quality of well logging curves.
Previous remedies, such as re-logging or manual correction, incur high costs and
low efficiency. This paper proposes a machine learning method
that utilizes existing data to predict missing data, and its effectiveness and
feasibility have been validated through field experiments. The proposed method
builds on the traditional Long Short-Term Memory (LSTM) neural network by
incorporating a self-attention mechanism to analyze the sequential dependencies
of the data. It selects the dominant computational results in the LSTM,
reducing the computational complexity from O(n^2) to O(n log n) and improving
model efficiency. Experimental results demonstrate that the proposed method
achieves higher accuracy compared to traditional curve synthesis methods based
on Fully Connected Neural Networks (FCNN) and vanilla LSTM. This accurate,
efficient, and cost-effective prediction method holds practical value in
engineering applications.
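A minimal sketch of the selective attention idea, assuming a PyTorch implementation (the class name, layer sizes, and the top-k budget are illustrative assumptions, not the authors' configuration): a vanilla LSTM produces one hidden state per depth sample, a self-attention layer scores all pairs of positions, and only the k largest scores per query are kept before the softmax. Note that this naive version still materialises the full score matrix, so it illustrates the selection step rather than the claimed O(n log n) cost.

```python
# Illustrative sketch: LSTM hidden states re-weighted by top-k ("selective") self-attention.
# Curve count, hidden size, and k are assumptions for demonstration only.
import torch
import torch.nn as nn


class SelectiveAttentionLSTM(nn.Module):
    def __init__(self, n_input_curves: int, hidden_size: int = 64, k: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(n_input_curves, hidden_size, batch_first=True)
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, 1)  # one synthesized curve value per depth sample
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, depth_steps, n_input_curves) -- measured log curves along depth
        h, _ = self.lstm(x)                                      # (B, T, H)
        q, kk, v = self.query(h), self.key(h), self.value(h)
        scores = q @ kk.transpose(1, 2) / h.size(-1) ** 0.5      # (B, T, T) attention scores

        # Selective step: keep only the k largest scores per query row, mask the rest.
        k = min(self.k, scores.size(-1))
        top_idx = scores.topk(k, dim=-1).indices
        mask = torch.full_like(scores, float("-inf"))
        mask.scatter_(-1, top_idx, 0.0)
        attn = torch.softmax(scores + mask, dim=-1)              # sparse row-wise attention

        context = attn @ v                                       # (B, T, H)
        return self.out(context).squeeze(-1)                     # (B, T) predicted curve


if __name__ == "__main__":
    # Toy usage: predict one missing curve from three measured curves over 512 depth samples.
    model = SelectiveAttentionLSTM(n_input_curves=3)
    logs = torch.randn(8, 512, 3)
    print(model(logs).shape)  # torch.Size([8, 512])
```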
Related papers
- Enhancing Cognitive Workload Classification Using Integrated LSTM Layers and CNNs for fNIRS Data Analysis [13.74551296919155]
This paper explores the impact of Long Short-Term Memory layers on the effectiveness of Convolutional Neural Networks (CNNs) within deep learning models.
By integrating LSTM layers, the model can capture temporal dependencies in the fNIRS data, allowing for a more comprehensive understanding of cognitive states.
arXiv Detail & Related papers (2024-07-22T11:28:34Z)
- High-Resolution Detection of Earth Structural Heterogeneities from Seismic Amplitudes using Convolutional Neural Networks with Attention layers [0.31457219084519]
We propose an efficient and cost-effective architecture for detecting seismic structural heterogeneities using Convolutional Neural Networks (CNNs) combined with Attention layers.
Our model has half the parameters compared to the state-of-the-art, and it outperforms previous methods in terms of Intersection over Union (IoU) by 0.6% and precision by 0.4%.
arXiv Detail & Related papers (2024-04-15T22:49:37Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field, source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- Robust Neural Pruning with Gradient Sampling Optimization for Residual Neural Networks [0.0]
This research embarks on pioneering the integration of gradient sampling optimization techniques, particularly StochGradAdam, into the pruning process of neural networks.
Our main objective is to address the significant challenge of maintaining accuracy in pruned neural models, critical in resource-constrained scenarios.
arXiv Detail & Related papers (2023-12-26T12:19:22Z)
- Low-Frequency Load Identification using CNN-BiLSTM Attention Mechanism [0.0]
Non-intrusive Load Monitoring (NILM) is an established technique for effective and cost-efficient electricity consumption management.
This paper presents a hybrid learning approach consisting of a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network.
The CNN-BiLSTM model is adept at extracting both temporal (time-related) and spatial (location-related) features, allowing it to precisely identify energy consumption patterns at the appliance level.
arXiv Detail & Related papers (2023-11-14T21:02:27Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks [89.28881869440433]
This paper provides the first theoretical characterization of joint edge-model sparse learning for graph neural networks (GNNs).
It proves analytically that both sampling important nodes and pruning the lowest-magnitude neurons can reduce the sample complexity and improve convergence without compromising the test accuracy.
arXiv Detail & Related papers (2023-02-06T16:54:20Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and could serve as a simple yet strong baseline in this underdeveloped area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense the dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Drill the Cork of Information Bottleneck by Inputting the Most Important Data [28.32769151293851]
How to train deep neural networks efficiently remains an open problem.
The information bottleneck (IB) theory claims that the optimization process consists of an initial fitting phase and the following compression phase.
We show that the fitting phase depicted in the IB theory is boosted with a high signal-to-noise ratio when typicality sampling is appropriately adopted.
arXiv Detail & Related papers (2021-05-15T09:20:36Z)