Domain adaption and physical constrains transfer learning for shale gas
production
- URL: http://arxiv.org/abs/2312.10920v1
- Date: Mon, 18 Dec 2023 04:13:27 GMT
- Title: Domain adaption and physical constrains transfer learning for shale gas
production
- Authors: Zhaozhong Yang, Liangjie Gou, Chao Min, Duo Yi, Xiaogang Li and
Guoquan Wen
- Abstract summary: We propose a novel transfer learning methodology that utilizes domain adaptation and physical constraints.
This methodology effectively employs historical data from the source domain to reduce negative transfer from the data distribution perspective.
By incorporating drilling, completion, and geological data as physical constraints, we develop a hybrid model.
- Score: 0.26440512250125126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective prediction of shale gas production is crucial for strategic
reservoir development. However, in new shale gas blocks, two main challenges
are encountered: (1) the occurrence of negative transfer due to insufficient
data, and (2) the limited interpretability of deep learning (DL) models. To
tackle these problems, we propose a novel transfer learning methodology that
utilizes domain adaptation and physical constraints. This methodology
effectively employs historical data from the source domain to reduce negative
transfer from the data distribution perspective, while also using physical
constraints to build a robust and reliable prediction model that integrates
various types of data. The methodology starts by dividing the production data
from the source domain into multiple subdomains, thereby enhancing data
diversity. It then uses Maximum Mean Discrepancy (MMD) and global average
distance measures to decide on the feasibility of transfer. Through domain
adaptation, we integrate all transferable knowledge, resulting in a more
comprehensive target model. Lastly, by incorporating drilling, completion, and
geological data as physical constraints, we develop a hybrid model. This model,
a combination of a multi-layer perceptron (MLP) and a Transformer
(Transformer-MLP), is designed to maximize interpretability. Experimental
validation in China's southwestern region confirms the method's effectiveness.
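The transfer-feasibility step is the most concrete part of the methodology, so a minimal sketch may help. The abstract does not give the paper's kernel choice or decision thresholds, so the following assumes an RBF-kernel MMD and interprets "global average distance" as a Euclidean centroid distance; `mmd_rbf`, `transfer_feasible`, and all threshold values are hypothetical, not the authors' code.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel.

    X: (n, d) samples from one domain, Y: (m, d) from the other.
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    """
    def k(A, B):
        # Pairwise squared Euclidean distances -> RBF kernel matrix.
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def global_average_distance(X, Y):
    """Euclidean distance between the two domain centroids."""
    return np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0))

def transfer_feasible(X_src, X_tgt, mmd_thresh=0.1, dist_thresh=1.0):
    """Flag a source subdomain as transferable when both measures are small.

    Both thresholds are placeholders; the paper does not report its values.
    """
    return (mmd_rbf(X_src, X_tgt) < mmd_thresh
            and global_average_distance(X_src, X_tgt) < dist_thresh)

# Screen each source subdomain against the (small) target dataset.
rng = np.random.default_rng(0)
subdomains = [rng.normal(loc=i * 0.2, size=(100, 8)) for i in range(4)]
target = rng.normal(loc=0.1, size=(20, 8))
transferable = [i for i, X in enumerate(subdomains)
                if transfer_feasible(X, target)]
print("transferable subdomains:", transferable)
```

Only the subdomains that pass this screen would contribute knowledge to the target model, which is how the method limits negative transfer.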
Related papers
- EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer [21.59850502993888]
Unsupervised domain adaptation (UDA) aims to mitigate the domain shift issue, where the distribution of training (source) data differs from that of testing (target) data.
Many models have been developed to tackle this problem, and recently vision transformers (ViTs) have shown promising results.
This paper introduces an efficient model that reduces trainable parameters and allows for adjustable complexity.
arXiv Detail & Related papers (2024-07-31T03:29:28Z)
- Simulation-Enhanced Data Augmentation for Machine Learning Pathloss Prediction [9.664420734674088]
This paper introduces a novel simulation-enhanced data augmentation method for machine learning pathloss prediction.
Our method integrates synthetic data generated from a cellular coverage simulator and independently collected real-world datasets.
The integration of synthetic data significantly improves the generalizability of the model in different environments.
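The augmentation scheme reduces to combining the two data sources before training. A minimal sketch of that pattern, assuming plain NumPy arrays of (feature, pathloss) pairs; the array shapes and the mixing ratio are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for real measurements and simulator-generated pairs.
X_real, y_real = rng.normal(size=(500, 6)), rng.normal(size=500)
X_synth, y_synth = rng.normal(size=(5000, 6)), rng.normal(size=5000)

def mixed_training_set(X_r, y_r, X_s, y_s, synth_ratio=2.0):
    """Combine all real samples with synth_ratio times as many synthetic ones.

    Subsampling the synthetic pool keeps it from drowning out the real data;
    the ratio is a tunable assumption, not a value from the paper.
    """
    n_s = min(len(X_s), int(synth_ratio * len(X_r)))
    idx = rng.choice(len(X_s), size=n_s, replace=False)
    X = np.vstack([X_r, X_s[idx]])
    y = np.concatenate([y_r, y_s[idx]])
    perm = rng.permutation(len(X))  # shuffle before training
    return X[perm], y[perm]

X_train, y_train = mixed_training_set(X_real, y_real, X_synth, y_synth)
```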
arXiv Detail & Related papers (2024-02-03T00:38:08Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
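FedCOG's exact distillation objective is not spelled out in this summary; as a point of reference, a generic Hinton-style knowledge-distillation loss of the kind such client-side training builds on might look as follows in PyTorch (temperature `T` and weight `alpha` are illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=2.0, alpha=0.5):
    """Generic KD objective: soft-target KL term plus hard-label CE.

    T softens both distributions; alpha trades off the two terms.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard T^2 rescaling keeps gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```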
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- One-Shot Domain Adaptive and Generalizable Semantic Segmentation with Class-Aware Cross-Domain Transformers [96.51828911883456]
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data.
Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation.
We explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization problem, where only one real-world data sample is available.
arXiv Detail & Related papers (2022-12-14T15:54:15Z)
- LAMA-Net: Unsupervised Domain Adaptation via Latent Alignment and Manifold Learning for RUL Prediction [0.0]
We propose LAMA-Net, an encoder-decoder (Transformer) model with an induced bottleneck, latent alignment using Maximum Mean Discrepancy (MMD), and manifold learning.
The proposed method offers a promising approach to perform domain adaptation in RUL prediction.
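The latent-alignment idea can be sketched compactly: penalize the discrepancy between source and target bottleneck embeddings alongside the supervised task loss. The encoder/head API, the linear-kernel MMD, and the weight `lam` below are assumptions for illustration, not the paper's code:

```python
import torch
import torch.nn.functional as F

def linear_mmd(z_src, z_tgt):
    """Linear-kernel MMD: squared distance between batch mean embeddings."""
    return ((z_src.mean(dim=0) - z_tgt.mean(dim=0)) ** 2).sum()

def lama_style_loss(model, x_src, y_src, x_tgt, lam=0.1):
    """Supervised loss on source data plus latent alignment to target data.

    `model.encode` / `model.head` and `lam` are illustrative names, not the
    paper's API.
    """
    z_src, z_tgt = model.encode(x_src), model.encode(x_tgt)
    task = F.mse_loss(model.head(z_src), y_src)  # RUL regression on source
    return task + lam * linear_mmd(z_src, z_tgt)
```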
arXiv Detail & Related papers (2022-08-17T16:28:20Z)
- Learning Neural Models for Natural Language Processing in the Face of Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z)
- Learning Invariant Representation with Consistency and Diversity for Semi-supervised Source Hypothesis Transfer [46.68586555288172]
We propose a novel task named Semi-supervised Source Hypothesis Transfer (SSHT), which performs domain adaptation based on a source-trained model, with the goal of generalizing well in the target domain given only a few labeled target samples.
We propose Consistency and Diversity Learning (CDL), a simple but effective framework for SSHT that facilitates prediction consistency between two randomly augmented views of unlabeled data.
Experimental results show that our method outperforms existing SSDA methods and unsupervised model adaptation methods on DomainNet, Office-Home and Office-31 datasets.
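The consistency component rewards agreement between predictions on two random augmentations of the same unlabeled batch. A minimal sketch, with `model` and `augment` as placeholders (CDL's diversity term is omitted):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment):
    """KL between predictions on two random augmentations of one batch."""
    p1 = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
    p2 = F.softmax(model(augment(x_unlabeled)), dim=-1)
    # Stop gradient through the second branch, a common stabilizing choice.
    return F.kl_div(p1, p2.detach(), reduction="batchmean")
```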
arXiv Detail & Related papers (2021-07-07T04:14:24Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer during fine-tuning of the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when adapting the model.
This work tackles a practical setting where only a trained source model is available, and asks how such a model can be effectively utilized, without the source data, to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
- A Simple Baseline to Semi-Supervised Domain Adaptation for Machine Translation [73.3550140511458]
State-of-the-art neural machine translation (NMT) systems are data-hungry and perform poorly on new domains with no supervised data.
We propose a simple but effective approach to the semi-supervised domain adaptation scenario of NMT.
This approach iteratively trains a Transformer-based NMT model via three training objectives: language modeling, back-translation, and supervised translation.
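A schematic of one iteration over the three objectives, assuming hypothetical `lm_loss`, `translation_loss`, and `generate` methods (the paper's actual schedule and loss weighting are not given in this summary):

```python
import torch

def training_step(model, opt, mono_tgt, parallel):
    """One iteration over three objectives: language modeling on in-domain
    target text, back-translation from synthetic pairs, and supervised
    translation. All model methods here are illustrative placeholders,
    not a real NMT library API.
    """
    opt.zero_grad()
    loss = model.lm_loss(mono_tgt)                # 1. language modeling
    with torch.no_grad():                         # 2. back-translation:
        synth_src = model.generate(mono_tgt, reverse=True)  # synthetic sources
    loss = loss + model.translation_loss(synth_src, mono_tgt)
    src, tgt = parallel                           # 3. supervised pairs
    loss = loss + model.translation_loss(src, tgt)
    loss.backward()
    opt.step()
    return loss.item()
```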
arXiv Detail & Related papers (2020-01-22T16:42:06Z)