Multi-task learning for virtual flow metering
- URL: http://arxiv.org/abs/2103.08713v1
- Date: Mon, 15 Mar 2021 20:52:40 GMT
- Title: Multi-task learning for virtual flow metering
- Authors: Anders T. Sandnes (1 and 2), Bjarne Grimstad (1 and 3), Odd
Kolbjørnsen (2) ((1) Solution Seeker AS, (2) Department of Mathematics,
University of Oslo, (3) Department of Engineering Cybernetics, Norwegian
University of Science and Technology)
- Abstract summary: We propose a new multi-task learning architecture for data-driven VFM.
Our findings show that MTL improves robustness over single-task methods without sacrificing performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual flow metering (VFM) is a cost-effective and non-intrusive technology
for inferring multi-phase flow rates in petroleum assets. Inferences about flow
rates are fundamental to decision support systems which operators extensively
rely on. Data-driven VFM, where mechanistic models are replaced with machine
learning models, has recently gained attention due to its promise of lower
maintenance costs. While excellent performance in small sample studies has
been reported in the literature, there is still considerable doubt about the
robustness of data-driven VFM. In this paper we propose a new multi-task
learning (MTL) architecture for data-driven VFM. Our method differs from
previous methods in that it enables learning across oil and gas wells. We study
the method by modeling 55 wells from four petroleum assets. Our findings show
that MTL improves robustness over single-task methods without sacrificing
performance. MTL yields a 25-50% error reduction on average for the assets
where single-task architectures struggle.
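The abstract does not spell out the architecture, but a common way to realize learning across wells is hard parameter sharing: a trunk shared by all wells plus a small head per well. The sketch below is a minimal, hypothetical illustration of that pattern (layer sizes, inputs, and the sharing scheme are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn

class MultiTaskVFM(nn.Module):
    """Hypothetical hard-parameter-sharing VFM: shared trunk + per-well heads."""

    def __init__(self, n_features: int, n_wells: int, hidden: int = 64):
        super().__init__()
        # Trunk shared across all wells (tasks).
        self.shared = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One small task-specific head per well, predicting a flow rate.
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_wells))

    def forward(self, x: torch.Tensor, well_id: int) -> torch.Tensor:
        return self.heads[well_id](self.shared(x))

model = MultiTaskVFM(n_features=5, n_wells=55)
x = torch.randn(8, 5)            # e.g. pressures, temperatures, choke opening
flow = model(x, well_id=3)       # flow-rate predictions for well 3
```

In this setup a well with few data points still benefits from the trunk being trained on every well's data, which is the kind of robustness gain the abstract reports.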
Related papers
- Physics Informed Machine Learning (PIML) methods for estimating the remaining useful lifetime (RUL) of aircraft engines [0.0]
This paper applies the emerging field of physics informed machine learning (PIML) to develop models for predicting the remaining useful lifetime (RUL) of aircraft engines.
We consider the well-known NASA Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) benchmark as the main dataset for this paper.
C-MAPSS is a well-studied dataset with much existing work in the literature that addresses RUL prediction with classical and deep learning methods.
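As a rough illustration of the PIML idea (the paper's specific physics constraints are not given in this summary), a physics-informed loss typically adds a penalty on the residual of an assumed degradation law to the usual data-fit term:

```python
import torch

def piml_loss(rul_pred, rul_true, dhealth_dt, load, k=0.1, lam=1.0):
    # Standard data-fit term.
    data_loss = torch.mean((rul_pred - rul_true) ** 2)
    # Residual of an assumed linear degradation ODE, dh/dt = -k * load;
    # the model is penalized when its predictions violate it.
    physics_residual = dhealth_dt + k * load
    return data_loss + lam * torch.mean(physics_residual ** 2)

# Dummy tensors just to show the call signature.
loss = piml_loss(torch.randn(8), torch.randn(8), torch.randn(8), torch.rand(8))
```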
arXiv Detail & Related papers (2024-06-21T19:55:34Z)
- Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning [12.00246872965739]
We propose a novel dynamic self-adaptive multiscale distillation from pre-trained multimodal large model.
Our strategy employs a multiscale perspective, enabling the extraction of structural knowledge from the pre-trained multimodal large model.
Our methodology streamlines pre-trained multimodal large models using only their output features and original image-level information.
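A minimal sketch of distilling from output features alone might look like the following; the multiscale matching here uses simple pooling and is only a stand-in for the paper's dynamic self-adaptive weighting:

```python
import torch
import torch.nn.functional as F

def multiscale_distill_loss(student_feat, teacher_feat, scales=(1, 2, 4)):
    # Match the frozen teacher's output features at several pooled
    # resolutions; feature dimensions are assumed equal for simplicity.
    loss = 0.0
    for s in scales:
        sf = F.avg_pool1d(student_feat.unsqueeze(1), s).squeeze(1)
        tf = F.avg_pool1d(teacher_feat.unsqueeze(1), s).squeeze(1)
        loss = loss + F.mse_loss(sf, tf.detach())
    return loss / len(scales)

loss = multiscale_distill_loss(torch.randn(4, 128), torch.randn(4, 128))
```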
arXiv Detail & Related papers (2024-04-16T18:22:49Z)
- When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by requiring only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
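PETAL's mode approximation is not detailed in this summary; as a generic stand-in for how tuning only a sliver of parameters can work, here is a standard low-rank (LoRA-style) adapter, which is a different but related technique:

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Freeze the backbone layer; train only a small low-rank update."""

    def __init__(self, frozen_layer: nn.Linear, rank: int = 4):
        super().__init__()
        self.frozen = frozen_layer
        for p in self.frozen.parameters():
            p.requires_grad = False          # backbone stays fixed
        d_in, d_out = frozen_layer.in_features, frozen_layer.out_features
        self.down = nn.Linear(d_in, rank, bias=False)   # trainable
        self.up = nn.Linear(rank, d_out, bias=False)    # trainable
        nn.init.zeros_(self.up.weight)       # start as a zero update

    def forward(self, x):
        return self.frozen(x) + self.up(self.down(x))

layer = LowRankAdapter(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.1%}")  # ~1% in this toy case
```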
arXiv Detail & Related papers (2023-12-16T17:13:08Z)
- RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training [105.02614392553198]
We propose Robustifying LMs via Adversarial perturbation with Selective Training (RoAST).
RoAST incorporates two important sources of model robustness: robustness to perturbed inputs and generalizable knowledge in pre-trained LMs.
We demonstrate the effectiveness of RoAST compared to state-of-the-art fine-tuning methods on six different types of LMs.
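An illustrative (and heavily simplified) training step in this spirit: perturb the input embeddings adversarially, then update only a subset of gradients. RoAST's actual perturbation scheme and parameter-selection criterion are more sophisticated than this sketch:

```python
import torch

def roast_style_step(model, embeds, labels, loss_fn, optimizer,
                     eps=0.01, keep_ratio=0.5):
    # 1) Adversarial perturbation of the input embeddings (FGSM-style).
    embeds = embeds.detach().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(model(embeds), labels), embeds)
    adv_embeds = (embeds + eps * grad.sign()).detach()

    # 2) Train on the perturbed inputs.
    optimizer.zero_grad()
    loss_fn(model(adv_embeds), labels).backward()

    # 3) Selective training: keep only the largest-magnitude gradients
    #    (a crude stand-in for RoAST's parameter-importance criterion).
    for p in model.parameters():
        if p.grad is not None:
            k = max(1, int(keep_ratio * p.grad.numel()))
            thresh = p.grad.abs().flatten().kthvalue(p.grad.numel() - k + 1).values
            p.grad[p.grad.abs() < thresh] = 0.0
    optimizer.step()

model = torch.nn.Linear(16, 2)   # stand-in for an LM head over embeddings
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
roast_style_step(model, torch.randn(4, 16), torch.tensor([0, 1, 0, 1]),
                 torch.nn.functional.cross_entropy, opt)
```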
arXiv Detail & Related papers (2023-12-07T04:23:36Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- On gray-box modeling for virtual flow metering [0.0]
A virtual flow meter (VFM) enables continuous prediction of flow rates in petroleum production systems.
Gray-box modeling is an approach that combines mechanistic and data-driven modeling.
This article investigates five different gray-box model types in an industrial case study on 10 petroleum wells.
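As one hypothetical example of the gray-box pattern (a serial combination): a simplified mechanistic choke equation provides a physical baseline, and a data-driven model learns to correct its residual. The equation and data below are made up for illustration; the paper's five model types vary how the two parts combine:

```python
import numpy as np
from sklearn.linear_model import Ridge

def mechanistic_flow(dp, choke, c=0.8):
    # Made-up simplified valve equation: flow ~ opening * sqrt(pressure drop).
    return c * choke * np.sqrt(np.maximum(dp, 0.0))

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(200, 2))                     # columns: dp, choke
y = mechanistic_flow(X[:, 0], X[:, 1]) + 0.1 * X[:, 0] ** 2  # synthetic "well"

# The data-driven part learns whatever the mechanistic core misses.
residual_model = Ridge().fit(X, y - mechanistic_flow(X[:, 0], X[:, 1]))
y_graybox = mechanistic_flow(X[:, 0], X[:, 1]) + residual_model.predict(X)
```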
arXiv Detail & Related papers (2021-03-23T13:17:38Z)
- A Comprehensive Evaluation of Multi-task Learning and Multi-task Pre-training on EHR Time-series Data [0.0]
Multi-task learning (MTL) is a machine learning technique aiming to improve model performance by leveraging information across many tasks.
In this work, we examine MTL across a battery of tasks on EHR time-series data.
We find that while MTL often suffers from the common problem of negative transfer, significant gains can be realized via MTL pre-training combined with single-task fine-tuning.
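A minimal sketch of that two-stage recipe, with placeholder model sizes and synthetic data standing in for the EHR tasks:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())            # shared
task_heads = nn.ModuleList(nn.Linear(64, 1) for _ in range(5))   # one per task

# Stage 1: multi-task pre-training -- sum the losses over all tasks.
opt = torch.optim.Adam([*encoder.parameters(), *task_heads.parameters()])
x, ys = torch.randn(16, 32), torch.randn(5, 16, 1)               # toy data
loss = sum(F.mse_loss(head(encoder(x)), y) for head, y in zip(task_heads, ys))
opt.zero_grad()
loss.backward()
opt.step()

# Stage 2: single-task fine-tuning with a fresh head for the target task.
target_head = nn.Linear(64, 1)
ft_opt = torch.optim.Adam([*encoder.parameters(), *target_head.parameters()])
ft_loss = F.mse_loss(target_head(encoder(x)), ys[0])
ft_opt.zero_grad()
ft_loss.backward()
ft_opt.step()
```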
arXiv Detail & Related papers (2020-07-20T15:19:28Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
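A toy version of the zeroth-order estimator that makes this possible: gradients of the reprogramming parameters are approximated from loss queries alone, so no access to the black box's internals is needed. The loss function and step sizes below are placeholders:

```python
import numpy as np

def zeroth_order_grad(loss_fn, theta, mu=0.01, n_samples=20):
    # Two-sided finite differences along random directions: only loss
    # *queries* are needed, never the model's internal gradients.
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = np.random.randn(*theta.shape)
        delta = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
        grad += (delta / (2 * mu)) * u
    return grad / n_samples

def black_box_loss(theta):
    return np.sum((theta - 1.0) ** 2)       # placeholder for the black box

theta = np.zeros(10)                        # reprogramming parameters
for _ in range(100):
    theta -= 0.1 * zeroth_order_grad(black_box_loss, theta)
```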
arXiv Detail & Related papers (2020-07-17T01:52:34Z)