Multi-task learning for virtual flow metering
- URL: http://arxiv.org/abs/2103.08713v1
- Date: Mon, 15 Mar 2021 20:52:40 GMT
- Title: Multi-task learning for virtual flow metering
- Authors: Anders T. Sandnes (1 and 2), Bjarne Grimstad (1 and 3), Odd
Kolbjørnsen (2) ((1) Solution Seeker AS, (2) Department of Mathematics,
University of Oslo, (3) Department of Engineering Cybernetics, Norwegian
University of Science and Technology)
- Abstract summary: We propose a new multi-task learning architecture for data-driven VFM.
Our findings show that MTL improves robustness over single task methods, without sacrificing performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual flow metering (VFM) is a cost-effective and non-intrusive technology
for inferring multi-phase flow rates in petroleum assets. Inferences about flow
rates are fundamental to decision support systems which operators extensively
rely on. Data-driven VFM, where mechanistic models are replaced with machine
learning models, has recently gained attention due to its promise of lower
maintenance costs. While excellent performance in small-sample studies has
been reported in the literature, there is still considerable doubt about the
robustness of data-driven VFM. In this paper we propose a new multi-task
learning (MTL) architecture for data-driven VFM. Our method differs from
previous methods in that it enables learning across oil and gas wells. We study
the method by modeling 55 wells from four petroleum assets. Our findings show
that MTL improves robustness over single task methods, without sacrificing
performance. MTL yields a 25-50% error reduction on average for the assets
where single task architectures are struggling.
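To make the architecture concrete, below is a minimal sketch of hard parameter sharing across wells: one trunk shared by all tasks plus a small regression head per well. The layer sizes, input features, and class names are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class SharedTrunk(nn.Module):
    """Feature extractor shared by every well (task)."""
    def __init__(self, n_inputs: int, n_hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MultiWellVFM(nn.Module):
    """Hard parameter sharing: one shared trunk, one regression head per well."""
    def __init__(self, n_inputs: int, n_wells: int, n_hidden: int = 64):
        super().__init__()
        self.trunk = SharedTrunk(n_inputs, n_hidden)
        self.heads = nn.ModuleList(
            [nn.Linear(n_hidden, 1) for _ in range(n_wells)]
        )

    def forward(self, x, well_id: int):
        # Route the shared features through the head of the given well.
        return self.heads[well_id](self.trunk(x))

# Usage: predict a flow rate for well 3 from, e.g., pressure/temperature/choke inputs.
model = MultiWellVFM(n_inputs=5, n_wells=55)
x = torch.randn(8, 5)            # batch of 8 sensor snapshots (illustrative)
flow_rate = model(x, well_id=3)  # shape (8, 1)
```

The shared trunk is what lets the model learn across wells, while the per-well heads absorb well-specific behavior.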
Related papers
- Automatic Evaluation for Text-to-image Generation: Task-decomposed Framework, Distilled Training, and Meta-evaluation Benchmark [62.58869921806019]
We propose a task decomposition evaluation framework based on GPT-4o to automatically construct a new training dataset.
We design innovative training strategies to effectively distill GPT-4o's evaluation capabilities into a 7B open-source MLLM, MiniCPM-V-2.6.
Experimental results demonstrate that our distilled open-source MLLM significantly outperforms the current state-of-the-art GPT-4o-base baseline.
arXiv Detail & Related papers (2024-11-23T08:06:06Z)
- Mini-InternVL: A Flexible-Transfer Pocket Multimodal Model with 5% Parameters and 90% Performance [78.48606021719206]
Mini-InternVL is a series of MLLMs with parameters ranging from 1B to 4B, which achieves 90% of the performance with only 5% of the parameters.
We develop a unified adaptation framework for Mini-InternVL, which enables our models to transfer and outperform specialized models in downstream tasks.
arXiv Detail & Related papers (2024-10-21T17:58:20Z)
- STLLM-DF: A Spatial-Temporal Large Language Model with Diffusion for Enhanced Multi-Mode Traffic System Forecasting [32.943673568195315]
We propose the Spatial-Temporal Large Language Model (STLLM-DF) to improve multi-task transportation prediction.
The DDPM's robust denoising capabilities enable it to recover underlying data patterns from noisy inputs.
We show that STLLM-DF consistently outperforms existing models, achieving an average reduction of 2.40% in MAE, 4.50% in RMSE, and 1.51% in MAPE.
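As a rough illustration of the DDPM denoising mentioned above: the reverse process repeatedly estimates and removes a noise component. The variance schedule, the stand-in noise predictor, and all names below are schematic assumptions, not the STLLM-DF implementation.

```python
import numpy as np

# Illustrative linear variance schedule (not the paper's).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t):
    # Stand-in for the learned noise-prediction network eps_theta(x_t, t).
    return np.zeros_like(x_t)

def ddpm_reverse_step(x_t, t, rng):
    """One ancestral sampling step: estimate and remove noise from x_t."""
    eps = predict_noise(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    z = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * z  # sigma_t^2 = beta_t variant

# Denoise a toy signal starting from pure noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
for t in reversed(range(T)):
    x = ddpm_reverse_step(x, t, rng)
```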
arXiv Detail & Related papers (2024-09-08T15:29:27Z)
- Physics Informed Machine Learning (PIML) methods for estimating the remaining useful lifetime (RUL) of aircraft engines [0.0]
This paper uses the emerging field of physics-informed machine learning (PIML) to develop models for predicting the remaining useful lifetime (RUL) of aircraft engines.
We consider the well-known benchmark NASA Commercial Modular Aero-Propulsion System Simulation System (C-MAPSS) data as the main data for this paper.
C-MAPSS is a well-studied dataset with much existing work in the literature addressing RUL prediction with classical and deep learning methods.
arXiv Detail & Related papers (2024-06-21T19:55:34Z)
- Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning [12.00246872965739]
We propose a novel dynamic self-adaptive multiscale distillation from pre-trained multimodal large model.
Our strategy employs a multiscale perspective, enabling the extraction of structural knowledge from the pre-trained multimodal large model.
Our methodology streamlines pre-trained multimodal large models using only their output features and original image-level information.
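Since the summary says the method needs only the teacher's output features, its core objective resembles a generic feature-matching distillation loss. The sketch below shows that pattern with a projection to align dimensions; it is an assumption-laden illustration, not the paper's multiscale strategy.

```python
import torch
import torch.nn as nn

class FeatureDistillationLoss(nn.Module):
    """Match student features to frozen teacher output features (generic KD)."""
    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # Linear projection so student and teacher features are comparable.
        self.proj = nn.Linear(student_dim, teacher_dim)
        self.mse = nn.MSELoss()

    def forward(self, student_feats, teacher_feats):
        # Teacher features are detached: only the student side is trained.
        return self.mse(self.proj(student_feats), teacher_feats.detach())

# Usage with illustrative dimensions.
loss_fn = FeatureDistillationLoss(student_dim=256, teacher_dim=768)
s = torch.randn(4, 256)  # student output features
t = torch.randn(4, 768)  # teacher (large model) output features
loss = loss_fn(s, t)
loss.backward()
```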
arXiv Detail & Related papers (2024-04-16T18:22:49Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- On gray-box modeling for virtual flow metering [0.0]
A virtual flow meter (VFM) enables continuous prediction of flow rates in petroleum production systems.
Gray-box modeling is an approach that combines mechanistic and data-driven modeling.
This article investigates five different gray-box model types in an industrial case study on 10 petroleum wells.
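For context, a common gray-box pattern is a serial hybrid: a simple mechanistic flow equation whose residual is corrected by a data-driven term. The choke equation and regressor below are illustrative assumptions, not one of the five model types studied in the article.

```python
import numpy as np
from sklearn.linear_model import Ridge

def mechanistic_flow(choke_cv, dp, rho):
    """Simplified choke equation: Q = Cv * sqrt(dp / rho)."""
    return choke_cv * np.sqrt(dp / rho)

# Synthetic well data (illustrative): choke coefficient, pressure drop, density.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(0.5, 1.5, 200),      # choke Cv
    rng.uniform(10.0, 50.0, 200),    # dp [bar]
    rng.uniform(700.0, 900.0, 200),  # rho [kg/m3]
])
q_true = mechanistic_flow(X[:, 0], X[:, 1], X[:, 2]) * 1.1 + rng.normal(0, 0.01, 200)

# Gray-box: mechanistic prior + data-driven correction of its residual.
q_prior = mechanistic_flow(X[:, 0], X[:, 1], X[:, 2])
residual_model = Ridge().fit(X, q_true - q_prior)
q_pred = q_prior + residual_model.predict(X)
```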
arXiv Detail & Related papers (2021-03-23T13:17:38Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
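The zeroth-order optimization that BAR relies on can be illustrated with a two-point gradient estimator: the black-box model is only queried for loss values, never differentiated. Everything below (the toy loss, step sizes, sample counts) is a schematic assumption, not the BAR algorithm itself.

```python
import numpy as np

def black_box_loss(theta):
    # Stand-in for querying the black-box model's input-output response.
    return np.sum((theta - 1.0) ** 2)

def zeroth_order_grad(loss_fn, theta, mu=1e-3, n_samples=20, rng=None):
    """Two-point gradient estimate from loss queries only (no autodiff)."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)
        grad += (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu) * u
    return grad / n_samples

# Optimize the reprogramming parameters with estimated gradients.
theta = np.zeros(8)
for step in range(200):
    theta -= 0.05 * zeroth_order_grad(black_box_loss, theta)
```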
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.