Passive learning to address nonstationarity in virtual flow metering
applications
- URL: http://arxiv.org/abs/2202.03236v1
- Date: Mon, 7 Feb 2022 14:42:00 GMT
- Title: Passive learning to address nonstationarity in virtual flow metering
applications
- Authors: Mathilde Hotvedt, Bjarne Grimstad, Lars Imsland
- Abstract summary: This paper explores how learning methods can be applied to sustain the prediction accuracy of steady-state virtual flow meters.
Two passive learning methods, periodic batch learning and online learning, are applied with varying calibration frequency to train virtual flow meters.
The results are two-fold: first, in the presence of frequently arriving measurements, frequent model updating sustains an excellent prediction performance over time; second, in the presence of intermittent and infrequently arriving measurements, frequent updating in addition to the utilization of expert knowledge is essential to improve prediction accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Steady-state process models are common in virtual flow meter applications due
to low computational complexity, and low model development and maintenance
cost. Nevertheless, the prediction performance of steady-state models typically
degrades with time due to the inherent nonstationarity of the underlying
process being modeled. Few studies have investigated how learning methods can
be applied to sustain the prediction accuracy of steady-state virtual flow
meters. This paper explores passive learning, where the model is frequently
calibrated to new data, as a way to address nonstationarity and improve
long-term performance. An advantage with passive learning is that it is
compatible with models used in the industry. Two passive learning methods,
periodic batch learning and online learning, are applied with varying
calibration frequency to train virtual flow meters. Six different model types,
ranging from data-driven to first-principles, are trained on historical
production data from 10 petroleum wells. The results are two-fold: first, in
the presence of frequently arriving measurements, frequent model updating
sustains an excellent prediction performance over time; second, in the presence
of intermittent and infrequently arriving measurements, frequent updating in
addition to the utilization of expert knowledge is essential to increase the
performance accuracy. The investigation may be of interest to experts
developing soft-sensors for nonstationary processes, such as virtual flow
meters.
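As a rough illustration of the two passive learning schemes described in the abstract, the toy sketch below calibrates a linear model to a slowly drifting process; online learning takes one update per arriving measurement, while periodic batch learning refits on an accumulated batch at a fixed frequency. This is a minimal sketch, not the paper's six model types or its well data; the drift rate, learning rate, and calibration period are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
CALIBRATION_PERIOD = 50                 # illustrative batch-recalibration frequency
LR = 0.05                               # illustrative online learning rate

w_online = np.zeros(3)                  # updated on every new measurement
w_batch = np.zeros(3)                   # recalibrated every CALIBRATION_PERIOD samples
batch = []

for t in range(1000):
    x = rng.normal(size=3)
    w_true = np.array([1.0 + 0.001 * t, -0.5, 0.2])   # slow nonstationary drift
    y = w_true @ x + 0.1 * rng.normal()               # noisy measurement

    # Online learning: one gradient step per arriving measurement.
    w_online -= LR * (w_online @ x - y) * x

    # Periodic batch learning: refit on the accumulated batch, then discard it.
    batch.append((x, y))
    if len(batch) == CALIBRATION_PERIOD:
        X = np.array([b[0] for b in batch])
        Y = np.array([b[1] for b in batch])
        w_batch, *_ = np.linalg.lstsq(X, Y, rcond=None)
        batch = []

print(w_online, w_batch)
```

Both schemes track the drifting first coefficient; the trade-off the paper varies is how often the batch refit happens relative to how fast the process drifts.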
Related papers
- A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855]
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z)
- Towards An Online Incremental Approach to Predict Students Performance [0.8287206589886879]
We propose a memory-based online incremental learning approach for updating an online classifier.
Our approach achieves a notable improvement in model accuracy, with an enhancement of nearly 10% compared to the current state-of-the-art.
arXiv Detail & Related papers (2024-05-03T17:13:26Z)
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- Improving Online Continual Learning Performance and Stability with Temporal Ensembles [30.869268130955145]
We study the effect of model ensembling as a way to improve performance and stability in online continual learning.
We use a lightweight temporal ensemble that computes the exponential moving average of the weights (EMA) at test time.
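A minimal sketch of the temporal-ensemble idea on a toy linear learner (not the paper's neural-network setup; the decay, learning rate, and regression task are assumptions): the EMA of the weights smooths out the noise in the online update trajectory, and prediction uses the averaged weights.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])          # hypothetical stationary target
w = np.zeros(2)                         # online learner's weights
w_ema = np.zeros(2)                     # temporal ensemble: EMA of the weights
DECAY, LR = 0.99, 0.05                  # illustrative hyperparameters

for _ in range(2000):                   # simulated data stream
    x = rng.normal(size=2)
    y = w_true @ x + 0.3 * rng.normal()
    w -= LR * (w @ x - y) * x           # noisy per-sample SGD update
    w_ema = DECAY * w_ema + (1 - DECAY) * w   # smooth the weight trajectory

# At test time, predict with w_ema rather than the latest noisy w.
print(w_ema)
```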
arXiv Detail & Related papers (2023-06-29T09:53:24Z)
- On the Costs and Benefits of Adopting Lifelong Learning for Software Analytics -- Empirical Study on Brown Build and Risk Prediction [17.502553991799832]
This paper evaluates the use of lifelong learning (LL) for industrial use cases at Ubisoft.
LL is used to continuously build and maintain ML-based software analytics tools using an incremental learner that progressively updates the old model using new data.
arXiv Detail & Related papers (2023-05-16T21:57:16Z)
- Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, this approach can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
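A hedged sketch of the underlying noise-as-regularization idea (not the paper's linearized LMNT derivation): average the training gradient over several small, independent noise realizations added to the input of a one-step forecaster. All names, the toy linear dynamics, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
NOISE_STD = 0.01                        # illustrative small input-noise scale

def train_step(w, x, y_next, lr=0.01, n_noise=8):
    """One SGD step averaged over several noisy realizations of the input."""
    grad = np.zeros_like(w)
    for _ in range(n_noise):
        x_noisy = x + NOISE_STD * rng.normal(size=x.shape)
        grad += (w @ x_noisy - y_next) * x_noisy   # squared-error gradient
    return w - lr * grad / n_noise

# Fit a linear one-step forecaster y_next ~ w @ x for toy dynamics.
a = np.array([0.9, 0.1])                # hypothetical true one-step map
w = np.zeros(2)
for _ in range(3000):
    x = rng.normal(size=2)
    w = train_step(w, x, a @ x)
print(w)
```

The input noise acts like a small ridge penalty on the learned map, which is the effect the paper approximates deterministically.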
arXiv Detail & Related papers (2022-11-09T23:40:52Z)
- Efficient Learning of Accurate Surrogates for Simulations of Complex Systems [0.0]
We introduce an online learning method empowered by adaptive sampling.
It ensures that all turning points on the model response surface are included in the training data.
We apply our method to simulations of nuclear matter to demonstrate that highly accurate surrogates can be reliably auto-generated.
arXiv Detail & Related papers (2022-07-11T20:51:11Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
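A toy sketch of the distillation idea (not the paper's curve-fitting algorithm): fit a teacher model's 1-D response with a few linear pieces between fixed breakpoints, then emit the fit as plain, human-readable Python source. The teacher function, knot placement, and function names are assumptions.

```python
import numpy as np

def teacher(x):                          # hypothetical black-box teacher model
    return np.tanh(x)

knots = np.linspace(-3, 3, 7)            # illustrative fixed breakpoints
values = teacher(knots)                  # sample the teacher at the knots

def emit_code(knots, values):
    """Render the piecewise-linear fit as readable Python source."""
    lines = ["def distilled(x):"]
    for k0, k1, v0, v1 in zip(knots[:-1], knots[1:], values[:-1], values[1:]):
        slope = (v1 - v0) / (k1 - k0)
        lines.append(f"    if x <= {k1:.2f}:")
        lines.append(f"        return {v0:.4f} + {slope:.4f} * (x - {k0:.2f})")
    lines.append(f"    return {values[-1]:.4f}")
    return "\n".join(lines)

src = emit_code(knots, values)
namespace = {}
exec(src, namespace)                     # materialize the generated function
print(abs(namespace["distilled"](0.5) - teacher(0.5)))
```

The emitted `src` string is itself the deliverable: a short, dependency-free function a human can read and audit.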
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Tracking Performance of Online Stochastic Learners [57.14673504239551]
Online algorithms are popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
When a constant step-size is used, these algorithms also have the ability to adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy.
We establish a link between steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random walk models.
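An illustrative sketch of that tracking setup (the drift rate, step size, and toy LMS learner are assumptions, not the paper's analysis): the optimum follows a random walk, and a constant-step-size online learner tracks it with a bounded steady-state error.

```python
import numpy as np

rng = np.random.default_rng(3)
w_star = np.zeros(2)                    # drifting optimum (random-walk model)
w = np.zeros(2)                         # online learner
LR = 0.1                                # constant step size enables tracking

errors = []
for t in range(5000):
    w_star = w_star + 0.01 * rng.normal(size=2)   # random-walk drift
    x = rng.normal(size=2)
    y = w_star @ x + 0.1 * rng.normal()           # noisy observation
    w -= LR * (w @ x - y) * x                     # constant-step LMS update
    errors.append(np.sum((w - w_star) ** 2))

print(np.mean(errors[1000:]))           # steady-state tracking error stays bounded
```

With a decaying step size the updates would eventually freeze and the error would grow with the drift; the constant step trades a small steady-state variance for the ability to keep following the optimum.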
arXiv Detail & Related papers (2020-04-04T14:16:27Z)
- New Perspectives on the Use of Online Learning for Congestion Level Prediction over Traffic Data [6.664111208927475]
This work focuses on classification over time series data.
When a time series is generated by non-stationary phenomena, the pattern relating the series with the class to be predicted may evolve over time.
Online learning methods incrementally learn from new data samples arriving over time, and accommodate eventual changes along the data stream.
arXiv Detail & Related papers (2020-03-27T09:44:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.