A Case Study on the Classification of Lost Circulation Events During
Drilling using Machine Learning Techniques on an Imbalanced Large Dataset
- URL: http://arxiv.org/abs/2209.01607v2
- Date: Wed, 7 Sep 2022 16:40:11 GMT
- Title: A Case Study on the Classification of Lost Circulation Events During
Drilling using Machine Learning Techniques on an Imbalanced Large Dataset
- Authors: Toluwalase A. Olukoga, Yin Feng
- Abstract summary: We utilize a dataset of more than 65,000 records with a class imbalance problem from the Azadegan oilfield formations in Iran.
Eleven of the dataset's seventeen parameters are chosen to be used in the classification of five lost circulation events.
To generate classification models, we used six basic machine learning algorithms and four ensemble learning methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents machine learning models that forecast and categorize lost
circulation severity preemptively using a large, class-imbalanced drilling
dataset. We demonstrate reproducible core techniques involved in tackling a
large drilling engineering challenge utilizing easily interpretable machine
learning approaches.
We utilized a dataset of more than 65,000 records with a class imbalance
problem from the Azadegan oilfield formations in Iran. Eleven of the
dataset's seventeen parameters are
chosen to be used in the classification of five lost circulation events. To
generate classification models, we used six basic machine learning algorithms
and four ensemble learning methods. Linear Discriminant Analysis (LDA),
Logistic Regression (LR), Support Vector Machines (SVM), Classification and
Regression Trees (CART), k-Nearest Neighbors (KNN), and Gaussian Naive Bayes
(GNB) are the six fundamental techniques. We also used bagging and boosting
ensemble learning techniques in the investigation of solutions for improved
predicting performance. The performance of these algorithms is measured using
four metrics: accuracy, precision, recall, and F1-score. The F1-score weighted
to represent the data imbalance is chosen as the preferred evaluation
criterion.
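
As an illustration only (not the authors' published code), the sketch below compares the six base classifiers with stratified cross-validation scored by the weighted F1-score, the evaluation criterion described above. The file name, feature columns, target column, and cross-validation settings are hypothetical placeholders.

```python
# Illustrative sketch, not the authors' pipeline: compare the six base
# classifiers named in the abstract using stratified CV and weighted F1.
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Hypothetical dataset: 11 selected drilling parameters and a 5-class
# lost-circulation severity label (file and column names are placeholders).
df = pd.read_csv("azadegan_drilling.csv")
X = df.drop(columns=["loss_severity"])
y = df["loss_severity"]

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "CART": DecisionTreeClassifier(),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="f1_weighted")
    print(f"{name}: mean weighted F1 = {scores.mean():.4f} "
          f"(std {scores.std():.4f})")
```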
The CART model was found to be the best in class for identifying drilling
fluid circulation loss events with an average weighted F1-score of 0.9904 and
standard deviation of 0.0015. Upon application of ensemble learning techniques,
a Random Forest ensemble of decision trees showed the best predictive
performance. It identified and classified lost circulation events with a
perfect weighted F1-score of 1.0. Using Permutation Feature Importance (PFI),
the measured depth was found to be the most influential factor in accurately
recognizing lost circulation events while drilling.
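
A second hedged sketch shows how a Random Forest ensemble could be fitted and then inspected with Permutation Feature Importance, mirroring the workflow reported above. It reuses the hypothetical X and y from the previous sketch; the train/test split and hyperparameters are assumptions, not the authors' settings.

```python
# Illustrative sketch, not the authors' code: Random Forest plus
# Permutation Feature Importance (PFI) scored by weighted F1.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42
)

rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_train, y_train)
print("weighted F1:", f1_score(y_test, rf.predict(X_test), average="weighted"))

# PFI: shuffle each feature and measure the drop in weighted F1;
# larger drops indicate more influential features (e.g., measured depth).
pfi = permutation_importance(
    rf, X_test, y_test, scoring="f1_weighted", n_repeats=10, random_state=42
)
for idx in pfi.importances_mean.argsort()[::-1]:
    print(f"{X.columns[idx]}: {pfi.importances_mean[idx]:.4f}")
```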
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- A comparative study on machine learning approaches for rock mass classification using drilling data [0.3749861135832073]
Current rock engineering design in drill and blast tunnelling relies on engineers' observational assessments.
Measure While Drilling (MWD) data, a high-resolution sensor dataset collected during tunnel excavation, is underutilised.
This study aims to automate the translation of MWD data into actionable metrics for rock engineering.
arXiv Detail & Related papers (2024-03-15T15:37:19Z) - Efficient Grammatical Error Correction Via Multi-Task Training and
Optimized Training Schedule [55.08778142798106]
We propose auxiliary tasks that exploit the alignment between the original and corrected sentences.
We formulate each task as a sequence-to-sequence problem and perform multi-task training.
We find that the order of datasets used for training and even individual instances within a dataset may have important effects on the final performance.
arXiv Detail & Related papers (2023-11-20T14:50:12Z) - Blending gradient boosted trees and neural networks for point and
probabilistic forecasting of hierarchical time series [0.0]
We describe a blending methodology of machine learning models that belong to gradient boosted trees and neural networks families.
These principles were successfully applied in the recent M5 Competition on both Accuracy and Uncertainty tracks.
arXiv Detail & Related papers (2023-10-19T09:42:02Z) - Convolutional Neural Networks for the classification of glitches in
gravitational-wave data streams [52.77024349608834]
We classify transient noise signals (i.e., glitches) and gravitational waves in data from the Advanced LIGO detectors.
We use models with a supervised learning approach, both trained from scratch using the Gravity Spy dataset.
We also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels.
arXiv Detail & Related papers (2023-03-24T11:12:37Z) - Estimating oil recovery factor using machine learning: Applications of
XGBoost classification [0.0]
In petroleum engineering, it is essential to determine the ultimate recovery factor, RF, particularly before exploitation and exploration.
We, therefore, applied machine learning (ML), using readily available features, to estimate oil RF for ten classes defined in this study.
arXiv Detail & Related papers (2022-10-28T18:21:25Z) - Fraud Detection Using Optimized Machine Learning Tools Under Imbalance
Classes [0.304585143845864]
Fraud detection with smart versions of machine learning (ML) tools is essential to assure safety.
We investigate four state-of-the-art ML techniques, namely, logistic regression, decision trees, random forest, and extreme gradient boost.
For phishing website URLs and credit card fraud transaction datasets, the results indicate that extreme gradient boost trained on the original data shows trustworthy performance.
arXiv Detail & Related papers (2022-09-04T15:30:23Z) - Heterogeneous Ensemble Learning for Enhanced Crash Forecasts -- A
Frequentest and Machine Learning based Stacking Framework [0.803552105641624]
In this study, we apply one of the key HEM methods, Stacking, to model crash frequency on five-lane undivided segments (5T) of urban and suburban arterials.
The prediction performance of Stacking is compared with parametric statistical models (Poisson and negative binomial) and three state-of-the-art machine learning techniques (decision tree, random forest, and gradient boosting).
arXiv Detail & Related papers (2022-07-21T19:15:53Z) - Continual Learning For On-Device Environmental Sound Classification [63.81276321857279]
We propose a simple and efficient continual learning method for on-device environmental sound classification.
Our method selects the historical data for the training by measuring the per-sample classification uncertainty.
arXiv Detail & Related papers (2022-07-15T12:13:04Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)