Equipment Failure Analysis for Oil and Gas Industry with an Ensemble
Predictive Model
- URL: http://arxiv.org/abs/2012.15030v1
- Date: Wed, 30 Dec 2020 04:14:15 GMT
- Title: Equipment Failure Analysis for Oil and Gas Industry with an Ensemble
Predictive Model
- Authors: Chen ZhiYuan, Olugbenro O. Selere and Nicholas Lu Chee Seng
- Abstract summary: We show that the proposed solution can perform much better when using the SMO training algorithm.
The classification performance of this predictive model is considerably better than the SVM with and without SMO training algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper aims to improve the classification accuracy of a Support
Vector Machine (SVM) classifier trained with the Sequential Minimal
Optimization (SMO) algorithm, in order to properly classify failure and normal
instances in oil and gas equipment data. Recent failure-analysis applications
have used the SVM technique without the SMO training algorithm, whereas our
study shows that the proposed solution performs much better when the SMO
training algorithm is used. Furthermore, we implement an ensemble approach, a
hybrid of rule-based and neural network classifiers, to improve the
performance of the SVM classifier (with SMO training). This optimization study
is motivated by the classifier's underperformance on imbalanced datasets. The
best-performing classifiers are combined with the SVM classifier (with SMO
training) using the stacking ensemble method, creating an efficient ensemble
predictive model that can handle the issue of imbalanced data. The
classification performance of this predictive model is considerably better
than that of the SVM with and without SMO training, and of many other
conventional classifiers.
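The stacking idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' exact pipeline: scikit-learn's `SVC` (whose libsvm backend uses an SMO-type solver) is stacked with a decision tree (standing in for the rule-based classifier) and a small neural network; the dataset, hyper-parameters, and meta-learner are all assumptions.

```python
# Hedged sketch of a stacking ensemble around an SMO-trained SVM.
# All hyper-parameters and the toy dataset are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Imbalanced toy data standing in for the oil-and-gas equipment logs.
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm_smo", SVC(kernel="rbf", probability=True)),  # SMO-type solver
        ("rules", DecisionTreeClassifier(max_depth=3)),    # rule-based stand-in
        ("nn", MLPClassifier(max_iter=500)),               # neural network
    ],
    final_estimator=LogisticRegression(),  # meta-learner over base predictions
)
stack.fit(X, y)
print(stack.score(X, y))
```

The meta-learner sees out-of-fold predictions from each base model, which is what lets the ensemble compensate for an individual classifier's weakness on the minority class.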
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z) - Methods for Class-Imbalanced Learning with Support Vector Machines: A Review and an Empirical Evaluation [22.12895887111828]
We introduce a hierarchical categorization of SVM-based models with respect to class-imbalanced learning.
We compare the performances of various representative SVM-based models in each category using benchmark imbalanced data sets.
Our findings reveal that while algorithmic methods are less time-consuming owing to no data pre-processing requirements, fusion methods, which combine both re-sampling and algorithmic approaches, generally perform the best.
arXiv Detail & Related papers (2024-06-05T15:55:08Z) - Smooth Ranking SVM via Cutting-Plane Method [6.946903076677842]
We develop a prototype learning approach that relies on the cutting-plane method, similar to Ranking SVM, to maximize AUC.
Our algorithm learns simpler models by iteratively introducing cutting planes, thus overfitting is prevented in an unconventional way.
Based on the experiments conducted on 73 binary classification datasets, our method yields the best test AUC in 25 datasets among its relevant competitors.
arXiv Detail & Related papers (2024-01-25T18:47:23Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC, and AML.
This paper proposes algorithms for federated conditional stochastic optimization in distributed settings.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Towards Automated Imbalanced Learning with Deep Hierarchical
Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique for tackling imbalanced learning by generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
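The interpolation step that AutoSMOTE automates is the classic SMOTE idea: draw a synthetic minority sample on the line segment between a minority point and one of its minority-class nearest neighbours. A hand-rolled sketch (the function name, `k`, and the toy data are assumptions, not part of AutoSMOTE):

```python
# Hand-rolled sketch of SMOTE-style interpolation (illustrative only).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic points among minority samples X_min."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)            # idx[:, 0] is the point itself
    base = rng.integers(0, len(X_min), n_new)        # pick a base point
    nbr = idx[base, rng.integers(1, k + 1, n_new)]   # pick one of its k NNs
    lam = rng.random((n_new, 1))                     # interpolation weight
    return X_min[base] + lam * (X_min[nbr] - X_min[base])

X_min = np.random.default_rng(0).normal(size=(20, 2))
X_syn = smote_like(X_min, n_new=30)
```

Each synthetic point is a convex combination of two real minority points, so the new samples stay inside the minority class's local geometry.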
arXiv Detail & Related papers (2022-08-26T04:28:01Z) - Handling Imbalanced Classification Problems With Support Vector Machines
via Evolutionary Bilevel Optimization [73.17488635491262]
Support vector machines (SVMs) are popular learning algorithms to deal with binary classification problems.
This article introduces EBCS-SVM: evolutionary bilevel cost-sensitive SVMs.
arXiv Detail & Related papers (2022-04-21T16:08:44Z) - Estimating Average Treatment Effects with Support Vector Machines [77.34726150561087]
Support vector machine (SVM) is one of the most popular classification algorithms in the machine learning literature.
We adapt SVM as a kernel-based weighting procedure that minimizes the maximum mean discrepancy between the treatment and control groups.
We characterize the bias of causal effect estimation arising from this trade-off, connecting the proposed SVM procedure to the existing kernel balancing methods.
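The maximum mean discrepancy the paper minimises between treatment and control groups can be estimated directly from kernel evaluations. A small sketch, where the RBF kernel, bandwidth, and toy data are assumptions rather than the paper's exact choices:

```python
# Biased squared-MMD estimate between two samples (illustrative assumptions).
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix between rows of a and rows of b."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def mmd2(x, y, gamma=1.0):
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return (rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean()
            - 2 * rbf(x, y, gamma).mean())

rng = np.random.default_rng(0)
treated = rng.normal(0.0, 1.0, size=(50, 2))
control = rng.normal(0.5, 1.0, size=(50, 2))
print(mmd2(treated, control))
```

A small MMD means the two groups look alike in the kernel's feature space, which is the sense in which the weighting procedure balances treatment and control.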
arXiv Detail & Related papers (2021-02-23T20:22:56Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Unsupervised Real Time Prediction of Faults Using the Support Vector
Machine [1.1852751647387592]
We show that the proposed solution can perform much better when using the SMO training algorithm.
The classification performance of this predictive model is considerably better than the SVM with and without SMO training algorithm.
arXiv Detail & Related papers (2020-12-30T04:27:10Z) - A fast learning algorithm for One-Class Slab Support Vector Machines [1.1613446814180841]
This paper proposes a fast training method for One-Class Slab SVMs using an updated Sequential Minimal Optimization (SMO) algorithm.
The results indicate that this training method scales better to large sets of training data than other Quadratic Programming (QP) solvers.
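For context only: scikit-learn exposes a standard one-class SVM (its libsvm backend uses an SMO-style solver), which is the plain variant rather than the paper's One-Class Slab model. A minimal usage sketch with assumed parameters:

```python
# Standard one-class SVM in scikit-learn (not the One-Class Slab variant).
import numpy as np
from sklearn.svm import OneClassSVM

X = np.random.default_rng(0).normal(size=(200, 3))
oc = OneClassSVM(nu=0.1, kernel="rbf").fit(X)
pred = oc.predict(X)  # +1 = inlier, -1 = outlier
```

The `nu` parameter roughly controls the fraction of training points treated as outliers.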
arXiv Detail & Related papers (2020-11-06T09:16:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information (including all listed content) and is not responsible for any consequences.