Cost-sensitive probabilistic predictions for support vector machines
- URL: http://arxiv.org/abs/2310.05997v1
- Date: Mon, 9 Oct 2023 11:00:17 GMT
- Title: Cost-sensitive probabilistic predictions for support vector machines
- Authors: Sandra Benítez-Peña, Rafael Blanquero, Emilio Carrizosa, Pepa Ramírez-Cobo
- Abstract summary: Support vector machines (SVMs) are among the most thoroughly examined and widely used machine learning models.
We propose a novel approach to generate probabilistic outputs for the SVM.
- Score: 1.743685428161914
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Support vector machines (SVMs) are among the most thoroughly
examined and widely used machine learning models for two-class classification.
Classification in SVM is based on a score procedure that yields a
deterministic classification rule; this rule can be transformed into a
probabilistic one (as implemented in off-the-shelf SVM libraries), but it is
not probabilistic in nature. Moreover, tuning the regularization parameters of
the SVM is known to require a high computational effort, and the information
generated during tuning is not fully exploited, as it is not used to build a
probabilistic classification rule. In this paper we propose a novel approach
to generate probabilistic outputs for the SVM. The new method has three
properties. First, it is designed to be cost-sensitive, so the different
importance of sensitivity (true positive rate, TPR) and specificity (true
negative rate, TNR) is readily accommodated in the model. As a result, the
method can handle the imbalanced datasets that are common in operational
business problems such as churn prediction and credit scoring. Second, the SVM
is embedded in an ensemble method that improves its performance by reusing the
valuable information generated during parameter tuning. Finally, probabilities
are estimated via bootstrap, avoiding the parametric models used by competing
approaches. Numerical tests on a wide range of datasets show the advantages of
our approach over benchmark procedures.
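The full construction is given in the paper; as a loose illustration of the three ingredients described above (cost-sensitivity through class weights, an ensemble built from the models visited during regularization tuning, and bootstrap-based probability estimates), here is a minimal Python sketch. The weighting scheme, grid, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only, not the paper's implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

def cost_sensitive_svm_probabilities(X_train, y_train, X_test,
                                     C_grid=(0.1, 1.0, 10.0),  # regularization values visited during tuning
                                     fn_cost=5.0,              # assumed cost of a false negative (drives TPR up)
                                     n_bootstrap=30,
                                     random_state=0):
    """Estimate P(y=1 | x) with a bootstrap ensemble of class-weighted SVMs."""
    rng = np.random.RandomState(random_state)
    votes = np.zeros(len(X_test))
    n_models = 0
    for _ in range(n_bootstrap):
        # resample the training set with replacement (bootstrap)
        Xb, yb = resample(X_train, y_train, random_state=rng)
        for C in C_grid:  # reuse every model from the tuning grid instead of keeping only one
            clf = SVC(C=C, kernel="rbf", class_weight={0: 1.0, 1: fn_cost})
            clf.fit(Xb, yb)
            votes += (clf.predict(X_test) == 1)
            n_models += 1
    return votes / n_models  # fraction of ensemble votes = probability estimate
```

In this sketch a test point's probability estimate is simply the fraction of bootstrap-and-grid models that classify it as positive; increasing fn_cost pushes the classifiers toward higher sensitivity at the expense of specificity.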
Related papers
- Bayesian Estimation and Tuning-Free Rank Detection for Probability Mass Function Tensors [17.640500920466984]
This paper presents a novel framework for estimating the joint PMF and automatically inferring its rank from observed data.
We derive a deterministic solution based on variational inference (VI) to approximate the posterior distributions of various model parameters. Additionally, we develop a scalable version of the VI-based approach by leveraging stochastic variational inference (SVI).
Experiments involving both synthetic data and real movie recommendation data illustrate the advantages of our VI and SVI-based methods in terms of estimation accuracy, automatic rank detection, and computational efficiency.
arXiv Detail & Related papers (2024-10-08T20:07:49Z)
- Sparse Learning and Class Probability Estimation with Weighted Support Vector Machines [1.3597551064547502]
Weighted Support Vector Machines (wSVMs) have shown great value in robustly and accurately predicting class probabilities and classifications for a variety of problems.
We propose novel wSVMs frameworks that incorporate automatic variable selection with accurate probability estimation for sparse learning problems.
The proposed wSVMs-based sparse learning methods have wide applications and can be further extended to $K$-class problems through ensemble learning.
arXiv Detail & Related papers (2023-12-17T06:12:33Z)
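For orientation, a minimal sketch of the classical weighted-SVM route to class probabilities that this line of work builds on: train SVMs under a grid of class-weight settings and aggregate their decisions per point. This is a generic illustration under assumed 0/1 labels, not the sparse wSVM framework of the entry above.

```python
# Generic weighted-SVM probability sketch (assumed setup, not the entry's method).
import numpy as np
from sklearn.svm import SVC

def wsvm_class_probability(X_train, y_train, X_test, n_weights=19, C=1.0):
    """Approximate P(y=1 | x) from a sequence of class-weighted SVMs."""
    pis = np.linspace(0.05, 0.95, n_weights)
    positive_votes = np.zeros(len(X_test))
    for pi in pis:
        # with these weights, the Bayes-optimal rule predicts 1 when P(y=1|x) > pi
        clf = SVC(C=C, kernel="rbf", class_weight={1: 1.0 - pi, 0: pi})
        clf.fit(X_train, y_train)
        positive_votes += (clf.predict(X_test) == 1)
    # a point classified positive for all pi < p(x) collects about p(x) of the votes
    return positive_votes / n_weights
```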
- Anomaly Detection Under Uncertainty Using Distributionally Robust Optimization Approach [0.9217021281095907]
Anomaly detection is defined as the problem of finding data points that do not follow the patterns of the majority.
The one-class Support Vector Machine (SVM) method aims to find a decision boundary that distinguishes normal data points from anomalies.
A distributionally robust chance-constrained model is proposed in which the probability of misclassification is kept low.
arXiv Detail & Related papers (2023-12-03T06:13:22Z)
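As a baseline reference for the entry above, a minimal one-class SVM in scikit-learn; the distributionally robust chance-constrained variant itself is not reproduced here, and the data is synthetic.

```python
# Plain one-class SVM baseline; the robust chance-constrained model is not shown.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_normal = rng.normal(0, 1, size=(200, 2))            # training data: normal points only
X_test = np.vstack([rng.normal(0, 1, size=(10, 2)),   # normal test points
                    rng.normal(6, 1, size=(5, 2))])   # 5 obvious anomalies

clf = OneClassSVM(kernel="rbf", nu=0.05)  # nu upper-bounds the training outlier fraction
clf.fit(X_normal)
print(clf.predict(X_test))                # +1 = normal, -1 = anomaly
```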
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
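For context, a minimal classical RANSAC loop for 2D line fitting; the learned attention over batches of residuals described in the entry is a refinement of this basic sample-score-repeat scheme and is not reproduced here.

```python
# Classical RANSAC for line fitting (context only; not the attention-based method).
import numpy as np

def ransac_line(points, n_iters=500, inlier_tol=0.05, rng=np.random.default_rng(0)):
    """Fit y = a*x + b, returning the model with the largest consensus set."""
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)  # minimal sample
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue                                  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < inlier_tol).sum())  # size of the consensus set
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```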
- Robust Twin Parametric Margin Support Vector Machine for Multiclass Classification [0.0]
We present novel Twin Parametric Margin Support Vector Machine (TPMSVM) models to tackle the problem of multiclass classification.
We construct bounded-by-norm uncertainty sets around each sample and derive the robust counterpart of deterministic models.
We test the proposed TPMSVM methodology on real-world datasets, showing the good performance of the approach.
arXiv Detail & Related papers (2023-06-09T19:27:24Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
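For comparison, here is generic column-wise iterative imputation as available in scikit-learn; per the summary above, HyperImpute's contribution is the automatic per-column model selection layered on top of this kind of loop. A minimal sketch with toy data:

```python
# Generic iterative imputation baseline (not the HyperImpute library itself).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (required to unlock the API)
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [np.nan, 5.0, 9.0],
              [7.0, 8.0, 12.0]])

imputer = IterativeImputer(max_iter=10, random_state=0)
print(imputer.fit_transform(X))  # each column is modeled from the others, in rounds
```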
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
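A hedged sketch of one reading of the alternating-normalization idea from the entry above: rescale a batch of predicted distributions so that column masses match an assumed class prior, then renormalize the rows, and repeat (Sinkhorn-style). The prior and iteration count here are assumptions; this is not the authors' exact procedure.

```python
# Simplified alternating normalization (assumed reading, not the paper's exact algorithm).
import numpy as np

def alternating_normalization(P, class_prior, n_iters=10):
    """P: (n, k) predicted probabilities, rows summing to 1; class_prior sums to 1."""
    P = P.copy()
    col_target = class_prior * len(P)         # desired total column mass under the prior
    for _ in range(n_iters):
        P *= col_target / P.sum(axis=0)       # column step: match the class prior
        P /= P.sum(axis=1, keepdims=True)     # row step: keep rows valid distributions
    return P

P = np.array([[0.90, 0.10],
              [0.60, 0.40],
              [0.55, 0.45]])
print(alternating_normalization(P, class_prior=np.array([0.5, 0.5])))
```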
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Estimating Average Treatment Effects with Support Vector Machines [77.34726150561087]
Support vector machine (SVM) is one of the most popular classification algorithms in the machine learning literature.
We adapt SVM as a kernel-based weighting procedure that minimizes the maximum mean discrepancy between the treatment and control groups.
We characterize the bias of causal effect estimation arising from the trade-off between covariate balance and effective sample size, connecting the proposed SVM procedure to existing kernel balancing methods.
arXiv Detail & Related papers (2021-02-23T20:22:56Z)
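To make the discrepancy in the entry above concrete, here is a small sketch of the empirical (biased, V-statistic) MMD squared between treatment and control samples under an RBF kernel, which is the quantity the weighting procedure is described as minimizing. The kernel and bandwidth choice are illustrative, and the sample weights themselves are omitted.

```python
# Empirical MMD^2 between two groups (illustrative; not the paper's weighted estimator).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd2(X_treat, X_control, gamma=1.0):
    """Biased (V-statistic) estimate of MMD^2 under an RBF kernel."""
    k_tt = rbf_kernel(X_treat, X_treat, gamma=gamma).mean()
    k_cc = rbf_kernel(X_control, X_control, gamma=gamma).mean()
    k_tc = rbf_kernel(X_treat, X_control, gamma=gamma).mean()
    # small when the two groups look alike in the kernel feature space
    return k_tt + k_cc - 2.0 * k_tc
```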
- Probabilistic Classification Vector Machine for Multi-Class Classification [29.411892651468797]
The probabilistic classification vector machine (PCVM) synthesizes the advantages of both the support vector machine and the relevance vector machine.
We extend the PCVM to multi-class cases via voting strategies such as one-vs-rest or one-vs-one.
Two learning algorithms, a top-down one and a bottom-up one, are implemented in the mPCVM.
The superior performance of the mPCVMs is demonstrated through extensive evaluation on synthetic and benchmark data sets.
arXiv Detail & Related papers (2020-06-29T03:21:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.