Unified SVM Algorithm Based on LS-DC Loss
- URL: http://arxiv.org/abs/2006.09111v4
- Date: Tue, 11 May 2021 03:25:01 GMT
- Title: Unified SVM Algorithm Based on LS-DC Loss
- Authors: Zhou Shuisheng and Zhou Wendi
- Abstract summary: We propose an algorithm, UniSVM, that can train different SVM models in a unified scheme.
For robust SVM models with nonconvex losses, UniSVM has a dominant advantage over existing algorithms because it admits a closed-form solution per iteration.
Experiments show that UniSVM achieves comparable performance in less training time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past two decades, the support vector machine (SVM) has become a
popular supervised machine learning model, and many distinct algorithms have been
designed separately, based on the different KKT conditions of the SVM model, for
classification/regression with various convex or nonconvex losses. In this paper,
we propose an algorithm that can train different
SVM models in a \emph{unified} scheme. First, we introduce a definition of the
\emph{LS-DC} (\textbf{l}east \textbf{s}quares type of \textbf{d}ifference of
\textbf{c}onvex) loss and show that the most commonly used losses in the SVM
community are LS-DC losses or can be approximated by LS-DC losses. Based on DCA
(difference of convex algorithm), we then propose a unified algorithm, called
\emph{UniSVM}, which can solve the SVM model with any convex or nonconvex LS-DC
loss; the only loss-specific computation is a single vector determined by the
chosen loss. In particular, for training robust SVM models with nonconvex
losses, UniSVM has a dominant advantage over existing algorithms because it
admits a closed-form solution per iteration, whereas existing algorithms must
solve an L1SVM/L2SVM subproblem per iteration. Furthermore, via a low-rank
approximation of the kernel matrix, UniSVM can solve large-scale nonlinear
problems efficiently. To verify the efficacy and feasibility of the proposed
algorithm, we perform experiments on small artificial problems and large
benchmark tasks, with and without outliers, for both classification and
regression, comparing against state-of-the-art algorithms.
The experimental results demonstrate that UniSVM can achieve comparable
performance in less training time. The foremost advantage of UniSVM is that its
core code in Matlab is less than 10 lines; hence, it can be easily grasped by
users or researchers.
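As a concrete reading of the LS-DC idea (our paraphrase from the abstract; the paper's formal definition may differ in details), a loss $\ell(u)$ is of least-squares-type DC form when, for some $A > 0$,
$$\ell(u) = A u^2 - \big(A u^2 - \ell(u)\big), \qquad \text{with } A u^2 - \ell(u) \text{ convex},$$
so that DCA can keep the convex least-squares part and linearize the rest. For instance, the truncated squared loss $\ell(u) = \min(u^2, a)$ fits this form with $A = 1$, since $u^2 - \min(u^2, a) = \max(u^2 - a, 0)$ is convex.
The sketch below illustrates how such a decomposition yields a closed-form solve per DCA iteration. It is a minimal illustration for kernel regression with the truncated squared loss, not the authors' exact UniSVM update; all function and variable names (rbf_kernel, dca_truncated_ls, lam, a) are ours.
```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and Z.
    sq = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * sq)

def dca_truncated_ls(K, y, lam=1.0, a=1.0, n_iter=20):
    """DCA for kernel regression with the nonconvex truncated squared loss
    l(u) = min(u^2, a), via its LS-DC split l(u) = u^2 - g(u), where
    g(u) = max(u^2 - a, 0) is convex. Linearizing g at the current
    residuals leaves a least-squares subproblem with a closed-form solve."""
    n = len(y)
    beta = np.zeros(n)
    for _ in range(n_iter):
        r = y - K @ beta                      # residuals at current iterate
        s = np.where(r**2 > a, 2.0 * r, 0.0)  # (sub)gradient of g at r
        # A minimizer of the convex majorizer
        #   lam/2 * b'Kb + ||y - Kb||^2 + (Ks)'b
        # satisfies K(lam*b + 2Kb - 2y + s) = 0, solved by:
        beta = np.linalg.solve(2.0 * K + lam * np.eye(n), 2.0 * y - s)
    return beta

# Toy usage: the truncated loss caps the influence of injected outliers.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
y[:5] += 5.0                                  # outliers
K = rbf_kernel(X, X)
beta = dca_truncated_ls(K, y, lam=0.5, a=1.0)
pred = K @ beta
```
For the large-scale setting mentioned in the abstract, one would replace $K$ by a low-rank approximation $K \approx PP^{\top}$ (e.g., Nyström), so the per-iteration linear solve drops from $O(n^3)$ to $O(nr^2)$ via the Woodbury identity, with $r$ the approximation rank.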
Related papers
- Kernel Support Vector Machine Classifiers with the $\ell_0$-Norm Hinge
Loss [3.007949058551534]
Support Vector Machine (SVM) has been one of the most successful machine learning techniques for binary classification problems.
This paper concentrates on the kernel SVM with the $\ell_0$-norm hinge loss (referred to as $\ell_0$-KSVM), which is a composite function of the hinge loss and the $\ell_0$-norm.
Experiments on synthetic and real datasets show that $\ell_0$-KSVM can achieve accuracy comparable to the standard KSVM.
arXiv Detail & Related papers (2023-06-24T14:52:44Z) - Handling Imbalanced Classification Problems With Support Vector Machines
via Evolutionary Bilevel Optimization [73.17488635491262]
Support vector machines (SVMs) are popular learning algorithms for binary classification problems.
This article introduces EBCS-SVM: evolutionary bilevel cost-sensitive SVMs.
arXiv Detail & Related papers (2022-04-21T16:08:44Z) - Nonlinear Kernel Support Vector Machine with 0-1 Soft Margin Loss [13.803988813225025]
We propose a nonlinear model for the support vector machine with the 0-1 soft margin loss, called $L_{0/1}$-KSVM, which incorporates the kernel technique.
$L_{0/1}$-KSVM has far fewer support vectors (SVs) while maintaining decent prediction accuracy, compared with its linear counterpart.
arXiv Detail & Related papers (2022-03-01T12:53:52Z) - High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z) - Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at the problem of learning multiclass scoring functions by optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z) - Covariance-Free Sparse Bayesian Learning [62.24008859844098]
We introduce a new SBL inference algorithm that avoids explicit inversions of the covariance matrix.
Our method can be up to thousands of times faster than existing baselines.
We showcase how our new algorithm enables SBL to tractably tackle high-dimensional signal recovery problems.
arXiv Detail & Related papers (2021-05-21T16:20:07Z) - Estimating Average Treatment Effects with Support Vector Machines [77.34726150561087]
Support vector machine (SVM) is one of the most popular classification algorithms in the machine learning literature.
We adapt SVM as a kernel-based weighting procedure that minimizes the maximum mean discrepancy between the treatment and control groups.
We characterize the bias of causal effect estimation arising from this trade-off, connecting the proposed SVM procedure to the existing kernel balancing methods.
arXiv Detail & Related papers (2021-02-23T20:22:56Z) - A fast learning algorithm for One-Class Slab Support Vector Machines [1.1613446814180841]
This paper proposes a fast training method for One-Class Slab SVMs using an updated Sequential Minimal Optimization (SMO) algorithm.
The results indicate that this training method scales better to large sets of training data than other Quadratic Programming (QP) solvers.
arXiv Detail & Related papers (2020-11-06T09:16:39Z) - Probabilistic Classification Vector Machine for Multi-Class
Classification [29.411892651468797]
The probabilistic classification vector machine (PCVM) synthesizes the advantages of both the support vector machine and the relevant vector machine.
We extend the PCVM to multi-class cases via voting strategies such as one-vs-rest or one-vs-one.
Two learning algorithms, i.e., one top-down algorithm and one bottom-up algorithm, have been implemented in the mPCVM.
The superior performance of the mPCVMs is extensively evaluated on synthetic and benchmark data sets.
arXiv Detail & Related papers (2020-06-29T03:21:38Z) - A quantum extension of SVM-perf for training nonlinear SVMs in almost
linear time [0.2855485723554975]
We propose a quantum algorithm for training nonlinear support vector machines (SVM) for feature space learning.
Based on the classical SVM-perf algorithm of Joachims, our algorithm has a running time which scales linearly in the number of training examples.
arXiv Detail & Related papers (2020-06-18T06:25:45Z) - On Coresets for Support Vector Machines [61.928187390362176]
A coreset is a small, representative subset of the original data points.
We show that our algorithm can be used to extend the applicability of any off-the-shelf SVM solver to streaming, distributed, and dynamic data settings.
arXiv Detail & Related papers (2020-02-15T23:25:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.