Solution Path Algorithm for Twin Multi-class Support Vector Machine
- URL: http://arxiv.org/abs/2006.00276v1
- Date: Sat, 30 May 2020 14:05:46 GMT
- Title: Solution Path Algorithm for Twin Multi-class Support Vector Machine
- Authors: Liuyuan Chen, Kanglei Zhou, Junchang Jing, Haiju Fan, Juntao Li
- Abstract summary: The paper presents a fast regularization parameter tuning algorithm for the twin multi-class support vector machine.
A new sample dataset division method is adopted, and the Lagrangian multipliers are proved to be piecewise linear in the regularization parameters.
The proposed method achieves good classification performance while reducing the computational cost of the grid search method from an exponential level to a constant level.
- Score: 6.97711662470035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The twin support vector machine and its extensions have achieved
great success on binary classification problems; however, they still face
difficulties such as model selection and quickly solving multi-classification
problems. This paper is devoted to a fast regularization parameter tuning
algorithm for the twin multi-class support vector machine. A new sample
dataset division method is adopted, and the Lagrangian multipliers are proved
to be piecewise linear with respect to the regularization parameters by
combining linear equations with block matrix theory. Eight kinds of events
are defined to seek the starting event, and the solution path algorithm is
then designed, which greatly reduces the computational cost. In addition,
only a few points are needed to complete the initialization, and all
Lagrangian multipliers are proved to equal 1 as the regularization parameter
tends to infinity. Simulation results on UCI datasets show that the proposed
method achieves good classification performance while reducing the
computational cost of the grid search method from an exponential level to a
constant level.
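The key computational point is that piecewise-linear solutions let a path algorithm trace every breakpoint exactly instead of re-solving at each grid value. As a rough illustration of that general idea, the sketch below traces the lasso regularization path, a classical problem whose coefficients are likewise piecewise linear in the regularization parameter. It uses scikit-learn's `lasso_path` and synthetic data; it is an analogy for intuition, not the paper's TMSVM algorithm.

```python
# Illustration only: the lasso path is a classical analogue of the paper's idea.
# Coefficients are piecewise linear in the regularization parameter, so the
# entire path can be traced by locating breakpoints ("events") instead of
# refitting the model on a dense grid of parameter values.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground-truth weights
y = X @ w_true + 0.1 * rng.standard_normal(100)

# One call traces the whole path; a grid search would refit once per value.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
print(alphas.shape, coefs.shape)               # (50,), (10, 50)
```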
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the underlying covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
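For reference, below is a minimal Kelley-style cutting-plane loop for a convex objective over a box, the textbook baseline that the entry above proposes to accelerate with reinforcement learning. The quadratic objective and box bounds are made-up examples, and the master problem is solved with SciPy's `linprog`.

```python
# A minimal Kelley cutting-plane loop: minimize a convex f over a box by
# accumulating linear under-estimators (cuts) and re-solving a small LP.
import numpy as np
from scipy.optimize import linprog

def f_and_grad(x):
    # Example convex objective: f(x) = 0.5 * ||x - c||^2, gradient x - c.
    c = np.array([1.0, -0.5])
    return 0.5 * np.sum((x - c) ** 2), x - c

lo, hi, n = -2.0, 2.0, 2
x = np.zeros(n)
cuts_A, cuts_b = [], []
for it in range(20):
    fx, gx = f_and_grad(x)
    cuts_A.append(np.append(gx, -1.0))        # cut: g^T x - t <= g^T x_k - f_k
    cuts_b.append(gx @ x - fx)
    res = linprog(c=np.append(np.zeros(n), 1.0),   # minimize t over (x, t)
                  A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                  bounds=[(lo, hi)] * n + [(None, None)])
    x, lower = res.x[:n], res.x[n]
    if fx - lower < 1e-6:                     # upper/lower model gap is small
        break
print(x)  # approaches the box-constrained minimizer (1.0, -0.5)
```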
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
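The standard baseline for entropy-regularized optimal transport is Sinkhorn iteration, sketched below for context; the paper above instead develops an extragradient method, so this is not their algorithm. The cost matrix and marginals here are illustrative.

```python
# Entropy-regularized OT via plain Sinkhorn iterations: alternately rescale
# the Gibbs kernel so the transport plan matches both marginals.
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    """Approximate entropic OT plan between histograms mu and nu with cost C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # match column marginals
        u = mu / (K @ v)                 # match row marginals
    return u[:, None] * K * v[None, :]   # plan = diag(u) K diag(v)

n = 50
x = np.linspace(0, 1, n)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
mu = np.ones(n) / n                      # uniform source histogram
nu = np.exp(-(x - 0.7) ** 2 / 0.01)
nu /= nu.sum()                           # peaked target histogram
P = sinkhorn(mu, nu, C)
print(np.sum(P * C))                     # entropic OT cost
```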
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
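One plausible reading of the scheme in the entry above: estimate diag(A^{-1}) using only matrix-vector products, by pairing Rademacher probes (a Hutchinson-style diagonal estimator) with conjugate-gradient solves. The sketch below works under that assumption and is not the paper's exact estimator; the SPD matrix stands in for a posterior precision.

```python
# Estimate diag(A^{-1}) without forming A^{-1}: average z * (A^{-1} z) over
# random Rademacher probes z, solving each A x = z with conjugate gradients.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)               # SPD stand-in for a precision matrix

op = LinearOperator((n, n), matvec=lambda v: A @ v)  # only mat-vec access used

probes, acc = 64, np.zeros(n)
for _ in range(probes):
    z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
    x, _ = cg(op, z)                      # x = A^{-1} z via CG
    acc += z * x                          # E[z * (A^{-1} z)] = diag(A^{-1})
est = acc / probes                        # error decays like 1/sqrt(probes)

# Dense inverse computed here only to verify the matrix-free estimate.
print(np.max(np.abs(est - np.diag(np.linalg.inv(A)))))
```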
- Fast Projected Newton-like Method for Precision Matrix Estimation under Total Positivity [15.023842222803058]
Current algorithms are designed using the block coordinate descent method or the proximal point algorithm.
We propose a novel algorithm based on the two-metric projection method, incorporating a carefully designed search direction and variable partitioning scheme.
Experimental results on synthetic and real-world datasets demonstrate that our proposed algorithm provides a significant improvement in computational efficiency compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-03T14:39:10Z)
- Accelerated nonlinear primal-dual hybrid gradient algorithms with applications to machine learning [0.0]
The primal-dual hybrid gradient (PDHG) method is a first-order method that splits convex optimization problems with saddle-point structure into smaller subproblems.
PDHG requires stepsize parameters fine-tuned for the problem at hand.
We introduce accelerated nonlinear variants of the PDHG algorithm that achieve improved convergence rates for a broad class of optimization problems relevant to machine learning.
arXiv Detail & Related papers (2021-09-24T22:37:10Z)
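Below is a minimal, non-accelerated PDHG (Chambolle-Pock) loop for 1D total-variation denoising, showing the primal-dual splitting the entry above describes; the paper's accelerated nonlinear variants and stepsize strategies differ. The problem instance and parameters are made-up examples.

```python
# Plain PDHG / Chambolle-Pock for min_x 0.5*||x - b||^2 + lam*||Dx||_1,
# where D is the forward-difference operator (1D total-variation denoising).
import numpy as np

def pdhg_tv1d(b, lam=0.5, tau=0.25, sigma=0.25, iters=500):
    n = len(b)
    D = np.diff(np.eye(n), axis=0)        # difference operator, ||D||^2 <= 4
    x, x_bar, y = b.copy(), b.copy(), np.zeros(n - 1)
    for _ in range(iters):                # needs tau * sigma * ||D||^2 <= 1
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)      # dual prox: project
        x_new = (x - tau * (D.T @ y) + tau * b) / (1 + tau)  # primal prox
        x_bar = 2 * x_new - x             # extrapolation, theta = 1
        x = x_new
    return x

rng = np.random.default_rng(0)
signal = np.repeat([0.0, 1.0, -0.5], 50)  # piecewise-constant ground truth
b = signal + 0.1 * rng.standard_normal(signal.size)
print(np.round(pdhg_tv1d(b)[::50], 2))    # roughly recovers the three plateaus
```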
- Minimal Cycle Representatives in Persistent Homology using Linear Programming: an Empirical Study with User's Guide [4.46514714749204]
Cycle representatives of persistent homology classes can be used to provide descriptions of topological features in data.
One approach to solving this problem is to optimize the choice of representative against some measure that is meaningful in the context of the data.
arXiv Detail & Related papers (2021-05-14T18:38:48Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
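On chain-structured models, MAP inference is exact and cheap via max-product (Viterbi) message passing, sketched below for context. The entry above targets general discrete-valued random fields through accelerated LP-relaxation message passing, which this sketch does not implement; the toy potentials are made up.

```python
# Max-product (Viterbi) message passing: exact MAP labels for a chain MRF.
import numpy as np

def viterbi(unary, pairwise):
    """unary: (T, K) log-potentials; pairwise: (K, K) shared edge log-potentials."""
    T, K = unary.shape
    msg = np.zeros((T, K))                 # msg[t, k] = best score ending in label k
    back = np.zeros((T, K), dtype=int)
    msg[0] = unary[0]
    for t in range(1, T):
        scores = msg[t - 1][:, None] + pairwise   # (K, K): prev label x next label
        back[t] = np.argmax(scores, axis=0)
        msg[t] = unary[t] + np.max(scores, axis=0)
    labels = np.zeros(T, dtype=int)
    labels[-1] = np.argmax(msg[-1])
    for t in range(T - 2, -1, -1):         # backtrack the argmax pointers
        labels[t] = back[t + 1][labels[t + 1]]
    return labels

rng = np.random.default_rng(0)
unary = rng.standard_normal((10, 3))
pairwise = np.where(np.eye(3, dtype=bool), 0.5, -0.5)  # smoothness prior
print(viterbi(unary, pairwise))
```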
- SONIA: A Symmetric Blockwise Truncated Optimization Algorithm [2.9923891863939938]
This work presents a new algorithm for empirical risk minimization.
The algorithm bridges the gap between first- and second-order methods by computing a second-order search-type update in one subspace, coupled with a scaled steepest descent step in the orthogonal complement.
arXiv Detail & Related papers (2020-06-06T19:28:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.