Enhancing Pattern Classification in Support Vector Machines through
Matrix Formulation
- URL: http://arxiv.org/abs/2307.09372v1
- Date: Tue, 18 Jul 2023 15:56:39 GMT
- Title: Enhancing Pattern Classification in Support Vector Machines through
Matrix Formulation
- Authors: Sambhav Jain, Reshma Rastogi
- Abstract summary: The reliance on vector-based formulations in existing SVM-based models poses limitations regarding flexibility and ease of incorporating additional terms to handle specific challenges.
We introduce a matrix formulation for SVM that effectively addresses these constraints.
Experimental evaluations on multilabel and multiclass datasets demonstrate that Matrix SVM achieves superior time efficiency.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Support Vector Machines (SVM) have garnered significant acclaim as
classifiers due to their successful implementation of Statistical Learning
Theory. However, in the context of multiclass and multilabel settings, the
reliance on vector-based formulations in existing SVM-based models poses
limitations regarding flexibility and ease of incorporating additional terms to
handle specific challenges. To overcome these limitations, our research paper
focuses on introducing a matrix formulation for SVM that effectively addresses
these constraints. By employing the Accelerated Gradient Descent method in the
dual, we notably enhance the efficiency of solving the Matrix-SVM problem.
Experimental evaluations on multilabel and multiclass datasets demonstrate that
Matrix SVM achieves superior time efficiency while delivering similar results to
Binary Relevance SVM.
Moreover, our matrix formulation unveils crucial insights and advantages that
may not be readily apparent in traditional vector-based notations. We emphasize
that numerous multilabel models can be viewed as extensions of SVM, with
customised modifications to meet specific requirements. The matrix formulation
presented in this paper establishes a solid foundation for developing more
sophisticated models capable of effectively addressing the distinctive
challenges encountered in multilabel learning.
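The abstract's key computational ingredient is solving the SVM dual with an accelerated (Nesterov-style) gradient method. The paper's actual Matrix-SVM dual is not reproduced here, so the following is a minimal sketch of the general technique on a standard binary SVM dual with a box constraint (the bias/equality constraint is dropped for simplicity); all function names and the toy data are illustrative, not from the paper.

```python
import numpy as np

def svm_dual_accelerated(K, y, C=1.0, n_iter=500):
    """Maximize the (unbiased) SVM dual  sum(a) - 0.5 * a^T Q a,  Q = (y y^T) * K,
    subject to 0 <= a_i <= C, via Nesterov-accelerated projected gradient ascent."""
    n = len(y)
    Q = (y[:, None] * y[None, :]) * K
    L = np.linalg.norm(Q, 2)        # spectral norm = Lipschitz constant of the gradient
    step = 1.0 / L
    a = np.zeros(n)
    a_prev = np.zeros(n)
    for t in range(1, n_iter + 1):
        # Nesterov momentum lookahead point
        z = a + (t - 1) / (t + 2) * (a - a_prev)
        grad = 1.0 - Q @ z          # gradient of the dual objective at z
        a_prev = a
        a = np.clip(z + step * grad, 0.0, C)  # ascent step, then box projection
    return a

# Toy linearly separable data: two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
K = X @ X.T                         # linear kernel
alpha = svm_dual_accelerated(K, y, C=1.0)
w = (alpha * y) @ X                 # primal weights recovered for the linear kernel
print("training accuracy:", np.mean(np.sign(X @ w) == y))  # → 1.0 on this separable toy set
```

The momentum weight (t-1)/(t+2) and the 1/L step size are the standard choices that give the O(1/t²) convergence rate accelerated methods are used for; the only change a box-constrained dual requires over plain gradient ascent is the clip-projection after each step.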
Related papers
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization [0.6445087473595953]
Large language models (LLMs) demonstrate outstanding performance on a variety of machine learning tasks.
However, deploying LLM inference poses challenges due to its high compute and memory requirements.
We present Tender, an algorithm-hardware co-design solution that enables efficient deployment of LLM inference at low precision.
arXiv Detail & Related papers (2024-06-16T09:51:55Z) - Misam: Using ML in Dataflow Selection of Sparse-Sparse Matrix Multiplication [0.8363939984237685]
Sparse matrix-matrix multiplication (SpGEMM) is a critical operation in scientific computing, graph analytics, and deep learning.
Traditional hardware accelerators are tailored for specific sparsity patterns with fixed dataflow schemes.
This paper presents a machine learning-based approach for adaptively selecting the most appropriate dataflow scheme for SpGEMM tasks.
arXiv Detail & Related papers (2024-06-14T16:36:35Z) - Multi-class Support Vector Machine with Maximizing Minimum Margin [67.51047882637688]
Support Vector Machine (SVM) is a prominent machine learning technique widely applied in pattern recognition tasks.
We propose a novel method for multi-class SVM that incorporates pairwise class loss considerations and maximizes the minimum margin.
Empirical evaluations demonstrate the effectiveness and superiority of our proposed method over existing multi-classification methods.
arXiv Detail & Related papers (2023-12-11T18:09:55Z) - Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least-squares support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-08-30T14:28:26Z) - Multi-constrained Symmetric Nonnegative Latent Factor Analysis for
Accurately Representing Large-scale Undirected Weighted Networks [2.1797442801107056]
An Undirected Weighted Network (UWN) is frequently encountered in big-data-related applications.
An analysis model should carefully account for symmetric topology when describing a UWN's intrinsic symmetry.
This paper proposes a Multi-constrained Symmetric Nonnegative Latent-factor-analysis model built on two-fold ideas.
arXiv Detail & Related papers (2023-06-06T14:13:16Z) - Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z) - Adaptive Discrete Communication Bottlenecks with Dynamic Vector
Quantization [76.68866368409216]
We propose learning to dynamically select discretization tightness conditioned on inputs.
We show that dynamically varying tightness in communication bottlenecks can improve model performance on visual reasoning and reinforcement learning tasks.
arXiv Detail & Related papers (2022-02-02T23:54:26Z) - Multi-view learning with privileged weighted twin support vector machine [0.0]
The weighted twin support vector machine (WLTSVM) mines as much potential similarity information in the samples as possible to mitigate the common shortcoming of non-parallel-plane classifiers.
Compared with the twin support vector machine (TWSVM), it reduces time complexity by deleting superfluous constraints using inter-class K-Nearest Neighbors (KNN).
In this paper, we propose multi-view learning with privileged weighted twin support vector machines (MPWTSVM).
arXiv Detail & Related papers (2022-01-27T03:49:53Z) - Learning Log-Determinant Divergences for Positive Definite Matrices [47.61701711840848]
In this paper, we propose to learn similarity measures in a data-driven manner.
We capitalize on the alpha-beta log-det divergence, a meta-divergence parametrized by scalars alpha and beta.
Our key idea is to cast these parameters in a continuum and learn them from data.
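For context, the alpha-beta log-det divergence referred to above is commonly written as follows (this sketch follows the definition of Cichocki, Cruces, and Amari for symmetric positive definite matrices P and Q; normalization conventions may differ slightly from the paper, so verify against the source):

```latex
D^{(\alpha,\beta)}_{AB}(P \,\|\, Q)
  = \frac{1}{\alpha\beta}
    \log\det\!\left(
      \frac{\alpha \,(PQ^{-1})^{\beta} + \beta \,(PQ^{-1})^{-\alpha}}{\alpha + \beta}
    \right),
  \qquad \alpha \neq 0,\ \beta \neq 0,\ \alpha + \beta \neq 0 .
```

The excluded parameter values are handled by taking limits, and special choices of alpha and beta recover familiar divergences on positive definite matrices, which is what makes learning (alpha, beta) from data attractive.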
arXiv Detail & Related papers (2021-04-13T19:09:43Z) - Explainable Matrix -- Visualization for Global and Local
Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.