Majorization-Minimization for sparse SVMs
- URL: http://arxiv.org/abs/2308.16858v1
- Date: Thu, 31 Aug 2023 17:03:16 GMT
- Title: Majorization-Minimization for sparse SVMs
- Authors: Alessandro Benfenati, Emilie Chouzenoux, Giorgia Franchini, Salla Latva-Äijö, Dominik Narnhofer, Jean-Christophe Pesquet, Sebastian J. Scott, Mahsa Yousefi
- Abstract summary: Support Vector Machines (SVMs) were introduced several decades ago for performing binary classification tasks in a supervised framework.
They often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena.
In this work, we investigate the training of SVMs through a smooth sparse-promoting-regularized squared hinge loss minimization.
- Score: 46.99165837639182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several decades ago, Support Vector Machines (SVMs) were introduced for
performing binary classification tasks, under a supervised framework. Nowadays,
they often outperform other supervised methods and remain one of the most
popular approaches in the machine learning arena. In this work, we investigate
the training of SVMs through a smooth sparse-promoting-regularized squared
hinge loss minimization. This choice paves the way to the application of quick
training methods built on majorization-minimization approaches, benefiting from
the Lipschitz differentiability of the loss function. Moreover, the proposed
approach allows us to handle sparsity-preserving regularizers promoting the
selection of the most significant features, thus enhancing performance.
Numerical tests and comparisons conducted on three different datasets
demonstrate the good performance of the proposed methodology in terms of
qualitative metrics (accuracy, precision, recall, and F1 score) as well as
computational cost.
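To make the training scheme concrete, here is a minimal sketch (not the authors' code) under explicit assumptions: a linear model, the smoothed-l1 penalty psi(t) = sqrt(t^2 + delta^2), a quadratic majorant of the squared hinge loss obtained from its 2-Lipschitz gradient, and a half-quadratic majorant of the penalty. Under these choices each MM iteration reduces to a single linear solve; the paper's actual regularizers and majorization choices may differ, and the function name `mm_sparse_svm` and its parameters are hypothetical.

```python
import numpy as np

def mm_sparse_svm(X, y, lam=0.1, delta=1e-3, n_iter=50):
    """Hypothetical MM sketch: linear SVM with squared hinge loss plus a
    smoothed-l1 penalty psi(t) = sqrt(t^2 + delta^2).

    Loss majorant: descent lemma with fixed curvature 2 Z^T Z (the gradient
    of max(0, 1-u)^2 is 2-Lipschitz). Penalty majorant: half-quadratic,
    psi(t) <= psi(t_k) + (t^2 - t_k^2) / (2 sqrt(t_k^2 + delta^2)).
    Each MM step therefore solves one symmetric positive-definite system.
    """
    n, d = X.shape
    Z = y[:, None] * X                        # rows z_i = y_i * x_i
    H = 2.0 * Z.T @ Z                         # fixed majorizing curvature
    w = np.zeros(d)
    for _ in range(n_iter):
        u = Z @ w                             # margins y_i <x_i, w>
        g = -2.0 * np.maximum(0.0, 1.0 - u)   # derivative of the squared hinge
        D = lam / np.sqrt(w ** 2 + delta ** 2)  # half-quadratic weights
        # minimize the quadratic majorant: (H + diag(D)) w = H w_k - Z^T g
        w = np.linalg.solve(H + np.diag(D), H @ w - Z.T @ g)
    return w

# Toy run on synthetic data with a sparse ground-truth weight vector
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
w_hat = mm_sparse_svm(X, y)
print("largest |w| coordinates:", np.argsort(-np.abs(w_hat))[:3])
```

On data like this toy example, the half-quadratic weights shrink coordinates that stay near zero across iterations, which is how the smooth penalty promotes feature selection.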
Related papers
- Improving the Evaluation and Actionability of Explanation Methods for Multivariate Time Series Classification [4.588028371034407]
We focus on analyzing InterpretTime, a recent evaluation methodology for attribution methods applied to MTSC.
We showcase some significant weaknesses of the original methodology and propose ideas to improve its accuracy and efficiency.
We find that perturbation-based methods such as SHAP and Feature Ablation work well across a set of datasets.
arXiv Detail & Related papers (2024-06-18T11:18:46Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Smooth Ranking SVM via Cutting-Plane Method [6.946903076677842]
We develop a prototype learning approach that relies on the cutting-plane method, similar to Ranking SVM, to maximize AUC.
Our algorithm learns simpler models by iteratively introducing cutting planes, thus overfitting is prevented in an unconventional way.
Based on the experiments conducted on 73 binary classification datasets, our method yields the best test AUC in 25 datasets among its relevant competitors.
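The summary names the mechanism but not its details; the toy sketch below is a guess at the flavour, not the paper's algorithm: grow a working set with the currently most violated ranking constraint w.(x_pos - x_neg) >= 1, then re-fit on that working set (here by plain subgradient steps). All names and parameters are hypothetical.

```python
import numpy as np

def cutting_plane_rank_svm(Xp, Xn, C=1.0, rounds=30, inner=200, lr=0.01):
    """Toy cutting-plane flavour of a ranking SVM (hypothetical sketch).

    Each round adds the currently most violated ranking constraint
    w.(x_pos - x_neg) >= 1 to a working set, then re-fits w on that set
    by subgradient steps on 0.5*||w||^2 + C * sum of hinge terms.
    """
    d = Xp.shape[1]
    w = np.zeros(d)
    work = []                                    # working set of difference vectors
    for _ in range(rounds):
        margins = (Xp @ w)[:, None] - (Xn @ w)[None, :]
        i, j = np.unravel_index(np.argmin(margins), margins.shape)
        if margins[i, j] >= 1.0:                 # every constraint satisfied: stop
            break
        work.append(Xp[i] - Xn[j])
        D = np.array(work)
        for _ in range(inner):                   # re-fit on the working set
            viol = (D @ w) < 1.0
            grad = w - C * D[viol].sum(axis=0)
            w -= lr * grad
    return w

# Usage: ranking positives above negatives raises the empirical AUC
rng = np.random.default_rng(0)
Xp = rng.normal(loc=+1.0, size=(60, 5))
Xn = rng.normal(loc=-1.0, size=(60, 5))
w = cutting_plane_rank_svm(Xp, Xn)
print("empirical AUC:", np.mean((Xp @ w)[:, None] > (Xn @ w)[None, :]))
```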
arXiv Detail & Related papers (2024-01-25T18:47:23Z)
- Multi-class Support Vector Machine with Maximizing Minimum Margin [67.51047882637688]
Support Vector Machine (SVM) is a prominent machine learning technique widely applied in pattern recognition tasks.
We propose a novel method for multi-class SVM that incorporates pairwise class loss considerations and maximizes the minimum margin.
Empirical evaluations demonstrate the effectiveness and superiority of our proposed method over existing multi-classification methods.
arXiv Detail & Related papers (2023-12-11T18:09:55Z)
- Efficient Training of One Class Classification-SVMs [0.0]
This study examines the use of a highly effective training method to conduct one-class classification.
In this paper, an effective algorithm for dual soft-margin one-class SVM training is presented.
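The summary leaves the training algorithm itself unspecified; as a point of reference only, the snippet below shows the standard dual soft-margin one-class SVM exposed by scikit-learn (`sklearn.svm.OneClassSVM`), i.e. the model class being trained, not the paper's accelerated solver.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))                  # training data: the "normal" class only
X_test = np.vstack([rng.normal(size=(50, 2)),        # held-out inliers
                    rng.uniform(-6, 6, size=(50, 2))])  # scattered outliers

# nu upper-bounds the fraction of margin errors and
# lower-bounds the fraction of support vectors
clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(X_train)
pred = clf.predict(X_test)                           # +1 inlier, -1 outlier
print("flagged as outliers:", int(np.sum(pred == -1)))
```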
arXiv Detail & Related papers (2023-09-28T15:35:16Z)
- An alternative to SVM Method for Data Classification [0.0]
Support vector machine (SVM) is a popular kernel method for data classification.
The method suffers from some weaknesses, including long processing time and risk of failure of the optimization process in high-dimensional cases.
In this paper, an alternative method is proposed with similar performance and a notable improvement on the aforementioned shortcomings.
arXiv Detail & Related papers (2023-08-20T14:09:01Z)
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is the lack of a clustering-friendly property in the embedding space.
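As a minimal illustration of the clustering-then-pseudo-label idea (CACTUs' construction of few-shot meta-learning tasks from the cluster partitions is omitted, and the embeddings here are stand-in random vectors rather than the output of a real encoder):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 16))   # stand-in for embeddings from an unsupervised encoder

# Cluster the embedding space; cluster ids become pseudo-labels
pseudo = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Z)

# Any supervised algorithm can now train on (Z, pseudo) as if it were labeled
clf = LogisticRegression(max_iter=1000).fit(Z, pseudo)
print("pseudo-label classes:", np.unique(pseudo))
```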
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we start an early trial to consider the problem of learning multiclass scoring functions via optimizing multiclass AUC metrics.
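The paper's direct optimization of multiclass AUC is beyond a short snippet; the sketch below only shows how the target metric itself is computed, using scikit-learn's one-vs-one ("ovo") averaging of pairwise AUCs, so the quantity being maximized is concrete.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Three-class toy problem; any probabilistic classifier yields class scores
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)

# 'ovo' averages the AUC over all unordered class pairs
print("multiclass AUC:", roc_auc_score(y_te, proba, multi_class="ovo"))
```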
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
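The summary names a prototype-centered contrastive loss without giving its form; one plausible form, written as an assumption rather than PAL's exact objective, swaps the usual direction: each class prototype is scored against the pool of queries instead of each query against the prototypes.

```python
import numpy as np

def prototype_centered_loss(prototypes, queries, query_labels, tau=0.1):
    """Hypothetical prototype-centered contrastive loss.

    For each class prototype, a softmax is taken over its cosine
    similarities to all queries; the loss rewards placing probability
    mass on queries belonging to that prototype's class.
    """
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    Qn = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    sim = (P @ Qn.T) / tau                                   # (K, Q) similarities
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))  # softmax over queries
    loss, counted = 0.0, 0
    for k in range(len(prototypes)):
        own = np.where(query_labels == k)[0]                 # queries of class k
        if own.size:
            loss += -log_p[k, own].mean()
            counted += 1
    return loss / max(counted, 1)

# Toy usage with random prototypes, queries, and labels
rng = np.random.default_rng(0)
protos = rng.normal(size=(5, 32))            # one prototype per class
queries = rng.normal(size=(40, 32))
labels = rng.integers(0, 5, size=40)
print(prototype_centered_loss(protos, queries, labels))
```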
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.