GL-TSVM: A robust and smooth twin support vector machine with guardian loss function
- URL: http://arxiv.org/abs/2408.16336v1
- Date: Thu, 29 Aug 2024 08:14:20 GMT
- Title: GL-TSVM: A robust and smooth twin support vector machine with guardian loss function
- Authors: Mushir Akhtar, M. Tanveer, Mohd. Arshad
- Abstract summary: We introduce the guardian loss (G-loss), a novel loss function distinguished by its asymmetric, bounded, and smooth characteristics.
To adhere to the structural risk minimization (SRM) principle, we incorporate a regularization term into the objective function of GL-TSVM.
The experimental analysis on UCI and KEEL datasets substantiates the effectiveness of the proposed GL-TSVM.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Twin support vector machine (TSVM), a variant of support vector machine (SVM), has garnered significant attention because it solves two smaller quadratic programming problems instead of one large one, reducing computational complexity by roughly $3/4$ compared to SVM. However, due to its use of the hinge loss function, TSVM is sensitive to outliers and noise. To remedy this, we introduce the guardian loss (G-loss), a novel loss function distinguished by its asymmetric, bounded, and smooth characteristics. We then fuse the proposed G-loss function into the TSVM framework, yielding a robust and smooth classifier termed GL-TSVM. Further, to adhere to the structural risk minimization (SRM) principle and reduce overfitting, we incorporate a regularization term into the objective function of GL-TSVM. To address the optimization challenges of GL-TSVM, we devise an efficient iterative algorithm. The experimental analysis on UCI and KEEL datasets substantiates the effectiveness of the proposed GL-TSVM in comparison to the baseline models. Moreover, to showcase its efficacy in the biomedical domain, we evaluate GL-TSVM on the breast cancer (BreaKHis) and schizophrenia datasets. The outcomes demonstrate the competitiveness of the proposed GL-TSVM against the baseline models.
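To make the robustness argument concrete, here is a minimal sketch contrasting the hinge loss with a generic bounded, smooth surrogate. The `bounded_smooth_loss` form below is an illustrative stand-in, not the paper's actual G-loss formula (which also encodes asymmetry); it only shows how saturation caps an outlier's influence.

```python
import numpy as np

def hinge_loss(margin):
    """Standard hinge loss: unbounded, so one distant outlier
    can dominate the training objective."""
    return np.maximum(0.0, 1.0 - margin)

def bounded_smooth_loss(margin, a=1.0):
    """Illustrative bounded, smooth loss (NOT the paper's G-loss).
    Confident correct predictions (margin >= 1) cost nothing; the
    penalty saturates at 1, capping the influence of outliers."""
    t = np.maximum(0.0, 1.0 - margin)    # hinge-style residual
    return 1.0 - np.exp(-(t ** 2) / a)   # smooth in the margin, saturates at 1

# An outlier at margin -10 costs 11.0 under hinge loss but ~1.0 here:
print(hinge_loss(np.array([-10.0])), bounded_smooth_loss(np.array([-10.0])))
```

Because such a loss is smooth everywhere, iterative gradient-style solvers of the kind the abstract mentions can be applied directly, unlike with the non-differentiable hinge.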
Related papers
- Enhancing Robustness and Efficiency of Least Square Twin SVM via Granular Computing [0.2999888908665658]
In the domain of machine learning, least square twin support vector machine (LSTSVM) stands out as one of the state-of-the-art models.
However, LSTSVM suffers from sensitivity to noise and outliers, overlooks the structural risk minimization (SRM) principle, and is unstable under resampling.
We propose the robust granular ball LSTSVM (GBLSTSVM), which is trained using granular balls instead of original data points.
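The granular-ball idea replaces individual (possibly noisy) training points with coarse balls summarized by a center, radius, and majority label. A minimal sketch of one plausible construction follows; the purity-driven recursive split is an assumption for illustration, not necessarily the exact procedure of GBLSTSVM, and integer class labels are assumed.

```python
import numpy as np

def granular_balls(X, y, purity=0.9, min_size=4):
    """Summarize (X, y) as (center, radius, label) balls via a
    purity-driven recursive split (illustrative construction)."""
    balls = []

    def emit(idx):
        center = X[idx].mean(axis=0)
        radius = np.linalg.norm(X[idx] - center, axis=1).mean()
        balls.append((center, radius, np.bincount(y[idx]).argmax()))

    def split(idx):
        majority = np.bincount(y[idx]).argmax()
        if np.mean(y[idx] == majority) >= purity or len(idx) <= min_size:
            emit(idx)
            return
        # crude two-seed split: first point vs. the point farthest from it
        d0 = np.linalg.norm(X[idx] - X[idx[0]], axis=1)
        far = idx[d0.argmax()]
        d1 = np.linalg.norm(X[idx] - X[far], axis=1)
        left, right = idx[d0 <= d1], idx[d0 > d1]
        if len(left) == 0 or len(right) == 0:
            emit(idx)
            return
        split(left)
        split(right)

    split(np.arange(len(X)))
    return balls
```

Training on the (far fewer) balls both shrinks the optimization problem and averages away noisy individual points, which is the robustness-plus-efficiency claim above.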
arXiv Detail & Related papers (2024-10-22T18:13:01Z)
- Granular Ball Twin Support Vector Machine [0.0]
Twin support vector machine (TSVM) is an emerging machine learning model with versatile applicability in classification and regression endeavors.
TSVM confronts formidable obstacles to its efficiency and applicability on large-scale datasets.
We propose the granular ball twin support vector machine (GBTSVM) and a novel large-scale granular ball twin support vector machine (LS-GBTSVM).
We conduct a comprehensive evaluation of the GBTSVM and LS-GBTSVM models on benchmark datasets from UCI, KEEL, and NDC.
arXiv Detail & Related papers (2024-10-07T06:20:36Z)
- Multiview learning with twin parametric margin SVM [0.0]
Multiview learning (MVL) seeks to leverage complementary information from diverse perspectives of the same data.
We propose the multiview twin parametric margin support vector machine (MvTPMSVM).
MvTPMSVM constructs parametric margin hyperplanes corresponding to both classes, aiming to regulate and manage the impact of the heteroscedastic noise structure.
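Twin-style classifiers such as MvTPMSVM fit one hyperplane per class and label a point by which plane it lies closer to. A minimal single-view sketch of that standard decision rule follows; the weights `w1, b1, w2, b2` are assumed to come from an already-solved model, and the multiview coupling of MvTPMSVM is not reproduced here.

```python
import numpy as np

def twin_predict(X, w1, b1, w2, b2):
    """Assign each sample to the class whose hyperplane it is
    closer to -- the standard twin-SVM decision rule."""
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)
```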
arXiv Detail & Related papers (2024-08-04T10:16:11Z)
- AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer [54.713778961605115]
Vision Transformer (ViT) has become one of the most prevailing fundamental backbone networks in the computer vision community.
We propose a novel non-uniform quantizer, dubbed the Adaptive Logarithm (AdaLog) quantizer.
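Logarithmic quantizers suit the long-tailed post-softmax activations in ViTs because they spend resolution near zero, where most values fall. The sketch below is a plain fixed-base log2 quantizer for values in (0, 1]; AdaLog's actual contribution, an adaptive learnable logarithm base, is not reproduced here.

```python
import numpy as np

def log2_quantize(p, bits=4):
    """Round the negative base-2 exponent of each value to an integer
    code, then reconstruct as a power of two (plain log2 scheme)."""
    codes = np.clip(np.round(-np.log2(np.maximum(p, 1e-12))), 0, 2**bits - 1)
    return 2.0 ** (-codes)

probs = np.array([0.6, 0.2, 0.15, 0.05])
print(log2_quantize(probs))  # [0.5, 0.25, 0.125, 0.0625]
```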
arXiv Detail & Related papers (2024-07-17T18:38:48Z)
- Advancing Supervised Learning with the Wave Loss Function: A Robust and Smooth Approach [0.0]
We present a novel contribution to the realm of supervised machine learning: an asymmetric loss function named wave loss.
We incorporate the proposed wave loss function into the least squares setting of support vector machines (SVM) and twin support vector machines (TSVM).
To empirically showcase the effectiveness of the proposed Wave-SVM and Wave-TSVM, we evaluate them on benchmark UCI and KEEL datasets.
arXiv Detail & Related papers (2024-04-28T07:32:00Z)
- Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least square support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-08-30T14:28:26Z)
- Greedy based Value Representation for Optimal Coordination in Multi-agent Reinforcement Learning [64.05646120624287]
We derive the expression of the joint Q value function of linear value decomposition (LVD) and monotonic value decomposition (MVD).
To ensure optimal consistency, the optimal node is required to be the unique STN.
Our method outperforms state-of-the-art baselines in experiments on various benchmarks.
arXiv Detail & Related papers (2022-11-22T08:14:50Z)
- Handling Imbalanced Classification Problems With Support Vector Machines via Evolutionary Bilevel Optimization [73.17488635491262]
Support vector machines (SVMs) are popular learning algorithms for binary classification problems.
This article introduces EBCS-SVM: evolutionary bilevel cost-sensitive SVMs.
arXiv Detail & Related papers (2022-04-21T16:08:44Z)
- MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints [70.76761166614511]
We present a novel self-supervised algorithm named MotionHint for monocular visual odometry (VO).
Our MotionHint algorithm can be easily applied to existing open-sourced state-of-the-art SSM-VO systems.
arXiv Detail & Related papers (2021-09-14T15:35:08Z)
- Estimating Average Treatment Effects with Support Vector Machines [77.34726150561087]
Support vector machine (SVM) is one of the most popular classification algorithms in the machine learning literature.
We adapt SVM as a kernel-based weighting procedure that minimizes the maximum mean discrepancy between the treatment and control groups.
We characterize the bias of causal effect estimation arising from this trade-off, connecting the proposed SVM procedure to the existing kernel balancing methods.
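The balancing view above hinges on the maximum mean discrepancy (MMD) between treated and control covariate distributions. A small sketch of the (biased) empirical squared MMD with an RBF kernel follows; the kernel choice and bandwidth `gamma` are illustrative, and a weighting procedure like the one described would choose weights to shrink this quantity rather than merely measure it.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X_treat, X_ctrl, gamma=1.0):
    """Biased empirical squared MMD between treatment and control groups."""
    return (rbf_kernel(X_treat, X_treat, gamma).mean()
            + rbf_kernel(X_ctrl, X_ctrl, gamma).mean()
            - 2.0 * rbf_kernel(X_treat, X_ctrl, gamma).mean())
```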
arXiv Detail & Related papers (2021-02-23T20:22:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.