Multi-view learning with privileged weighted twin support vector machine
- URL: http://arxiv.org/abs/2201.11306v1
- Date: Thu, 27 Jan 2022 03:49:53 GMT
- Authors: Ruxin Xu, Huiru Wang
- Abstract summary: The weighted twin support vector machine (WLTSVM) mines as much potential similarity information in samples as possible to mitigate a common shortcoming of non-parallel plane classifiers.
Compared with the twin support vector machine (TWSVM), it reduces time complexity by deleting superfluous constraints using inter-class K-Nearest Neighbors (KNN).
In this paper, we propose multi-view learning with privileged weighted twin support vector machines (MPWTSVM).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The weighted twin support vector machine (WLTSVM) mines as much
potential similarity information in samples as possible to mitigate a common
shortcoming of non-parallel plane classifiers. Compared with the twin support
vector machine (TWSVM), it reduces time complexity by deleting superfluous
constraints using inter-class K-Nearest Neighbors (KNN). Multi-view learning
(MVL) is a newly developing direction of machine learning that focuses on
acquiring information from data described by multiple feature sets. In this
paper, we propose multi-view learning with privileged weighted twin support
vector machines (MPWTSVM). It not only inherits the advantages of WLTSVM but
also has its own characteristics. First, it enhances generalization ability by
mining intra-class information within each view. Second, it reduces redundant
constraints with the help of inter-class information, thus improving running
speed. Most importantly, as a multi-view classification model it follows the
consensus and complementarity principles simultaneously. The consensus
principle is realized by minimizing the coupling terms of the two views in the
original objective function. The complementarity principle is achieved by
establishing privileged-information paradigms within MVL. A standard quadratic
programming solver is used to solve the problem. Compared with multi-view
classification models such as SVM-2K, MVTSVM, MCPK, and PSVM-2V, our model
achieves better accuracy and classification efficiency. Experimental results
on 45 binary data sets demonstrate the effectiveness of our method.
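The two mechanical ingredients named in the abstract, inter-class KNN pruning of redundant constraints and a box-constrained dual solved by a standard QP routine, can be sketched roughly as below. This is not the authors' implementation: the function names are illustrative, and a simple projected-gradient loop stands in for the "standard quadratic programming solver".

```python
import numpy as np

def interclass_knn_mask(X_pos, X_neg, k=2):
    """Flag, for each positive sample, its k nearest negative
    neighbours; negatives never flagged are treated as redundant
    and their constraints are dropped (the WLTSVM pruning idea)."""
    # pairwise squared Euclidean distances, shape (n_pos, n_neg)
    d2 = ((X_pos[:, None, :] - X_neg[None, :, :]) ** 2).sum(axis=-1)
    mask = np.zeros(len(X_neg), dtype=bool)
    for row in d2:
        mask[np.argsort(row)[:k]] = True
    return mask

def solve_box_qp(Q, e, c, iters=500, lr=0.01):
    """Projected-gradient stand-in for a standard QP solver:
    maximise e^T a - 0.5 a^T Q a subject to the box 0 <= a <= c."""
    a = np.zeros(len(e))
    for _ in range(iters):
        a += lr * (e - Q @ a)   # ascent step on the dual objective
        a = np.clip(a, 0.0, c)  # project back onto the box
    return a
```

In a twin-SVM dual, Q would be assembled from the data matrices of the two classes (and, in MPWTSVM, the coupling and privileged-information terms); it is left abstract here.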
Related papers
- Queryable Prototype Multiple Instance Learning with Vision-Language Models for Incremental Whole Slide Image Classification [10.667645628712542]
This paper proposes the first Vision-Language-based framework with Queryable Prototype Multiple Instance Learning (QPMIL-VL) specially designed for incremental WSI classification.
Experiments on four TCGA datasets demonstrate that our QPMIL-VL framework is effective for incremental WSI classification.
arXiv Detail & Related papers (2024-10-14T14:49:34Z) - S^2MVTC: a Simple yet Efficient Scalable Multi-View Tensor Clustering [38.35594663863098]
Experimental results on six large-scale multi-view datasets demonstrate that S2MVTC significantly outperforms state-of-the-art algorithms in terms of clustering performance and CPU execution time.
arXiv Detail & Related papers (2024-03-14T05:00:29Z) - Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least squares support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-08-30T14:28:26Z) - Enhancing Pattern Classification in Support Vector Machines through Matrix Formulation [0.0]
The reliance on vector-based formulations in existing SVM-based models poses limitations regarding flexibility and ease of incorporating additional terms to handle specific challenges.
We introduce a matrix formulation for SVM that effectively addresses these constraints.
Experimental evaluations on multilabel and multiclass datasets demonstrate that Matrix SVM achieves superior time efficacy.
arXiv Detail & Related papers (2023-07-18T15:56:39Z) - Dual Learning for Large Vocabulary On-Device ASR [64.10124092250128]
Dual learning is a paradigm for semi-supervised machine learning that seeks to leverage unsupervised data by solving two opposite tasks at once.
We provide an analysis of an on-device-sized streaming conformer trained on the entirety of Librispeech, showing relative WER improvements of 10.7%/5.2% without an LM and 11.7%/16.4% with an LM.
arXiv Detail & Related papers (2023-01-11T06:32:28Z) - GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose a Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z) - A fast learning algorithm for One-Class Slab Support Vector Machines [1.1613446814180841]
This paper proposes a fast training method for One-Class Slab SVMs using an updated Sequential Minimal Optimization (SMO) algorithm.
The results indicate that this training method scales better to large sets of training data than other Quadratic Programming (QP) solvers.
arXiv Detail & Related papers (2020-11-06T09:16:39Z) - Dual Adversarial Auto-Encoders for Clustering [152.84443014554745]
We propose Dual Adversarial Auto-encoder (Dual-AAE) for unsupervised clustering.
By performing variational inference on the objective function of Dual-AAE, we derive a new reconstruction loss which can be optimized by training a pair of Auto-encoders.
Experiments on four benchmarks show that Dual-AAE achieves superior performance over state-of-the-art clustering methods.
arXiv Detail & Related papers (2020-08-23T13:16:34Z) - Optimally Combining Classifiers for Semi-Supervised Learning [43.77365242185884]
We propose a new semi-supervised learning method that is able to adaptively combine the strengths of Xgboost and transductive support vector machine.
The experimental results on the UCI data sets and real commercial data set demonstrate the superior classification performance of our method over the five state-of-the-art algorithms.
arXiv Detail & Related papers (2020-06-07T09:28:34Z) - On Coresets for Support Vector Machines [61.928187390362176]
A coreset is a small, representative subset of the original data points.
We show that our algorithm can be used to extend the applicability of any off-the-shelf SVM solver to streaming, distributed, and dynamic data settings.
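The coreset idea in the entry above can be sketched very loosely as follows. The paper's construction uses importance (sensitivity) sampling; this sketch substitutes uniform sampling purely to illustrate the weighting and the streaming merge-and-reduce pattern, and all names are hypothetical.

```python
import numpy as np

def uniform_coreset(X, y, m, rng):
    """Naive stand-in for sensitivity-based sampling: keep m points
    uniformly at random, each weighted n/m so that weighted sums
    over the coreset remain unbiased estimates of full-data sums."""
    idx = rng.choice(len(X), size=m, replace=False)
    return X[idx], y[idx], np.full(m, len(X) / m)

def merge_and_reduce(chunks, m, rng):
    """Streaming pattern: repeatedly compress the union of the
    current summary and the next chunk back down to m points, so
    an off-the-shelf SVM solver only ever sees m samples."""
    Xc, yc = chunks[0]
    Xc, yc = Xc[:0], yc[:0]  # start from an empty summary
    for X, y in chunks:
        Xu = np.vstack([Xc, X])
        yu = np.concatenate([yc, y])
        idx = rng.choice(len(Xu), size=min(m, len(Xu)), replace=False)
        Xc, yc = Xu[idx], yu[idx]
    return Xc, yc
```

Any standard SVM solver that accepts sample weights can then be trained on the m retained points in place of the full stream.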
arXiv Detail & Related papers (2020-02-15T23:25:12Z) - Multi-view Deep Subspace Clustering Networks [64.29227045376359]
Multi-view subspace clustering aims to discover the inherent structure of data by fusing multiple views of complementary information.
We propose the Multi-view Deep Subspace Clustering Networks (MvDSCN), which learns a multi-view self-representation matrix in an end-to-end manner.
The MvDSCN unifies multiple backbones to boost clustering performance and avoid the need for model selection.
arXiv Detail & Related papers (2019-08-06T06:44:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.