MLPSVM: A new parallel support vector machine to multi-label learning
- URL: http://arxiv.org/abs/2004.05849v1
- Date: Mon, 13 Apr 2020 10:04:25 GMT
- Title: MLPSVM: A new parallel support vector machine to multi-label learning
- Authors: Yanghong Liu and Jia Lu and Tingting Li
- Abstract summary: This paper proposes a multi-label learning algorithm that can also be used for single-label classification.
It is based on standard support vector machines and replaces the original single decision hyperplane with two parallel decision hyperplanes, which we call the multi-label parallel support vector machine (MLPSVM).
- Score: 2.370531727442524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-label learning has attracted the attention of the machine learning
community. The problem-transformation method Binary Relevance adapts a familiar
single-label algorithm to the multi-label setting. Binary Relevance is widely
used because of its simple structure and efficiency, but it does not consider
the links between labels, which makes some tasks cumbersome to handle. This
paper proposes a multi-label learning algorithm that can also be used for
single-label classification. It is based on standard support vector machines
and replaces the original single decision hyperplane with two parallel decision
hyperplanes, which we call the multi-label parallel support vector machine
(MLPSVM). At the end of the article, MLPSVM is compared with other multi-label
learning algorithms. The experimental results show that the algorithm performs
well on the data sets considered.
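For context, the Binary Relevance baseline that the abstract contrasts MLPSVM against (one independent binary SVM per label, with no modeling of label correlations) can be sketched as below. This is a minimal illustration using scikit-learn's LinearSVC and synthetic placeholder data; it is not the MLPSVM formulation from the paper, which instead couples labels by using two parallel decision hyperplanes in a single SVM-style model.

```python
# Minimal Binary Relevance sketch: train one independent linear SVM per label.
# Illustrative only -- dataset and hyperparameters are placeholders, and this
# is the baseline problem-transformation method, NOT the proposed MLPSVM.
import numpy as np
from sklearn.svm import LinearSVC

def fit_binary_relevance(X, Y, C=1.0):
    """Fit one LinearSVC per column of the binary label matrix Y (n_samples x n_labels)."""
    models = []
    for j in range(Y.shape[1]):
        clf = LinearSVC(C=C)
        clf.fit(X, Y[:, j])  # each label is learned in isolation
        models.append(clf)
    return models

def predict_binary_relevance(models, X):
    """Stack the per-label predictions into a binary label matrix."""
    return np.column_stack([m.predict(X) for m in models])

# Toy usage with random data standing in for a real multi-label data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
Y = (rng.random(size=(200, 5)) > 0.7).astype(int)
models = fit_binary_relevance(X, Y)
Y_hat = predict_binary_relevance(models, X)
```

Because each per-label classifier above is trained separately, no information is shared between labels; this independence is exactly the limitation the abstract attributes to Binary Relevance.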
Related papers
- Automated Machine Learning for Multi-Label Classification [3.2634122554914002]
We devise a novel AutoML approach for single-label classification tasks, consisting of two algorithms at most.
We investigate how well AutoML approaches that form the state of the art for single-label classification tasks scale with the increased problem complexity of AutoML for multi-label classification.
arXiv Detail & Related papers (2024-02-28T09:40:36Z)
- Scalable Label Distribution Learning for Multi-Label Classification [43.52928088881866]
Multi-label classification (MLC) refers to the problem of tagging a given instance with a set of relevant labels.
Most existing MLC methods are based on the assumption that the correlation of two labels in each label pair is symmetric.
Most existing methods design learning processes associated with the number of labels, which makes their computational complexity a bottleneck when scaling up to large-scale output space.
arXiv Detail & Related papers (2023-11-28T06:52:53Z)
- Multi-Label Noise Transition Matrix Estimation with Label Correlations: Theory and Algorithm [73.94839250910977]
Noisy multi-label learning has garnered increasing attention due to the challenges posed by collecting large-scale accurate labels.
The introduction of transition matrices can help model multi-label noise and enable the development of statistically consistent algorithms.
We propose a novel estimator that leverages label correlations without the need for anchor points or precise fitting of noisy class posteriors.
arXiv Detail & Related papers (2023-09-22T08:35:38Z)
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Complementary to Multiple Labels: A Correlation-Aware Correction Approach [65.59584909436259]
We show theoretically how the estimated transition matrix in multi-class CLL could be distorted in multi-labeled cases.
We propose a two-step method to estimate the transition matrix from candidate labels.
arXiv Detail & Related papers (2023-02-25T04:48:48Z)
- Multi-Instance Partial-Label Learning: Towards Exploiting Dual Inexact Supervision [53.530957567507365]
In some real-world tasks, each training sample is associated with a candidate label set that contains one ground-truth label and some false positive labels.
In this paper, we formalize such problems as multi-instance partial-label learning (MIPL)
Existing multi-instance learning algorithms and partial-label learning algorithms are suboptimal for solving MIPL problems.
arXiv Detail & Related papers (2022-12-18T03:28:51Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Label Disentanglement in Partition-based Extreme Multilabel Classification [111.25321342479491]
We show that the label assignment problem in partition-based XMC can be formulated as an optimization problem.
We show that our method can successfully disentangle multi-modal labels, leading to state-of-the-art (SOTA) results on four XMC benchmarks.
arXiv Detail & Related papers (2021-06-24T03:24:18Z)
- A fast learning algorithm for One-Class Slab Support Vector Machines [1.1613446814180841]
This paper proposes a fast training method for One-Class Slab SVMs using an updated Sequential Minimal Optimization (SMO) algorithm.
The results indicate that this training method scales better to large sets of training data than other Quadratic Programming (QP) solvers.
arXiv Detail & Related papers (2020-11-06T09:16:39Z)
- Multilabel Classification by Hierarchical Partitioning and Data-dependent Grouping [33.48217977134427]
We exploit the sparsity of label vectors and the hierarchical structure to embed them in a low-dimensional space.
We present a novel data-dependent grouping approach, where we use a group construction based on a low-rank Nonnegative Matrix Factorization.
We then present a hierarchical partitioning approach that exploits the label hierarchy in large scale problems to divide up the large label space and create smaller sub-problems.
arXiv Detail & Related papers (2020-06-24T22:23:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.