Fast Multi-label Learning
- URL: http://arxiv.org/abs/2108.13570v1
- Date: Tue, 31 Aug 2021 01:07:42 GMT
- Title: Fast Multi-label Learning
- Authors: Xiuwen Gong, Dong Yuan, Wei Bao
- Abstract summary: The goal of this paper is to provide a simple method, yet with provable guarantees, which can achieve competitive performance without a complex training process.
- Score: 19.104773591885884
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Embedding approaches have become one of the most pervasive techniques for
multi-label classification. However, the training process of embedding methods
usually involves a complex quadratic or semidefinite programming problem, or
the model may even involve an NP-hard problem. Thus, such methods are
prohibitively expensive for large-scale applications. More importantly, much of the
literature has already shown that the binary relevance (BR) method is usually
good enough for some applications. Unfortunately, BR runs slowly due to its
linear dependence on the size of the input data. The goal of this paper is to
provide a simple method, yet with provable guarantees, which can achieve
competitive performance without a complex training process. To achieve our
goal, we provide a simple stochastic sketch strategy for multi-label
classification and present theoretical results from both algorithmic and
statistical learning perspectives. Our comprehensive empirical studies
corroborate our theoretical findings and demonstrate the superiority of the
proposed methods.
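The abstract does not spell out the sketch construction, so the following is only a minimal illustrative sketch of the two ingredients it names: a count-sketch-style random projection to compress features, followed by binary relevance (one simple linear classifier per label). The function names and the perceptron-style learner are assumptions for illustration, not the paper's actual algorithm.

```python
import random

def count_sketch(x, d_out, seed=0):
    """Project a feature vector into d_out dims with a count sketch:
    each input coordinate hashes to one output bucket with a random sign."""
    rng = random.Random(seed)
    buckets = [rng.randrange(d_out) for _ in range(len(x))]
    signs = [rng.choice((-1.0, 1.0)) for _ in range(len(x))]
    z = [0.0] * d_out
    for j, v in enumerate(x):
        z[buckets[j]] += signs[j] * v
    return z

def train_br(X, Y, n_labels, d_out=8, epochs=20, lr=0.1):
    """Binary relevance on sketched features: an independent
    perceptron-style linear classifier per label."""
    Z = [count_sketch(x, d_out) for x in X]
    W = [[0.0] * d_out for _ in range(n_labels)]
    for _ in range(epochs):
        for z, y in zip(Z, Y):
            for k in range(n_labels):
                score = sum(w * v for w, v in zip(W[k], z))
                target = 1.0 if k in y else -1.0
                if score * target <= 0:  # misclassified: perceptron update
                    for j in range(d_out):
                        W[k][j] += lr * target * z[j]
    return W

def predict(W, x, d_out=8):
    """Predict the set of labels whose linear score is positive."""
    z = count_sketch(x, d_out)
    return {k for k, w in enumerate(W)
            if sum(wj * vj for wj, vj in zip(w, z)) > 0}
```

Because the sketch is linear, training on compressed features keeps BR's per-label independence while replacing its linear dependence on the input dimension with the (much smaller) sketch dimension.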
Related papers
- ProPML: Probability Partial Multi-label Learning [12.814910734614351]
Partial Multi-label Learning (PML) is a type of weakly supervised learning where each training instance corresponds to a set of candidate labels, among which only some are true.
In this paper, we introduce ProPML, a novel probabilistic approach to this problem that extends binary cross entropy to the PML setup.
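The summary does not give ProPML's exact loss, so the following is only a hedged illustration of one common way to extend binary cross entropy to candidate label sets: non-candidates are treated as confident negatives, and the candidate set contributes a probabilistic-OR term asking that at least one candidate be predicted positive. The function name and formulation are assumptions, not the paper's method.

```python
import math

def pml_bce_loss(probs, candidates):
    """Illustrative partial-multi-label loss. Labels outside the
    candidate set get standard BCE with target 0; the candidate set
    contributes -log P(at least one candidate label is true)."""
    eps = 1e-12
    loss = 0.0
    for k, p in enumerate(probs):
        if k not in candidates:
            loss += -math.log(max(1.0 - p, eps))
    p_none = 1.0
    for k in candidates:
        p_none *= (1.0 - probs[k])  # P(no candidate is true)
    loss += -math.log(max(1.0 - p_none, eps))
    return loss
```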
arXiv Detail & Related papers (2024-03-12T12:40:23Z)
- Efficient Training of One Class Classification-SVMs [0.0]
This study examines the use of a highly effective training method to conduct one-class classification.
In this paper, an effective algorithm for dual soft-margin one-class SVM training is presented.
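The summary does not describe the authors' algorithm; as a rough illustration of what dual soft-margin one-class SVM training optimizes, here is a tiny SMO-style solver for the standard one-class SVM dual (minimize 0.5 * a^T K a subject to 0 <= a_i <= 1/(nu*n) and sum(a) = 1). The pairwise-update scheme is a generic textbook device, not the paper's method.

```python
def ocsvm_dual_smo(K, nu, n_iter=200):
    """Sketch of a pairwise solver for the one-class SVM dual.
    Pairwise mass transfers keep sum(a) = 1 satisfied at every step.
    Assumes 0 < nu <= 1 so the uniform start is feasible."""
    n = len(K)
    C = 1.0 / (nu * n)
    a = [1.0 / n] * n  # feasible starting point
    for _ in range(n_iter):
        # gradient of 0.5 * a^T K a is K a
        g = [sum(K[i][j] * a[j] for j in range(n)) for i in range(n)]
        i = max(range(n), key=lambda t: g[t])  # mass leaves here
        j = min(range(n), key=lambda t: g[t])  # mass arrives here
        if g[i] - g[j] < 1e-12:
            break  # approximate stationarity on the feasible set
        # exact line search for the pair, clipped to the box constraints
        denom = K[i][i] - 2.0 * K[i][j] + K[j][j]
        step = (g[i] - g[j]) / denom if denom > 1e-12 else a[i]
        step = max(0.0, min(step, a[i], C - a[j]))
        a[i] -= step
        a[j] += step
    return a
```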
arXiv Detail & Related papers (2023-09-28T15:35:16Z)
- An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective.
Our approach can be implemented in just a few lines of code using only off-the-shelf operations.
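The indirect idea of predicting which classes an unlabeled instance is NOT can be sketched as follows. The top-k selection rule and the loss are illustrative assumptions; the summary gives no implementation details.

```python
import math

def negative_pseudo_labels(probs, k=2):
    """Pick the k classes with the lowest predicted probability as
    negative pseudo-labels: classes this instance almost surely is not.
    This is often more reliable than guessing the single true class."""
    order = sorted(range(len(probs)), key=lambda c: probs[c])
    return order[:k]

def negative_loss(probs, negs):
    """BCE-style penalty pushing the chosen negative classes toward
    probability zero."""
    return -sum(math.log(1.0 - probs[c]) for c in negs)
```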
arXiv Detail & Related papers (2022-09-28T02:11:34Z)
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this limitation is the lack of a clustering-friendly property in the embedding space.
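The clustering-with-pseudo-labels recipe summarized above can be sketched as plain k-means over embeddings, with cluster ids serving as pseudo-labels from which supervised (meta-)learning tasks are built. This is a generic sketch, not the authors' exact pipeline; how well it works depends on the clustering-friendliness of the embedding space.

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, n_iter=20, seed=0):
    """Plain k-means; returns the cluster assignment of each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(n_iter):
        assign = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = [sum(dim) / len(members)
                              for dim in zip(*members)]
    return assign

def pseudo_label_tasks(embeddings, k):
    """Cluster unlabeled embeddings and use cluster ids as pseudo-labels
    for constructing supervised tasks (CACTUs-style idea, sketched)."""
    return kmeans(embeddings, k)
```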
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
- Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses the multiclass learning from label proportions (MCLLP) setting, in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
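The bag-wise constraint that most existing MCLLP methods are said to impose can be illustrated with a simple proportion-matching loss. The risk-consistent method the paper proposes is different; this sketch only shows the baseline idea being contrasted.

```python
def proportion_loss(bag_probs, target_props):
    """Bag-level baseline for learning from label proportions: the mean
    predicted class distribution over a bag should match the bag's given
    label proportions (squared error here)."""
    n = len(bag_probs)
    k = len(target_props)
    mean_pred = [sum(p[c] for p in bag_probs) / n for c in range(k)]
    return sum((m - t) ** 2 for m, t in zip(mean_pred, target_props))
```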
arXiv Detail & Related papers (2022-03-24T03:49:04Z)
- Efficient Performance Bounds for Primal-Dual Reinforcement Learning from Demonstrations [1.0609815608017066]
We consider large-scale Markov decision processes with an unknown cost function and address the problem of learning a policy from a finite set of expert demonstrations.
Existing inverse reinforcement learning methods come with strong theoretical guarantees, but are computationally expensive.
We introduce a novel bilinear saddle-point framework using Lagrangian duality to bridge the gap between theory and practice.
arXiv Detail & Related papers (2021-12-28T05:47:24Z)
- Unsupervised Learning for Robust Fitting: A Reinforcement Learning Approach [25.851792661168698]
We introduce a novel framework that learns to solve robust model fitting.
Unlike other methods, our work is agnostic to the underlying input features.
We empirically show that our method outperforms existing learning approaches.
arXiv Detail & Related papers (2021-03-05T07:14:00Z)
- Better scalability under potentially heavy-tailed feedback [6.903929927172917]
We study scalable alternatives to robust gradient descent (RGD) techniques that can be used when the losses and/or gradients can be heavy-tailed.
We focus computational effort on robustly choosing a strong candidate based on a collection of cheap sub-processes which can be run in parallel.
The exact selection process depends on the convexity of the underlying objective, but in all cases, our selection technique amounts to a robust form of boosting the confidence of weak learners.
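The robust selection step described above can be sketched as picking the candidate whose median distance to the other cheap sub-process outputs is smallest: if a majority of sub-processes landed near the true answer, that majority certifies the winner. The specific distance function and rule here are illustrative assumptions.

```python
import statistics

def robust_select(candidates, dist):
    """Return the candidate with the smallest median distance to all
    other candidates: a simple robust 'confidence boosting' step over
    the outputs of cheap parallel sub-processes."""
    best, best_score = None, float("inf")
    for i, c in enumerate(candidates):
        others = [dist(c, o) for j, o in enumerate(candidates) if j != i]
        score = statistics.median(others)
        if score < best_score:
            best, best_score = c, score
    return best
```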
arXiv Detail & Related papers (2020-12-14T08:56:04Z)
- An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives [54.29001037565384]
We propose a practical online method for solving a class of online distributionally robust optimization (DRO) problems.
Our studies demonstrate important applications in machine learning for improving the robustness of networks.
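A minimal illustration of the DRO reweighting idea (not the paper's actual method, which targets non-convex objectives and an online setting) is an exponentiated-gradient update that upweights high-loss groups, so the model trains against the worst-off groups.

```python
import math

def dro_weight_update(weights, group_losses, eta=0.5):
    """One step of a simple distributionally robust reweighting:
    groups with higher loss get exponentially more weight, then the
    weights are renormalized to a probability distribution."""
    w = [wi * math.exp(eta * li) for wi, li in zip(weights, group_losses)]
    z = sum(w)
    return [wi / z for wi in w]
```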
arXiv Detail & Related papers (2020-06-17T20:19:25Z)
- Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.