SPL-MLL: Selecting Predictable Landmarks for Multi-Label Learning
- URL: http://arxiv.org/abs/2008.06883v1
- Date: Sun, 16 Aug 2020 11:07:44 GMT
- Title: SPL-MLL: Selecting Predictable Landmarks for Multi-Label Learning
- Authors: Junbing Li, Changqing Zhang, Pengfei Zhu, Baoyuan Wu, Lei Chen,
Qinghua Hu
- Abstract summary: We propose to select a small subset of labels as landmarks which are easy to predict according to input (predictable) and can well recover the other possible labels (representative).
We employ the Alternating Direction Method (ADM) to solve our problem. Empirical studies on real-world datasets show that our method achieves superior classification performance over other state-of-the-art methods.
- Score: 87.27700889147144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although significant progress has been achieved, multi-label
classification is still challenging due to the complexity of correlations
among different labels. Furthermore, modeling the relationships between the
input and some (dull) classes further increases the difficulty of accurately
predicting all possible labels.
In this work, we propose to select a small subset of labels as landmarks which
are easy to predict according to input (predictable) and can well recover the
other possible labels (representative). Different from existing methods, which
separate landmark selection and landmark prediction in a two-step manner,
the proposed algorithm, termed Selecting Predictable Landmarks for Multi-Label
Learning (SPL-MLL), jointly conducts landmark selection, landmark prediction,
and label recovery in a unified framework, to ensure both the
representativeness and predictableness for selected landmarks. We employ the
Alternating Direction Method (ADM) to solve our problem. Empirical studies on
real-world datasets show that our method achieves superior classification
performance over other state-of-the-art methods.
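The landmark idea can be illustrated with a minimal sketch: treat the "representative" criterion as a least-squares reconstruction problem and greedily pick the label columns that best recover the full label matrix. This is only an illustrative simplification, not the paper's joint formulation (SPL-MLL selects landmarks together with the predictor and solves the problem with ADM); the greedy search and the toy data below are assumptions made for demonstration.

```python
import numpy as np

def select_landmarks(Y, k):
    """Greedily pick k landmark label columns that best reconstruct
    the remaining labels via least squares.

    Illustrative only: this covers the "representative" criterion in
    isolation, not SPL-MLL's joint selection/prediction objective.
    """
    n, L = Y.shape
    landmarks = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(L):
            if j in landmarks:
                continue
            A = Y[:, landmarks + [j]]           # candidate landmark block
            # least-squares recovery of all labels from the landmarks
            W, *_ = np.linalg.lstsq(A, Y, rcond=None)
            err = np.linalg.norm(Y - A @ W)     # reconstruction error
            if err < best_err:
                best_j, best_err = j, err
        landmarks.append(best_j)
    return landmarks

# Toy example: the third label is (approximately) the OR of the first two,
# so two landmark columns suffice to recover the whole label matrix well.
rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=(50, 2)).astype(float)
Y = np.hstack([Y, np.clip(Y[:, :1] + Y[:, 1:2], 0, 1)])
print(select_landmarks(Y, 2))
```

In the full method, predictability is enforced at the same time: a landmark is only useful if a classifier can map the input to it reliably, which is why the paper optimizes selection, prediction, and recovery in one unified problem rather than in this two-stage style.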
Related papers
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in semi-supervised learning, one cannot select the single most probable label as the pseudo-label in SSMLL, due to the multiple semantics contained in a single instance.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z) - Adaptive Collaborative Correlation Learning-based Semi-Supervised Multi-Label Feature Selection [25.195711274756334]
We propose an Adaptive Collaborative Correlation lEarning-based Semi-Supervised Multi-label Feature Selection (Access-MFS) method to address these issues.
Specifically, a generalized regression model equipped with an extended uncorrelated constraint is introduced to select discriminative yet uncorrelated features.
The instance correlation and label correlation are integrated into the proposed regression model to adaptively learn both the sample similarity graph and the label similarity graph.
arXiv Detail & Related papers (2024-06-18T01:47:38Z) - MuRAL: Multi-Scale Region-based Active Learning for Object Detection [20.478741635006116]
We propose a novel approach called Multi-scale Region-based Active Learning (MuRAL) for object detection.
MuRAL identifies informative regions of various scales to reduce annotation costs for well-learned objects.
Our proposed method surpasses all existing coarse-grained and fine-grained baselines on Cityscapes and MS COCO datasets.
arXiv Detail & Related papers (2023-03-29T12:52:27Z) - One Positive Label is Sufficient: Single-Positive Multi-Label Learning
with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z) - Meta-Learning for Multi-Label Few-Shot Classification [38.222736913855115]
This work targets the problem of multi-label meta-learning, where a model learns to predict multiple labels within a query.
We introduce a neural module to estimate the label count of a given sample by exploiting the relational inference.
Overall, our thorough experiments suggest that the proposed label-propagation algorithm, in conjunction with the neural label count module (NLC), should be considered the method of choice.
arXiv Detail & Related papers (2021-10-26T08:47:48Z) - Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z) - Few-shot Learning for Multi-label Intent Detection [59.66787898744991]
State-of-the-art work estimates label-instance relevance scores and uses a threshold to select multiple associated intent labels.
Experiments on two datasets show that the proposed model significantly outperforms strong baselines in both one-shot and five-shot settings.
arXiv Detail & Related papers (2020-10-11T14:42:18Z) - Interaction Matching for Long-Tail Multi-Label Classification [57.262792333593644]
We present an elegant and effective approach for addressing limitations in existing multi-label classification models.
By performing soft n-gram interaction matching, we match labels with natural language descriptions.
arXiv Detail & Related papers (2020-05-18T15:27:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.