Multi-label Few/Zero-shot Learning with Knowledge Aggregated from
Multiple Label Graphs
- URL: http://arxiv.org/abs/2010.07459v1
- Date: Thu, 15 Oct 2020 01:15:43 GMT
- Title: Multi-label Few/Zero-shot Learning with Knowledge Aggregated from
Multiple Label Graphs
- Authors: Jueqing Lu, Lan Du, Ming Liu, Joanna Dipnall
- Abstract summary: We present a simple multi-graph aggregation model that fuses knowledge from multiple label graphs encoding different semantic label relationships.
We show that methods equipped with the multi-graph knowledge aggregation achieve significant performance improvement across almost all the measures on few/zero-shot labels.
- Score: 8.44680447457879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few/Zero-shot learning is a major challenge in many classification tasks,
where a classifier is required to recognise instances of classes that have very
few or even no training samples. It becomes more difficult in multi-label
classification, where each instance is labelled with more than one class. In
this paper, we present a simple multi-graph aggregation model that fuses
knowledge from multiple label graphs encoding different semantic label
relationships in order to study how the aggregated knowledge can benefit
multi-label zero/few-shot document classification. The model utilises three
kinds of semantic information, i.e., the pre-trained word embeddings, label
descriptions, and pre-defined label relations. Experimental results on
two large clinical datasets (i.e., MIMIC-II and MIMIC-III) and the EU
legislation dataset show that methods equipped with the multi-graph knowledge
aggregation achieve significant performance improvement across almost all the
measures on few/zero-shot labels.
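The aggregation idea in the abstract — running the same propagation over several label graphs and fusing the per-graph label representations — can be sketched roughly as follows. This is a minimal illustrative sketch in plain NumPy, not the authors' implementation; the function names (`aggregate_label_graphs`, `propagate`) and the mean-fusion choice are assumptions for illustration.

```python
import numpy as np

def normalize_adj(a):
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    a = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate(features, adj, weight):
    # One graph-convolution step with ReLU: relu(A_hat X W)
    return np.maximum(adj @ features @ weight, 0.0)

def aggregate_label_graphs(label_feats, adjs, weights):
    # Propagate label features through each label graph separately,
    # then fuse the per-graph views by averaging (one simple fusion choice).
    views = [propagate(label_feats, normalize_adj(a), w)
             for a, w in zip(adjs, weights)]
    return np.mean(views, axis=0)

rng = np.random.default_rng(0)
n_labels, d_in, d_out = 5, 8, 4
label_feats = rng.normal(size=(n_labels, d_in))  # e.g. label-description embeddings
# Three hypothetical label graphs (e.g. co-occurrence, semantic, hierarchy)
adjs = [rng.integers(0, 2, size=(n_labels, n_labels)).astype(float) for _ in range(3)]
adjs = [np.maximum(a, a.T) for a in adjs]  # make each graph undirected
weights = [rng.normal(size=(d_in, d_out)) for _ in adjs]
fused = aggregate_label_graphs(label_feats, adjs, weights)
print(fused.shape)  # (5, 4): one fused embedding per label
```

In a full classifier, the fused label embeddings would typically be matched against document representations (e.g. via dot product) to score each label, which is what allows scoring labels with few or no training examples.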
Related papers
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Deep Partial Multi-Label Learning with Graph Disambiguation [27.908565535292723]
We propose a novel deep Partial multi-Label model with grAph-disambIguatioN (PLAIN).
Specifically, we introduce the instance-level and label-level similarities to recover label confidences.
At each training epoch, labels are propagated on the instance and label graphs to produce relatively accurate pseudo-labels.
arXiv Detail & Related papers (2023-05-10T04:02:08Z)
- Reliable Representations Learning for Incomplete Multi-View Partial Multi-Label Classification [78.15629210659516]
In this paper, we propose an incomplete multi-view partial multi-label classification network named RANK.
We break through the view-level weights inherent in existing methods and propose a quality-aware sub-network to dynamically assign quality scores to each view of each sample.
Our model is not only able to handle complete multi-view multi-label datasets, but also works on datasets with missing instances and labels.
arXiv Detail & Related papers (2023-03-30T03:09:25Z)
- An Effective Approach for Multi-label Classification with Missing Labels [8.470008570115146]
We propose a pseudo-label based approach to reduce the cost of annotation without bringing additional complexity to the classification networks.
By designing a novel loss function, we are able to relax the requirement that each instance must contain at least one positive label.
We show that our method can handle the imbalance between positive labels and negative labels, while still outperforming existing missing-label learning approaches.
arXiv Detail & Related papers (2022-10-24T23:13:57Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning
with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Meta-Learning for Multi-Label Few-Shot Classification [38.222736913855115]
This work targets the problem of multi-label meta-learning, where a model learns to predict multiple labels within a query.
We introduce a neural module to estimate the label count of a given sample by exploiting relational inference.
Overall, our thorough experiments suggest that the proposed label-propagation algorithm, in conjunction with the neural label count (NLC) module, should be considered the method of choice.
arXiv Detail & Related papers (2021-10-26T08:47:48Z)
- Interpretation of multi-label classification models using shapley values [0.5482532589225552]
This work extends the explanation of multi-label classification tasks by using the SHAP methodology.
The experiments provide a comprehensive comparison of different algorithms on well-known multi-label datasets.
arXiv Detail & Related papers (2021-04-21T12:51:12Z)
- Few-shot Learning for Multi-label Intent Detection [59.66787898744991]
State-of-the-art work estimates label-instance relevance scores and uses a threshold to select multiple associated intent labels.
Experiments on two datasets show that the proposed model significantly outperforms strong baselines in both one-shot and five-shot settings.
arXiv Detail & Related papers (2020-10-11T14:42:18Z)
- An Empirical Study on Large-Scale Multi-Label Text Classification
Including Few and Zero-Shot Labels [49.036212158261215]
Large-scale Multi-label Text Classification (LMTC) has a wide range of Natural Language Processing (NLP) applications.
Current state-of-the-art LMTC models employ Label-Wise Attention Networks (LWANs).
We show that hierarchical methods based on Probabilistic Label Trees (PLTs) outperform LWANs.
We propose a new state-of-the-art method which combines BERT with LWANs.
arXiv Detail & Related papers (2020-10-04T18:55:47Z)
- Knowledge-Guided Multi-Label Few-Shot Learning for General Image
Recognition [75.44233392355711]
The KGGR framework exploits prior knowledge of statistical label correlations with deep neural networks.
It first builds a structured knowledge graph to correlate different labels based on statistical label co-occurrence.
Then, it introduces the label semantics to guide learning semantic-specific features.
It exploits a graph propagation network to explore graph node interactions.
arXiv Detail & Related papers (2020-09-20T15:05:29Z)
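The first step in the KGGR summary above — building a label graph from statistical label co-occurrence — is commonly implemented by thresholding conditional label probabilities estimated from the training labels. A minimal sketch under that assumption (the function name `cooccurrence_graph` and the threshold value are hypothetical, not taken from the paper):

```python
import numpy as np

def cooccurrence_graph(label_matrix, threshold=0.5):
    # label_matrix: (n_samples, n_labels) binary multi-label indicators
    counts = label_matrix.T @ label_matrix          # joint counts C_ij
    occur = np.diag(counts).astype(float)           # per-label occurrence counts
    # Conditional probability P(label_j | label_i); graph may be directed
    prob = counts / np.maximum(occur[:, None], 1.0)
    np.fill_diagonal(prob, 0.0)                     # drop self-loops
    return (prob >= threshold).astype(float)        # binarised adjacency

# Toy training labels: 4 samples, 3 labels
y = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1],
              [1, 0, 0]])
adj = cooccurrence_graph(y, threshold=0.5)
print(adj)  # edge i -> j where P(j | i) >= 0.5
```

Because conditional probabilities are not symmetric, the resulting adjacency can be directed: a frequent label may imply a rarer one far more strongly than the reverse.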
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.