Multi-Label Knowledge Distillation
- URL: http://arxiv.org/abs/2308.06453v1
- Date: Sat, 12 Aug 2023 03:19:08 GMT
- Title: Multi-Label Knowledge Distillation
- Authors: Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu,
Masashi Sugiyama, Sheng-Jun Huang
- Abstract summary: We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
- Score: 86.03990467785312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing knowledge distillation methods typically work by imparting the
knowledge of output logits or intermediate feature maps from the teacher
network to the student network, which is very successful in multi-class
single-label learning. However, these methods can hardly be extended to the
multi-label learning scenario, where each instance is associated with multiple
semantic labels, because the prediction probabilities do not sum to one and
feature maps of the whole example may ignore minor classes in such a scenario.
In this paper, we propose a novel multi-label knowledge distillation method. On
one hand, it exploits the informative semantic knowledge from the logits by
dividing the multi-label learning problem into a set of binary classification
problems; on the other hand, it enhances the distinctiveness of the learned
feature representations by leveraging the structural information of label-wise
embeddings. Experimental results on multiple benchmark datasets validate that
the proposed method avoids knowledge counteraction among labels, thus
achieving superior performance compared with diverse baseline methods. Our code is
available at: https://github.com/penghui-yang/L2D
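The logit-level idea above — treating each label as an independent binary classification problem so that teacher probabilities need not sum to one — can be sketched as a per-label binary KL divergence between teacher and student sigmoid outputs. This is a minimal illustrative sketch, not the authors' implementation (see their repository for that); the function names are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def binary_kl(p, q, eps=1e-7):
    # KL divergence between two Bernoulli distributions with success
    # probabilities p (teacher) and q (student). Probabilities are
    # clamped away from 0 and 1 for numerical stability.
    p = min(max(p, eps), 1.0 - eps)
    q = min(max(q, eps), 1.0 - eps)
    return p * math.log(p / q) + (1.0 - p) * math.log((1.0 - p) / (1.0 - q))

def multilabel_distillation_loss(teacher_logits, student_logits):
    """Average per-label binary KL between teacher and student.

    Each label is scored with its own sigmoid, so the per-label
    probabilities are independent and need not sum to one -- the
    property that makes softmax-based distillation ill-suited to
    multi-label learning.
    """
    assert len(teacher_logits) == len(student_logits)
    losses = [
        binary_kl(sigmoid(t), sigmoid(s))
        for t, s in zip(teacher_logits, student_logits)
    ]
    return sum(losses) / len(losses)
```

When the student matches the teacher exactly the loss is zero, and it grows as the per-label predictions diverge; in practice this term would be combined with the feature-level loss on label-wise embeddings described in the abstract.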
Related papers
- Determined Multi-Label Learning via Similarity-Based Prompt [12.428779617221366]
In multi-label classification, each training instance is associated with multiple class labels simultaneously.
To alleviate the heavy annotation burden, a novel labeling setting termed Determined Multi-Label Learning (DMLL) is proposed.
arXiv Detail & Related papers (2024-03-25T07:08:01Z)
- Query-Based Knowledge Sharing for Open-Vocabulary Multi-Label Classification [5.985859108787149]
Multi-label zero-shot learning is a non-trivial task in computer vision.
We propose a novel query-based knowledge sharing paradigm for this task.
Our framework significantly outperforms state-of-the-art methods on the zero-shot task by 5.9% and 4.5% mAP on NUS-WIDE and Open Images, respectively.
arXiv Detail & Related papers (2024-01-02T12:18:40Z)
- Reliable Representations Learning for Incomplete Multi-View Partial Multi-Label Classification [78.15629210659516]
In this paper, we propose an incomplete multi-view partial multi-label classification network named RANK.
We break through the view-level weights inherent in existing methods and propose a quality-aware sub-network to dynamically assign quality scores to each view of each sample.
Our model is not only able to handle complete multi-view multi-label datasets, but also works on datasets with missing instances and labels.
arXiv Detail & Related papers (2023-03-30T03:09:25Z)
- Open-Vocabulary Multi-Label Classification via Multi-modal Knowledge Transfer [55.885555581039895]
Multi-label zero-shot learning (ML-ZSL) focuses on transferring knowledge via a pre-trained textual label embedding.
We propose a novel open-vocabulary framework, named multimodal knowledge transfer (MKT) for multi-label classification.
arXiv Detail & Related papers (2022-07-05T08:32:18Z)
- Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach, called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z)
- Open-Set Representation Learning through Combinatorial Embedding [62.05670732352456]
We are interested in identifying novel concepts in a dataset through representation learning based on the examples in both labeled and unlabeled classes.
We propose a learning approach, which naturally clusters examples in unseen classes using the compositional knowledge given by multiple supervised meta-classifiers on heterogeneous label spaces.
The proposed algorithm discovers novel concepts via a joint optimization that enhances the discriminativeness of unseen classes while learning representations of known classes that generalize to novel ones.
arXiv Detail & Related papers (2021-06-29T11:51:57Z)
- Interpretation of multi-label classification models using shapley values [0.5482532589225552]
This work further extends the explanation of multi-label classification tasks by using the SHAP methodology.
The experiment demonstrates a comprehensive comparison of different algorithms on well-known multi-label datasets.
arXiv Detail & Related papers (2021-04-21T12:51:12Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- Multi-label Few/Zero-shot Learning with Knowledge Aggregated from Multiple Label Graphs [8.44680447457879]
We present a simple multi-graph aggregation model that fuses knowledge from multiple label graphs encoding different semantic label relationships.
We show that methods equipped with the multi-graph knowledge aggregation achieve significant performance improvement across almost all the measures on few/zero-shot labels.
arXiv Detail & Related papers (2020-10-15T01:15:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.