Interpretation of multi-label classification models using shapley values
- URL: http://arxiv.org/abs/2104.10505v1
- Date: Wed, 21 Apr 2021 12:51:12 GMT
- Title: Interpretation of multi-label classification models using shapley values
- Authors: Shikun Chen
- Abstract summary: This work further extends the explanation of multi-label classification task by using the SHAP methodology.
The experiment demonstrates a comprehensive comparison of different algorithms on well-known multi-label datasets.
- Score: 0.5482532589225552
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-label classification is a type of classification task used when
there are two or more classes and the data point we want to predict may belong
to none of the classes or to all of them at the same time. In the real world, many
applications actually involve multiple labels, including information
retrieval, multimedia content annotation, web mining, and so on. A game
theory-based framework known as SHapley Additive exPlanations (SHAP) has been
applied to explain various supervised learning models without being aware of
the exact model. Herein, this work further extends the explanation of the
multi-label classification task by using the SHAP methodology. The experiment
demonstrates a comprehensive comparison of different algorithms on well-known
multi-label datasets and shows the usefulness of the interpretation.
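The abstract's core idea can be illustrated with the game-theoretic quantity SHAP approximates: the exact Shapley value of each feature, computed per label in a multi-label setting. The sketch below enumerates all feature coalitions for a tiny hypothetical two-label linear "model" (the weights, feature values, and zero-imputation baseline are all assumptions for illustration, not from the paper; real SHAP implementations approximate this sum because it is exponential in the number of features).

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all feature coalitions.

    value_fn(coalition) -> model output when only the features in
    `coalition` (a frozenset of indices) are treated as present.
    Exponential in n_features, so only usable for toy models.
    """
    players = range(n_features)
    phi = [0.0] * n_features
    for i in players:
        others = [j for j in players if j != i]
        for k in range(len(others) + 1):
            # Weight = |S|! (n - |S| - 1)! / n! for each coalition S not containing i.
            weight = factorial(k) * factorial(n_features - k - 1) / factorial(n_features)
            for subset in combinations(others, k):
                s = frozenset(subset)
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

# Hypothetical toy multi-label "model": two labels, three features,
# each label scored by its own linear weights (all values assumed).
x = [1.0, 0.5, 2.0]                      # feature values of one sample
weights = {"label_a": [0.3, 0.0, 0.7],
           "label_b": [0.0, 1.0, 0.1]}

def make_value_fn(w):
    # Features absent from the coalition are imputed as 0 (a common baseline choice).
    return lambda coalition: sum(w[j] * x[j] for j in coalition)

for label, w in weights.items():
    phi = shapley_values(make_value_fn(w), len(x))
    print(label, [round(p, 3) for p in phi])
```

For an additive model with a zero baseline, the Shapley value of feature j reduces to w_j * x_j, and the values always sum to the model output minus the baseline (the efficiency property), which is why per-label Shapley values give a faithful decomposition of each label's score.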
Related papers
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Reliable Representations Learning for Incomplete Multi-View Partial Multi-Label Classification [78.15629210659516]
In this paper, we propose an incomplete multi-view partial multi-label classification network named RANK.
We break through the view-level weights inherent in existing methods and propose a quality-aware sub-network to dynamically assign quality scores to each view of each sample.
Our model is not only able to handle complete multi-view multi-label datasets, but also works on datasets with missing instances and labels.
arXiv Detail & Related papers (2023-03-30T03:09:25Z)
- A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring [57.5099555438223]
We study multi-label classification in the continual scenario for the first time.
We propose an efficient approach that has a logarithmic complexity with regard to the number of tasks.
We validate our approach on a real-world multi-label forecasting problem from the packaging industry.
arXiv Detail & Related papers (2022-08-08T15:58:39Z)
- Multi-Label Image Classification with Contrastive Learning [57.47567461616912]
We show that a direct application of contrastive learning can hardly improve performance in multi-label cases.
We propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting.
arXiv Detail & Related papers (2021-07-24T15:00:47Z)
- Open-Set Representation Learning through Combinatorial Embedding [62.05670732352456]
We are interested in identifying novel concepts in a dataset through representation learning based on the examples in both labeled and unlabeled classes.
We propose a learning approach, which naturally clusters examples in unseen classes using the compositional knowledge given by multiple supervised meta-classifiers on heterogeneous label spaces.
The proposed algorithm discovers novel concepts via a joint optimization of enhancing the discriminativeness of unseen classes as well as learning representations of known classes that are generalizable to novel ones.
arXiv Detail & Related papers (2021-06-29T11:51:57Z)
- ECKPN: Explicit Class Knowledge Propagation Network for Transductive Few-shot Learning [53.09923823663554]
Class-level knowledge can be easily learned by humans from just a handful of samples.
We propose an Explicit Class Knowledge Propagation Network (ECKPN) to address this problem.
We conduct extensive experiments on four few-shot classification benchmarks, and the experimental results show that the proposed ECKPN significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-06-16T02:29:43Z)
- Multi-label Few/Zero-shot Learning with Knowledge Aggregated from Multiple Label Graphs [8.44680447457879]
We present a simple multi-graph aggregation model that fuses knowledge from multiple label graphs encoding different semantic label relationships.
We show that methods equipped with the multi-graph knowledge aggregation achieve significant performance improvement across almost all the measures on few/zero-shot labels.
arXiv Detail & Related papers (2020-10-15T01:15:43Z)
- Learning Image Labels On-the-fly for Training Robust Classification Models [13.669654965671604]
We show how noisy annotations (e.g., from different algorithm-based labelers) can be utilized together and mutually benefit the learning of classification tasks.
A meta-training-based label-sampling module is designed to attend to the labels that benefit model learning the most through additional back-propagation processes.
arXiv Detail & Related papers (2020-09-22T05:38:44Z)
- Metrics for Multi-Class Classification: an Overview [0.9176056742068814]
Classification tasks involving more than two classes are known as "multi-class classification".
Performance indicators are very useful when the aim is to evaluate and compare different classification models or machine learning techniques.
arXiv Detail & Related papers (2020-08-13T08:41:44Z)
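The multi-class performance indicators surveyed in the entry above differ mainly in how per-class errors are pooled. A minimal sketch of the two most common averaging schemes, macro- and micro-averaged F1 (the example labels are invented for illustration; these definitions match standard usage, not a specific formulation from the paper):

```python
from collections import Counter

def per_class_counts(y_true, y_pred):
    """True-positive, false-positive, and false-negative counts per class."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class gets a false positive
            fn[t] += 1  # true class gets a false negative
    return tp, fp, fn

def macro_f1(y_true, y_pred, classes):
    """Unweighted mean of per-class F1 scores: every class counts equally."""
    tp, fp, fn = per_class_counts(y_true, y_pred)
    f1s = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(classes)

def micro_f1(y_true, y_pred):
    """F1 over pooled counts: frequent classes dominate."""
    tp, fp, fn = per_class_counts(y_true, y_pred)
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    prec = TP / (TP + FP) if TP + FP else 0.0
    rec = TP / (TP + FN) if TP + FN else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

y_true = ["a", "a", "a", "b", "b", "c"]
y_pred = ["a", "a", "b", "b", "c", "c"]
print(macro_f1(y_true, y_pred, ["a", "b", "c"]))
print(micro_f1(y_true, y_pred))
```

In the single-label multi-class setting every misclassification contributes one false positive and one false negative, so micro-F1 coincides with plain accuracy, while macro-F1 exposes poor performance on rare classes.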
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.