Semi-Supervised Hierarchical Multi-Label Classifier Based on Local Information
- URL: http://arxiv.org/abs/2405.00184v1
- Date: Tue, 30 Apr 2024 20:16:40 GMT
- Title: Semi-Supervised Hierarchical Multi-Label Classifier Based on Local Information
- Authors: Jonathan Serrano-Pérez, L. Enrique Sucar
- Abstract summary: The semi-supervised hierarchical multi-label classifier based on local information (SSHMC-BLI) builds pseudo-labels for each unlabeled instance from the label paths of its labeled neighbors.
Experiments on 12 challenging datasets from functional genomics show that using unlabeled data alongside labeled data can improve the performance of a supervised hierarchical classifier trained only on labeled data.
- Score: 1.6574413179773761
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Scarcity of labeled data is a common problem in supervised classification, since hand-labeling can be time-consuming, expensive, or difficult; on the other hand, large amounts of unlabeled data are usually available. The scarcity of labeled data is even more acute in hierarchical classification, because the data at a node is split among its children, leaving few instances associated with the deepest nodes of the hierarchy. In this work we propose the semi-supervised hierarchical multi-label classifier based on local information (SSHMC-BLI), which can be trained with labeled and unlabeled data to perform hierarchical classification tasks. The method can be applied to any type of hierarchical problem; here we focus on the most difficult case: DAG-type hierarchies, where instances can be associated with multiple paths of labels, which can end at an internal node. SSHMC-BLI builds pseudo-labels for each unlabeled instance from the label paths of its labeled neighbors, while checking whether the unlabeled instance is similar to those neighbors. Experiments on 12 challenging datasets from functional genomics show that using unlabeled data along with labeled data can improve the performance of a supervised hierarchical classifier trained only on labeled data, with statistically significant gains.
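The neighbor-based pseudo-labeling step can be illustrated with a minimal sketch. This is not the authors' exact procedure: the consensus rule here is a simple per-label majority vote over the k nearest labeled neighbors, with a distance threshold `tau` standing in for the paper's similarity check; the function name and both parameters are illustrative.

```python
import numpy as np

def pseudo_label(X_lab, Y_lab, X_unl, k=2, tau=1.0):
    """Assign pseudo-labels to unlabeled instances from their k
    nearest labeled neighbors. A label is kept iff a strict
    majority of neighbors carry it; instances whose mean neighbor
    distance exceeds tau are considered too dissimilar and left
    unlabeled (None). A simplified stand-in for SSHMC-BLI's
    path-based rule."""
    pseudo = []
    for x in X_unl:
        d = np.linalg.norm(X_lab - x, axis=1)   # distances to labeled set
        nn = np.argsort(d)[:k]                  # k nearest neighbors
        if d[nn].mean() > tau:                  # similarity check
            pseudo.append(None)
            continue
        votes = Y_lab[nn].sum(axis=0)           # per-label neighbor count
        pseudo.append((2 * votes > k).astype(int))
    return pseudo
```

Pseudo-labeled instances produced this way would then be added to the training set of the hierarchical classifier.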
Related papers
- Towards Imbalanced Large Scale Multi-label Classification with Partially Annotated Labels [8.977819892091]
Multi-label classification is a widely encountered problem in daily life, where an instance can be associated with multiple classes.
In this work, we address the issue of label imbalance and investigate how to train neural networks using partial labels.
arXiv Detail & Related papers (2023-07-31T21:50:48Z)
- SEAL: Simultaneous Label Hierarchy Exploration And Learning [9.701914280306118]
We propose a new framework that explores the label hierarchy by augmenting the observed labels with latent labels that follow a prior hierarchical structure.
Our approach uses a 1-Wasserstein metric over the tree metric space as an objective function, which enables us to simultaneously learn a data-driven label hierarchy and perform (semi-supervised) learning.
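On a tree metric, the 1-Wasserstein distance has a closed form: the sum over edges of the edge weight times the absolute net mass in the subtree below that edge. A small sketch of that formula (not SEAL's implementation; `parent` and `weight` are illustrative encodings of the tree):

```python
def tree_wasserstein(mu, nu, parent, weight):
    """1-Wasserstein distance between distributions mu and nu on
    the nodes of a rooted tree: sum over edges of edge weight times
    |net subtree mass difference|. Assumes each node's index is
    larger than its parent's; parent[root] == -1; weight[v] is the
    length of the edge from v to parent[v]."""
    sub = [m - n for m, n in zip(mu, nu)]
    for v in range(len(parent) - 1, -1, -1):    # children before parents
        if parent[v] >= 0:
            sub[parent[v]] += sub[v]            # fold subtree mass upward
    return sum(weight[v] * abs(sub[v])
               for v in range(len(parent)) if parent[v] >= 0)
```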
arXiv Detail & Related papers (2023-04-26T08:31:59Z)
- Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z)
- TagRec++: Hierarchical Label Aware Attention Network for Question Categorization [0.3683202928838613]
Online learning systems organize content according to a well-defined hierarchical taxonomy.
The task of categorizing inputs under these hierarchical labels is usually cast as a flat multi-class classification problem.
We formulate the task as a dense retrieval problem to retrieve the appropriate hierarchical labels for each content.
arXiv Detail & Related papers (2022-08-10T05:08:37Z)
- Large Loss Matters in Weakly Supervised Multi-Label Classification [50.262533546999045]
We first regard unobserved labels as negative labels, casting the weakly supervised task into noisy multi-label classification.
We propose novel methods that reject or correct large-loss samples to prevent the model from memorizing the noisy labels.
This methodology works well in practice, validating that treating large losses properly matters in weakly supervised multi-label classification.
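The rejection idea can be sketched as a mask over per-label losses. This is a generic large-loss rejection, not necessarily the paper's exact schedule; `reject_frac` is an illustrative parameter:

```python
import numpy as np

def large_loss_mask(losses, reject_frac=0.1):
    """Given a matrix of per-sample, per-label losses (computed
    with unobserved labels assumed negative), zero out the largest
    reject_frac fraction of entries, which are the most likely
    false negatives, so they do not contribute to the gradient."""
    flat = losses.ravel()
    k = int(len(flat) * reject_frac)
    if k == 0:
        return np.ones_like(losses)
    thresh = np.partition(flat, -k)[-k]         # k-th largest loss
    return (losses < thresh).astype(losses.dtype)
```

In training, the masked loss `(losses * mask).sum()` replaces the plain sum, so suspiciously large losses are ignored.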
arXiv Detail & Related papers (2022-06-08T08:30:24Z)
- Use All The Labels: A Hierarchical Multi-Label Contrastive Learning Framework [75.79736930414715]
We present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes.
We introduce novel hierarchy preserving losses, which jointly apply a hierarchical penalty to the contrastive loss, and enforce the hierarchy constraint.
arXiv Detail & Related papers (2022-04-27T21:41:44Z)
- Evaluating Multi-label Classifiers with Noisy Labels [0.7868449549351487]
In the real world, it is more common to deal with noisy datasets than clean datasets.
We present a Context-Based Multi-Label-Classifier (CbMLC) that effectively handles noisy labels.
We show CbMLC yields substantial improvements over the previous methods in most cases.
arXiv Detail & Related papers (2021-02-16T19:50:52Z)
- MATCH: Metadata-Aware Text Classification in A Large Hierarchy [60.59183151617578]
MATCH is an end-to-end framework that leverages both metadata and hierarchy information.
We propose different ways to regularize the parameters and output probability of each child label by its parents.
Experiments on two massive text datasets with large-scale label hierarchies demonstrate the effectiveness of MATCH.
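One simple way to regularize a child label's output probability by its parent's, in the spirit described above (not necessarily MATCH's exact regularizer; `parent` is an illustrative encoding of the label hierarchy):

```python
def hierarchy_penalty(probs, parent):
    """Squared-hinge penalty for hierarchy violations: a child
    label's predicted probability should not exceed its parent's.
    parent[c] is the parent index of label c, or -1 for a root."""
    pen = 0.0
    for c, p in enumerate(parent):
        if p >= 0:
            pen += max(0.0, probs[c] - probs[p]) ** 2
    return pen
```

Added to the classification loss, this term pushes predictions toward hierarchy-consistent probabilities.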
arXiv Detail & Related papers (2021-02-15T05:23:08Z)
- PseudoSeg: Designing Pseudo Labels for Semantic Segmentation [78.35515004654553]
We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
arXiv Detail & Related papers (2020-10-19T17:59:30Z)
- Multilabel Classification by Hierarchical Partitioning and Data-dependent Grouping [33.48217977134427]
We exploit the sparsity of label vectors and the hierarchical structure to embed them in low-dimensional space.
We present a novel data-dependent grouping approach, where we use a group construction based on a low-rank Nonnegative Matrix Factorization.
We then present a hierarchical partitioning approach that exploits the label hierarchy in large scale problems to divide up the large label space and create smaller sub-problems.
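The NMF-based grouping step can be sketched roughly as follows. This is a generic rank-r nonnegative matrix factorization with multiplicative updates, standing in for the paper's data-dependent grouping; the function name and parameters are illustrative:

```python
import numpy as np

def nmf_label_groups(Y, r, iters=200, seed=0):
    """Group the L labels into r groups by factorizing the n x L
    label matrix Y ~ W @ H (W: n x r, H: r x L) with Lee-Seung
    multiplicative updates, then assigning each label to the factor
    with its largest coefficient in H."""
    rng = np.random.default_rng(seed)
    n, L = Y.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, L)) + 0.1
    eps = 1e-9                                  # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ Y) / (W.T @ W @ H + eps)
        W *= (Y @ H.T) / (W @ H @ H.T + eps)
    return H.argmax(axis=0)                     # group id per label
```

Each resulting group then defines one of the smaller sub-problems mentioned above.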
arXiv Detail & Related papers (2020-06-24T22:23:39Z)
- Interaction Matching for Long-Tail Multi-Label Classification [57.262792333593644]
We present an elegant and effective approach for addressing limitations in existing multi-label classification models.
By performing soft n-gram interaction matching, we match labels with natural language descriptions.
arXiv Detail & Related papers (2020-05-18T15:27:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.