A Top-down Supervised Learning Approach to Hierarchical Multi-label
Classification in Networks
- URL: http://arxiv.org/abs/2203.12569v1
- Date: Wed, 23 Mar 2022 17:29:17 GMT
- Title: A Top-down Supervised Learning Approach to Hierarchical Multi-label
Classification in Networks
- Authors: Miguel Romero, Jorge Finke, Camilo Rocha
- Abstract summary: This paper presents a general prediction model for hierarchical multi-label classification (HMC), where the attributes to be inferred can be specified as a strict poset.
It is based on a top-down classification approach that addresses hierarchical multi-label classification with supervised learning by building a local classifier per class.
The proposed model is showcased with a case study on the prediction of gene functions for Oryza sativa Japonica, a variety of rice.
- Score: 0.21485350418225244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Node classification is the task of inferring or predicting missing node
attributes from information available for other nodes in a network. This paper
presents a general prediction model for hierarchical multi-label classification
(HMC), where the attributes to be inferred can be specified as a strict poset.
It is based on a top-down classification approach that addresses hierarchical
multi-label classification with supervised learning by building a local
classifier per class. The proposed model is showcased with a case study on the
prediction of gene functions for Oryza sativa Japonica, a variety of rice. It
is compared to the Hierarchical Binomial-Neighborhood, a probabilistic model,
by evaluating both approaches in terms of prediction performance and
computational cost. The results support the working hypothesis that the proposed
model can achieve good levels of prediction performance while scaling up relative
to the state of the art.
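To make the top-down, local-classifier-per-class scheme concrete, the sketch below illustrates the general idea under stated assumptions: the class hierarchy is given as a DAG of parent relations (a strict poset), one binary classifier is trained per class on the samples belonging to all of its parents, and prediction proceeds top-down so that a class is only evaluated for samples already predicted positive for every parent. The base learner (logistic regression), the training-set selection policy, and the data layout are illustrative choices, not the paper's exact implementation.

```python
# Illustrative sketch of top-down hierarchical multi-label classification with
# one local binary classifier per class. NOT the paper's implementation; the
# base learner (logistic regression) and the data layout are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

class TopDownHMC:
    def __init__(self, hierarchy):
        # hierarchy: dict mapping each class to the list of its parent classes
        # (empty list for roots). The class DAG must be acyclic (a strict poset).
        self.hierarchy = hierarchy
        self.classifiers = {}

    def fit(self, X, Y):
        # Y: dict mapping each class to a 0/1 label vector aligned with the rows of X.
        # Train the local classifier for a class only on samples that belong to
        # all of its parent classes.
        for c, parents in self.hierarchy.items():
            mask = np.ones(len(X), dtype=bool)
            for p in parents:
                mask &= Y[p].astype(bool)
            # Skip degenerate cases with a single label value; a full
            # implementation would handle these explicitly.
            if mask.sum() and 0 < Y[c][mask].sum() < mask.sum():
                self.classifiers[c] = LogisticRegression(max_iter=1000).fit(X[mask], Y[c][mask])

    def predict(self, X):
        # Top-down pass: a class can only be positive if all of its parents are.
        pred = {}
        for c in self._topological_order():
            allowed = np.ones(len(X), dtype=bool)
            for p in self.hierarchy[c]:
                allowed &= pred[p]
            pred[c] = np.zeros(len(X), dtype=bool)
            if c in self.classifiers and allowed.any():
                pred[c][allowed] = self.classifiers[c].predict(X[allowed]).astype(bool)
        return pred

    def _topological_order(self):
        # Simple Kahn-style ordering over the class DAG.
        remaining = dict(self.hierarchy)
        order = []
        while remaining:
            ready = [c for c, ps in remaining.items() if all(p not in remaining for p in ps)]
            if not ready:
                raise ValueError("hierarchy is not a DAG")
            order.extend(ready)
            for c in ready:
                del remaining[c]
        return order
```

In practice, the main design choices a full implementation would need to pin down are which negative samples each local classifier sees and how parent predictions are thresholded before being propagated downward.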
Related papers
- Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals [4.384272169863716]
Interpretability is crucial for machine learning algorithms in high-stakes medical applications.
Attri-Net is an inherently interpretable model for multi-label classification that provides local and global explanations.
arXiv Detail & Related papers (2024-06-08T13:52:02Z)
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- Inspecting class hierarchies in classification-based metric learning models [0.0]
We train a softmax classifier and three metric learning models with several training options on benchmark and real-world datasets.
We evaluate hierarchical inference performance by inspecting the learned class representatives, and we evaluate the hierarchy-informed performance, i.e., the classification and metric learning performance, with respect to predefined hierarchical structures.
arXiv Detail & Related papers (2023-01-26T12:40:12Z)
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
- Semi-supervised Predictive Clustering Trees for (Hierarchical) Multi-label Classification [2.706328351174805]
We propose a hierarchical multi-label classification method based on semi-supervised learning of predictive clustering trees.
We also extend the method towards ensemble learning and propose a method based on the random forest approach.
arXiv Detail & Related papers (2022-07-19T12:49:00Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution (a simplified sketch of the alternating-normalization idea appears after this list).
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and an adaptive neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
- Document Ranking with a Pretrained Sequence-to-Sequence Model [56.44269917346376]
We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words".
Our approach significantly outperforms an encoder-only model in a data-poor regime (a rough scoring sketch appears after this list).
arXiv Detail & Related papers (2020-03-14T22:29:50Z)
- Sampling Prediction-Matching Examples in Neural Networks: A Probabilistic Programming Approach [9.978961706999833]
We consider the problem of exploring the prediction level sets of a classifier using probabilistic programming.
We define a prediction level set to be the set of examples for which the predictor has the same specified prediction confidence.
We demonstrate this technique with experiments on a synthetic dataset and MNIST.
arXiv Detail & Related papers (2020-01-09T15:57:51Z)
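Referring back to the "When in Doubt" entry above, the following is a simplified sketch of the alternating-normalization idea as a post-processing step: given a matrix of predicted class probabilities and an estimate of the class prior, it alternately rescales columns toward the prior and renormalizes rows to sum to one. This is a generic Sinkhorn-style illustration, not the exact CAN procedure from that paper; the prior, iteration count, and toy data are assumptions.

```python
# Generic sketch of alternating normalization as a post-processing step for
# predicted class probabilities. A simplified illustration of the idea, not
# the exact CAN procedure described in the paper.
import numpy as np

def alternating_normalization(probs, class_prior, n_iters=10, eps=1e-12):
    """probs: (n_examples, n_classes) with rows summing to 1; class_prior: (n_classes,)."""
    P = probs.astype(float).copy()
    target_col = class_prior * len(P)              # expected column mass under the prior
    for _ in range(n_iters):
        P *= target_col / (P.sum(axis=0) + eps)    # scale columns toward the prior
        P /= (P.sum(axis=1, keepdims=True) + eps)  # renormalize each row to sum to 1
    return P

# Toy usage: a batch of predictions skewed toward class 0, re-balanced
# toward a uniform class prior.
preds = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.5, 0.3, 0.2],
                  [0.4, 0.35, 0.25]])
adjusted = alternating_normalization(preds, class_prior=np.full(3, 1 / 3))
print(adjusted.round(3))
```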
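Likewise, for the document-ranking entry above, here is a rough sketch of scoring relevance by having a sequence-to-sequence model generate a target word, assuming the Hugging Face transformers T5 interface. The checkpoint name, prompt template, and target words ("true"/"false") are placeholders rather than that paper's exact setup; a checkpoint actually fine-tuned for relevance generation would have to be substituted for the scores to be meaningful.

```python
# Sketch: score query-document relevance with a seq2seq model by reading the
# probability it assigns to a "true" vs. "false" target word. The checkpoint
# and prompt below are placeholders, not the paper's exact setup.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")            # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

def relevance_score(query: str, document: str) -> float:
    prompt = f"Query: {query} Document: {document} Relevant:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    # Ask the decoder for the logits of its first output token only.
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(input_ids=inputs.input_ids,
                       attention_mask=inputs.attention_mask,
                       decoder_input_ids=decoder_input_ids).logits[0, 0]
    true_id = tokenizer("true", add_special_tokens=False).input_ids[0]
    false_id = tokenizer("false", add_special_tokens=False).input_ids[0]
    # Relevance = probability mass on "true" relative to "false".
    probs = torch.softmax(logits[[true_id, false_id]], dim=0)
    return probs[0].item()

print(relevance_score("hierarchical multi-label classification",
                      "A top-down model that builds a local classifier per class."))
```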