Multi-label learning for dynamic model type recommendation
- URL: http://arxiv.org/abs/2004.00558v1
- Date: Wed, 1 Apr 2020 16:42:12 GMT
- Title: Multi-label learning for dynamic model type recommendation
- Authors: Mariana A. Souza, Robert Sabourin, George D. C. Cavalcanti and Rafael M. O. Cruz
- Abstract summary: We propose a problem-independent dynamic base-classifier model recommendation for the online local pool (OLP) technique.
Our proposed framework builds a multi-label meta-classifier responsible for recommending a set of relevant model types.
Experimental results show that different data distributions favored different model types on a local scope.
- Score: 13.304462985219237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic selection techniques aim at selecting, for each test
sample, the classifiers that are experts in its local region in order to
perform its classification. While generating the classifiers on a local scope,
as in the online local pool (OLP) technique, may make it easier to single out
the locally competent ones, using the same base-classifier model over uneven
data distributions may restrict the local level of competence, since each
region may have a distribution that favors one model over the others. Thus, in
this work we propose a problem-independent dynamic base-classifier model
recommendation for the OLP technique, which uses information on the behavior
of a portfolio of models over the samples of different problems to recommend
one (or several) of them in a per-instance
manner. Our proposed framework builds a multi-label meta-classifier responsible
for recommending a set of relevant model types based on the local data
complexity of the region surrounding each test sample. The OLP technique then
produces a local pool with the model that yields the highest probability score
of the meta-classifier. Experimental results show that different data
distributions favored different model types on a local scope. Moreover, based
on the performance of an ideal model type selector, it was observed that there
is a clear advantage in choosing a relevant model type for each test instance.
Overall, the proposed model type recommender system yielded a statistically
similar performance to the original OLP with a fixed base-classifier model. Given
the novelty of the approach and the gap in performance between the proposed
framework and the ideal selector, we regard this as a promising research
direction. Code available at
github.com/marianaasouza/dynamic-model-recommender.
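The per-instance recommendation pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the meta-features, the model-type portfolio, and the relevance labels (which in the paper are derived from how each model type actually performs in each region) are all placeholder assumptions.

```python
# Illustrative sketch of per-instance model-type recommendation.
# The meta-features, portfolio names, and relevance labels below are
# placeholder assumptions, not the paper's actual implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

MODEL_TYPES = ["decision_tree", "perceptron", "knn"]  # hypothetical portfolio

def local_meta_features(X_train, y_train, x, k=7):
    """Toy descriptors of the region around x (stand-ins for the
    data-complexity measures the paper computes per neighborhood)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)  # refit per call: fine for a sketch
    idx = nn.kneighbors([x], return_distance=False)[0]
    class_balance = y_train[idx].mean()   # local class proportion
    spread = X_train[idx].std()           # local feature dispersion
    return np.array([class_balance, spread])

# Meta-training set: local meta-features -> multi-label "relevant model types".
# Random placeholder labels here; in the paper they encode which model types
# performed well on each sample's region across different problems.
Z = np.array([local_meta_features(X, y, x) for x in X])
Y_meta = rng.integers(0, 2, size=(len(X), len(MODEL_TYPES)))

meta_clf = MultiOutputClassifier(LogisticRegression()).fit(Z, Y_meta)

def recommend(x):
    """Return the model type with the highest relevance probability,
    which the OLP technique would then use to build the local pool."""
    z = local_meta_features(X, y, x).reshape(1, -1)
    probs = [est.predict_proba(z)[0, 1] for est in meta_clf.estimators_]
    return MODEL_TYPES[int(np.argmax(probs))]

print("recommended model type:", recommend(X[0]))
```

In the actual framework the recommended type parameterizes the OLP's locally generated pool; here the argmax simply names one member of the hypothetical portfolio.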
Related papers
- Embedding-based statistical inference on generative models [10.948308354932639]
We extend results related to embedding-based representations of generative models to classical statistical inference settings.
We demonstrate that using the perspective space as the basis of a notion of "similar" is effective for multiple model-level inference tasks.
arXiv Detail & Related papers (2024-10-01T22:28:39Z)
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- A Two-Phase Recall-and-Select Framework for Fast Model Selection [13.385915962994806]
We propose a two-phase (coarse-recall and fine-selection) model selection framework.
It aims to enhance the efficiency of selecting a robust model by leveraging the models' training performances on benchmark datasets.
It has been demonstrated that the proposed methodology facilitates the selection of a high-performing model about 3x faster than conventional baseline methods.
arXiv Detail & Related papers (2024-03-28T14:44:44Z)
- Universal Semi-supervised Model Adaptation via Collaborative Consistency Training [92.52892510093037]
We introduce a realistic and challenging domain adaptation problem called Universal Semi-supervised Model Adaptation (USMA)
We propose a collaborative consistency training framework that regularizes the prediction consistency between two models.
Experimental results demonstrate the effectiveness of our method on several benchmark datasets.
arXiv Detail & Related papers (2023-07-07T08:19:40Z)
- Clustering Indices based Automatic Classification Model Selection [16.096824533334352]
We propose a novel method for automatic classification model selection from a set of candidate model classes.
We compute the dataset clustering indices and directly predict the expected classification performance using the learned regressor.
We also propose an end-to-end Automated ML system for data classification based on our model selection method.
arXiv Detail & Related papers (2023-05-23T10:52:37Z)
- Improving Heterogeneous Model Reuse by Density Estimation [105.97036205113258]
This paper studies multiparty learning, aiming to learn a model using the private data of different participants.
Model reuse is a promising solution for multiparty learning, assuming that a local model has been trained for each party.
arXiv Detail & Related papers (2023-05-23T09:46:54Z)
- A Prototype-Oriented Clustering for Domain Shift with Source Privacy [66.67700676888629]
We introduce Prototype-oriented Clustering with Distillation (PCD) to improve the performance and applicability of existing methods.
PCD first constructs a source clustering model by aligning the distributions of prototypes and data.
It then distills the knowledge to the target model through cluster labels provided by the source model while simultaneously clustering the target data.
arXiv Detail & Related papers (2023-02-08T00:15:35Z)
- PAMI: partition input and aggregate outputs for model interpretation [69.42924964776766]
In this study, a simple yet effective visualization framework called PAMI is proposed, based on the observation that deep learning models often aggregate features from local regions when making predictions.
The basic idea is to mask the majority of the input and use the corresponding model output as the relative contribution of the preserved input part to the original model prediction.
Extensive experiments on multiple tasks confirm the proposed method performs better than existing visualization approaches in more precisely finding class-specific input regions.
arXiv Detail & Related papers (2023-02-07T08:48:34Z)
- A parallelizable model-based approach for marginal and multivariate clustering [0.0]
This paper develops a clustering method that takes advantage of the sturdiness of model-based clustering.
We tackle this issue by specifying a finite mixture model per margin that allows each margin to have a different number of clusters.
The proposed approach is computationally appealing as well as more tractable for moderate to high dimensions than a "full" (joint) model-based clustering approach.
arXiv Detail & Related papers (2022-12-07T23:54:41Z)
- Locate This, Not That: Class-Conditioned Sound Event DOA Estimation [50.74947937253836]
We propose an alternative class-conditioned SELD model for situations where we may not be interested in all classes all of the time.
This class-conditioned SELD model takes as input the spatial and spectral features from the sound file, and also a one-hot vector indicating the class we are currently interested in localizing.
arXiv Detail & Related papers (2022-03-08T16:49:15Z)
- Semi-nonparametric Latent Class Choice Model with a Flexible Class Membership Component: A Mixture Model Approach [6.509758931804479]
The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification.
Results show that mixture models improve the overall performance of latent class choice models.
arXiv Detail & Related papers (2020-07-06T13:19:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.