Improving Group Lasso for high-dimensional categorical data
- URL: http://arxiv.org/abs/2210.14021v2
- Date: Thu, 27 Oct 2022 05:28:33 GMT
- Title: Improving Group Lasso for high-dimensional categorical data
- Authors: Szymon Nowakowski, Piotr Pokarowski, Wojciech Rejchel
- Abstract summary: The Group Lasso is a well-known, efficient algorithm for selecting continuous or categorical variables.
We propose a two-step procedure to obtain a sparse solution of the Group Lasso.
We show that our method performs better than state-of-the-art algorithms with respect to prediction accuracy or model dimension.
- Score: 0.90238471756546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse modelling or model selection with categorical data is challenging even
for a moderate number of variables, because one parameter is roughly needed to
encode one category or level. The Group Lasso is a well-known, efficient
algorithm for selecting continuous or categorical variables, but the estimates
for the levels of a selected factor usually all differ. Therefore, a fitted model may not
be sparse, which makes the model interpretation difficult. To obtain a sparse
solution of the Group Lasso we propose the following two-step procedure: first,
we reduce the data dimensionality using the Group Lasso; then, to choose the final
model, we use an information criterion on a small family of models prepared by
clustering the levels of individual factors. We investigate the selection correctness
of the algorithm in a sparse high-dimensional scenario. We also test our method
on synthetic as well as real datasets and show that it performs better than
state-of-the-art algorithms with respect to prediction accuracy or model
dimension.
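Below is a minimal, hedged sketch of the two-step idea in Python: step 1 screens one-hot-encoded factors with a Group Lasso fitted by proximal gradient descent (ISTA); step 2 builds a small family of candidate models by hierarchically fusing the levels of each retained factor and picks a final model by BIC. This is our illustration, not the authors' implementation; the function names, the ISTA solver, and the use of BIC as the information criterion are all assumptions.
```python
# Illustrative sketch only; not the authors' code. Function names are ours.
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=1000):
    """ISTA for 0.5*||y - Xb||^2 + lam * sum_g ||b_g||_2, one group per factor."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - step * (X.T @ (X @ b - y))      # gradient step on the squared loss
        for g in groups:                        # block soft-thresholding per factor
            nrm = np.linalg.norm(z[g])
            z[g] *= max(0.0, 1.0 - step * lam / nrm) if nrm > 0 else 0.0
        b = z
    return b

def level_family(level_coefs):
    """Agglomerative 1-D clustering of one factor's level estimates: repeatedly
    fuse the two clusters with the closest means, recording every intermediate
    partition (the 'small family of models' scored below)."""
    level_coefs = np.asarray(level_coefs)
    clusters = [[i] for i in np.argsort(level_coefs)]   # singletons, sorted by estimate
    family = [[c[:] for c in clusters]]
    while len(clusters) > 1:
        means = np.array([level_coefs[c].mean() for c in clusters])
        j = int(np.argmin(np.diff(means)))              # closest adjacent pair
        clusters[j:j + 2] = [clusters[j] + clusters[j + 1]]
        family.append([c[:] for c in clusters])
    return family

def best_partition_by_bic(X_factor, y, family):
    """Refit least squares on each fused design and return the partition
    minimising BIC = n*log(RSS/n) + df*log(n)."""
    n = len(y)
    best_score, best_part = np.inf, None
    for part in family:
        # fuse columns: levels in one cluster share a single coefficient
        Xm = np.column_stack([X_factor[:, c].sum(axis=1) for c in part])
        beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        rss = float(((y - Xm @ beta) ** 2).sum())
        score = n * np.log(rss / n) + Xm.shape[1] * np.log(n)
        if score < best_score:
            best_score, best_part = score, part
    return best_part
```
In the full procedure, step 2 would run over every factor retained in step 1, with the information criterion evaluated on the combined fused designs; the paper's selection-consistency guarantees concern a sparse high-dimensional regime that this sketch does not reproduce.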
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Random Models for Fuzzy Clustering Similarity Measures [0.0]
The Adjusted Rand Index (ARI) is a widely used method for comparing hard clusterings.
We propose a single framework for computing the ARI with three random models that are intuitive and explainable for both hard and fuzzy clusterings (a short pair-counting ARI sketch appears after this list).
arXiv Detail & Related papers (2023-12-16T00:07:04Z) - A Consistent and Scalable Algorithm for Best Subset Selection in Single
Index Models [1.3236116985407258]
Best subset selection in high-dimensional models is known to be computationally intractable.
We propose the first provably scalable algorithm for best subset selection in high-dimensional SIMs.
Our algorithm enjoys the subset selection consistency and has the oracle property with a high probability.
arXiv Detail & Related papers (2023-09-12T13:48:06Z) - Best-Subset Selection in Generalized Linear Models: A Fast and
Consistent Algorithm via Splicing Technique [0.6338047104436422]
Best subset selection has been widely regarded as the Holy Grail of problems of this type.
We propose and illustrate an algorithm for best subset recovery under mild conditions.
Our implementation achieves approximately a fourfold speedup compared to popular variable selection toolkits.
arXiv Detail & Related papers (2023-08-01T03:11:31Z) - A model-free feature selection technique of feature screening and random
forest based recursive feature elimination [0.0]
We propose a model-free feature selection method for ultra-high-dimensional data with a massive number of features.
We show that the proposed method is selection consistent and $L$ consistent under weak regularity conditions.
arXiv Detail & Related papers (2023-02-15T03:39:16Z) - Composite Feature Selection using Deep Ensembles [130.72015919510605]
We investigate the problem of discovering groups of predictive features without predefined grouping.
We introduce a novel deep learning architecture that uses an ensemble of feature selection models to find predictive groups.
We propose a new metric to measure similarity between discovered groups and the ground truth.
arXiv Detail & Related papers (2022-11-01T17:49:40Z) - Which is the best model for my data? [0.0]
The proposed meta-learning approach relies on machine learning and involves four major steps.
We present a collection of 62 meta-features that address the problem of information cancellation when aggregating measure values that involve positive and negative measurements.
We show that our meta-learning approach can correctly predict an optimal model for 91% of the synthetic datasets and for 87% of the real-world datasets.
arXiv Detail & Related papers (2022-10-26T13:15:43Z) - A linearized framework and a new benchmark for model selection for
fine-tuning [112.20527122513668]
Fine-tuning from a collection of models pre-trained on different domains is emerging as a technique to improve test accuracy in the low-data regime.
We introduce two new baselines for model selection -- Label-Gradient and Label-Feature Correlation.
Our benchmark highlights the accuracy gain from using a model zoo compared to fine-tuning ImageNet models.
arXiv Detail & Related papers (2021-01-29T21:57:15Z) - Generalized Matrix Factorization: efficient algorithms for fitting
generalized linear latent variable models to large data arrays [62.997667081978825]
Generalized Linear Latent Variable models (GLLVMs) generalize such factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
arXiv Detail & Related papers (2020-10-06T04:28:19Z) - Set Based Stochastic Subsampling [85.5331107565578]
We propose a set-based, two-stage, end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
arXiv Detail & Related papers (2020-06-25T07:36:47Z) - Optimal Clustering from Noisy Binary Feedback [75.17453757892152]
We study the problem of clustering a set of items from binary user feedback.
We devise an algorithm with a minimal cluster recovery error rate.
For adaptive selection, we develop an algorithm inspired by the derivation of the information-theoretical error lower bounds.
arXiv Detail & Related papers (2019-10-14T09:18:26Z)
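For the fuzzy-clustering ARI entry above, here is a minimal sketch of the classical pair-counting ARI for hard clusterings under the permutation random model; the paper's contribution, extending the ARI to fuzzy clusterings under three random models, is not reproduced here, and the function name adjusted_rand_index is ours.
```python
# Classical pair-counting ARI for hard clusterings (illustrative sketch).
import numpy as np

def adjusted_rand_index(labels_a, labels_b):
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    # contingency table between the two hard clusterings
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    table = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(table, (ia, ib), 1)
    comb2 = lambda x: x * (x - 1) / 2.0          # "x choose 2", vectorised
    sum_ij = comb2(table).sum()
    sum_a = comb2(table.sum(axis=1)).sum()
    sum_b = comb2(table.sum(axis=0)).sum()
    expected = sum_a * sum_b / comb2(n)          # expected index under permutation
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)

# agrees with sklearn.metrics.adjusted_rand_score on hard labelings
print(adjusted_rand_index([0, 0, 1, 1], [0, 0, 1, 2]))  # 4/7, about 0.571
```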
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.