Generalized Information Criteria for Structured Sparse Models
- URL: http://arxiv.org/abs/2309.01764v1
- Date: Mon, 4 Sep 2023 18:50:13 GMT
- Title: Generalized Information Criteria for Structured Sparse Models
- Authors: Eduardo F. Mendes and Gabriel J. P. Pinto
- Abstract summary: We propose a new Generalized Information Criteria (GIC) that takes into consideration the sparsity pattern one wishes to recover.
We show that the GIC can also be used for selecting the regularization parameter within a regularized $m$-estimation framework.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regularized m-estimators are widely used due to their ability to recover a low-dimensional model in high-dimensional scenarios. Some recent efforts on this subject focused on creating a unified framework for establishing oracle bounds and deriving conditions for support recovery. Under this same framework, we propose a new Generalized Information Criteria (GIC) that takes into consideration the sparsity pattern one wishes to recover. We obtain non-asymptotic model selection bounds and sufficient conditions for model selection consistency of the GIC. Furthermore, we show that the GIC can also be used for selecting the regularization parameter within a regularized $m$-estimation framework, which allows practical use of the GIC for model selection in high-dimensional scenarios. We provide examples of the group LASSO in the context of generalized linear regression and low-rank matrix regression.
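Illustrative sketch (not from the paper): the abstract's idea of scoring candidate fits with an information criterion of the form "loss + complexity penalty" in order to pick the regularization parameter can be mimicked with a plain group lasso on a grid of penalties, scoring each fit by its loss plus a BIC-like weight times the number of active groups. The proximal-gradient solver, the weight log(n)/n, and all function names below are assumptions made for a squared-error loss; this is not the GIC proposed in the paper, whose penalty is tailored to the structured sparsity pattern being recovered.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: the proximal operator of t * ||v||_2."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def group_lasso(X, y, groups, lam, n_iter=500):
    """Proximal-gradient (ISTA) group lasso for squared-error loss; illustrative only."""
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)    # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        for g in groups:                       # proximal step, one block at a time
            beta[g] = group_soft_threshold(z[g], step * lam)
    return beta

def criterion(X, y, beta, groups, penalty_weight):
    """Generic 'loss + complexity penalty' score; the active-group count used here
    is only a stand-in for the structured penalty developed in the paper."""
    n = X.shape[0]
    loss = np.sum((y - X @ beta) ** 2) / (2 * n)
    n_active = sum(np.any(beta[g] != 0) for g in groups)
    return loss + penalty_weight * n_active

# Small synthetic demo: 4 groups of 5 coefficients, only the first group active.
rng = np.random.default_rng(0)
n, p = 200, 20
groups = [list(range(i, i + 5)) for i in range(0, p, 5)]
beta_true = np.zeros(p)
beta_true[groups[0]] = 1.0
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.5 * rng.standard_normal(n)

lams = np.logspace(-3, 0, 20)
penalty_weight = np.log(n) / n                 # BIC-like choice; an assumption, not the paper's
scores = [criterion(X, y, group_lasso(X, y, groups, lam), groups, penalty_weight)
          for lam in lams]
best_lam = lams[int(np.argmin(scores))]
print(f"regularization parameter selected by the criterion: {best_lam:.4f}")
```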
Related papers
- Empirical Bayes Estimation for Lasso-Type Regularizers: Analysis of Automatic Relevance Determination [0.21485350418225244]
This paper focuses on linear regression models with non-conjugate sparsity-inducing regularizers such as lasso and group lasso.
We derive the empirical Bayes estimators for the group lasso regularized linear regression models with a limited number of parameters.
arXiv Detail & Related papers (2025-01-20T05:25:51Z)
- Optimizing Sequential Recommendation Models with Scaling Laws and Approximate Entropy [104.48511402784763]
Performance Law for SR models aims to theoretically investigate and model the relationship between model performance and data quality.
We propose Approximate Entropy (ApEn) to assess data quality, presenting a more nuanced approach compared to traditional data quantity metrics.
arXiv Detail & Related papers (2024-11-30T10:56:30Z)
- PerturBench: Benchmarking Machine Learning Models for Cellular Perturbation Analysis [14.526536510805755]
We present a comprehensive framework for predicting the effects of perturbations in single cells, designed to standardize benchmarking in this rapidly evolving field.
Our framework, PerturBench, includes a user-friendly platform, diverse datasets, metrics for fair model comparison, and detailed performance analysis.
arXiv Detail & Related papers (2024-08-20T07:40:20Z)
- LLM4Rerank: LLM-based Auto-Reranking Framework for Recommendations [51.76373105981212]
Reranking is a critical component in recommender systems, playing an essential role in refining the output of recommendation algorithms.
We introduce a comprehensive reranking framework, designed to seamlessly integrate various reranking criteria.
A customizable input mechanism is also integrated, enabling the tuning of the language model's focus to meet specific reranking needs.
arXiv Detail & Related papers (2024-06-18T09:29:18Z)
- GenBench: A Benchmarking Suite for Systematic Evaluation of Genomic Foundation Models [56.63218531256961]
We introduce GenBench, a benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models.
GenBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies.
We provide a nuanced analysis of the interplay between model architecture and dataset characteristics on task-specific performance.
arXiv Detail & Related papers (2024-06-01T08:01:05Z)
- Automated Model Selection for Generalized Linear Models [0.0]
We show how mixed-integer conic optimization can be used to combine feature subset selection with holistic generalized linear models.
We propose a novel pairwise correlation constraint that combines the sign coherence constraint with ideas from classical statistical models.
arXiv Detail & Related papers (2024-04-25T12:16:58Z)
- The Interpolating Information Criterion for Overparameterized Models [49.283527214211446]
We show that the Interpolating Information Criterion is a measure of model quality that naturally incorporates the choice of prior into the model selection.
Our new information criterion accounts for prior misspecification, geometric and spectral properties of the model, and is numerically consistent with known empirical and theoretical behavior.
arXiv Detail & Related papers (2023-07-15T12:09:54Z)
- A Unified Framework for Estimation of High-dimensional Conditional Factor Models [0.0]
This paper develops a general framework for estimation of high-dimensional conditional factor models via nuclear norm regularization.
We establish large sample properties of the estimators, and provide an efficient computing algorithm for finding the estimators.
We apply the method to analyze the cross section of individual US stock returns, and find that imposing homogeneity may improve the model's out-of-sample predictability.
arXiv Detail & Related papers (2022-09-01T12:10:29Z)
- Posterior Differential Regularization with f-divergence for Improving Model Robustness [95.05725916287376]
We focus on methods that regularize the model posterior difference between clean and noisy inputs.
We generalize the posterior differential regularization to the family of $f$-divergences.
Our experiments show that regularizing the posterior differential with $f$-divergence can result in well-improved model robustness.
arXiv Detail & Related papers (2020-10-23T19:58:01Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)