Generalized Information Criteria for Structured Sparse Models
- URL: http://arxiv.org/abs/2309.01764v1
- Date: Mon, 4 Sep 2023 18:50:13 GMT
- Title: Generalized Information Criteria for Structured Sparse Models
- Authors: Eduardo F. Mendes and Gabriel J. P. Pinto
- Abstract summary: We propose a new Generalized Information Criteria (GIC) that takes into consideration the sparsity pattern one wishes to recover.
We show that the GIC can also be used for selecting the regularization parameter within a regularized $m$-estimation framework.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regularized m-estimators are widely used due to their ability to recover a
low-dimensional model in high-dimensional scenarios. Some recent efforts on
this subject have focused on creating a unified framework for establishing oracle
bounds and deriving conditions for support recovery. Under this same
framework, we propose a new Generalized Information Criteria (GIC) that takes
into consideration the sparsity pattern one wishes to recover. We obtain
non-asymptotic model selection bounds and sufficient conditions for model
selection consistency of the GIC. Furthermore, we show that the GIC can also be
used for selecting the regularization parameter within a regularized
$m$-estimation framework, which allows practical use of the GIC for model
selection in high-dimensional scenarios. We provide examples of group LASSO in
the context of generalized linear regression and low rank matrix regression.
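As a rough illustration of the group LASSO example above, the following Python sketch selects the group-LASSO regularization parameter on a grid by minimizing a GIC-style score (goodness of fit plus a penalty on the selected group pattern). The proximal-gradient solver, the penalty weight a_n, and the use of the number of active coefficients as the complexity measure are simplifying assumptions for illustration only, not the exact quantities derived in the paper.
```python
import numpy as np

def group_lasso_prox(beta, groups, step, lam):
    # Blockwise soft-thresholding: proximal operator of the group-LASSO penalty.
    out = beta.copy()
    for g in groups:
        norm_g = np.linalg.norm(beta[g])
        shrink = max(0.0, 1.0 - step * lam / (norm_g + 1e-12))
        out[g] = shrink * beta[g]
    return out

def fit_group_lasso(X, y, groups, lam, n_iter=500):
    # Proximal gradient descent for (1/2n)*||y - X b||^2 + lam * sum_g ||b_g||_2.
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = group_lasso_prox(beta - step * grad, groups, step, lam)
    return beta

def gic_score(X, y, beta, groups, a_n):
    # GIC-style score: log of the residual variance plus a penalty on the active pattern.
    n = X.shape[0]
    rss = np.sum((y - X @ beta) ** 2)
    n_active = sum(len(g) for g in groups if np.linalg.norm(beta[g]) > 1e-8)
    return n * np.log(rss / n) + a_n * n_active

# Hypothetical usage: pick lambda on a grid by minimizing the GIC-style score.
rng = np.random.default_rng(0)
n, p = 200, 30
groups = [list(range(i, i + 5)) for i in range(0, p, 5)]  # six non-overlapping groups of size 5
beta_true = np.zeros(p)
beta_true[groups[0]] = 1.0                                # only the first group is active
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.5 * rng.standard_normal(n)

a_n = np.log(p) * np.log(np.log(n))                       # assumed penalty weight, not the paper's exact rate
lambdas = np.logspace(-3, 0, 20)
scores = [gic_score(X, y, fit_group_lasso(X, y, groups, lam), groups, a_n) for lam in lambdas]
print("selected lambda:", lambdas[int(np.argmin(scores))])
```
This mirrors the abstract's point that the GIC can double as a rule for choosing the regularization parameter within a regularized m-estimation procedure, rather than serving only as a post-hoc model comparison score.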
Related papers
- Lexicographic optimization-based approaches to learning a representative model for multi-criteria sorting with non-monotonic criteria [5.374419989598479]
This paper proposes some approaches to learning a representative model for MCS problems with non-monotonic criteria.
We first define some transformation functions to map the marginal values and category thresholds into a UTA-like functional space.
We then construct constraint sets to model non-monotonic criteria in MCS problems and develop optimization models to check and rectify the inconsistency of the decision maker's assignment example preference information.
arXiv Detail & Related papers (2024-09-03T05:29:05Z)
- PerturBench: Benchmarking Machine Learning Models for Cellular Perturbation Analysis [14.526536510805755]
We present a comprehensive framework for predicting the effects of perturbations in single cells, designed to standardize benchmarking in this rapidly evolving field.
Our framework, PerturBench, includes a user-friendly platform, diverse datasets, metrics for fair model comparison, and detailed performance analysis.
arXiv Detail & Related papers (2024-08-20T07:40:20Z)
- LLM-enhanced Reranking in Recommender Systems [49.969932092129305]
Reranking is a critical component in recommender systems, playing an essential role in refining the output of recommendation algorithms.
We introduce a comprehensive reranking framework, designed to seamlessly integrate various reranking criteria.
A customizable input mechanism is also integrated, enabling the tuning of the language model's focus to meet specific reranking needs.
arXiv Detail & Related papers (2024-06-18T09:29:18Z)
- GenBench: A Benchmarking Suite for Systematic Evaluation of Genomic Foundation Models [56.63218531256961]
We introduce GenBench, a benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models.
GenBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies.
We provide a nuanced analysis of the interplay between model architecture and dataset characteristics on task-specific performance.
arXiv Detail & Related papers (2024-06-01T08:01:05Z)
- Automated Model Selection for Generalized Linear Models [0.0]
We show how mixed-integer conic optimization can be used to combine feature subset selection with holistic generalized linear models.
We propose a novel pairwise correlation constraint that combines the sign coherence constraint with ideas from classical statistical models.
arXiv Detail & Related papers (2024-04-25T12:16:58Z)
- The Interpolating Information Criterion for Overparameterized Models [49.283527214211446]
We show that the Interpolating Information Criterion is a measure of model quality that naturally incorporates the choice of prior into the model selection.
Our new information criterion accounts for prior misspecification, geometric and spectral properties of the model, and is numerically consistent with known empirical and theoretical behavior.
arXiv Detail & Related papers (2023-07-15T12:09:54Z)
- A Unified Framework for Estimation of High-dimensional Conditional Factor Models [0.0]
This paper develops a general framework for estimation of high-dimensional conditional factor models via nuclear norm regularization (a brief illustrative sketch of this regularizer appears after the list below).
We establish large-sample properties of the estimators and provide an efficient algorithm for computing them.
We apply the method to analyze the cross section of individual US stock returns, and find that imposing homogeneity may improve the model's out-of-sample predictability.
arXiv Detail & Related papers (2022-09-01T12:10:29Z)
- Revisiting GANs by Best-Response Constraint: Perspective, Methodology, and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that, even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
arXiv Detail & Related papers (2022-05-20T12:42:41Z)
- Posterior Differential Regularization with f-divergence for Improving Model Robustness [95.05725916287376]
We focus on methods that regularize the model posterior difference between clean and noisy inputs.
We generalize the posterior differential regularization to the family of $f$-divergences.
Our experiments show that regularizing the posterior differential with an $f$-divergence can considerably improve model robustness.
arXiv Detail & Related papers (2020-10-23T19:58:01Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
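The brief sketch referenced in the conditional factor model entry above: nuclear norm regularization, which also underlies the low rank matrix regression example in the main abstract, can be illustrated through its proximal operator, singular value thresholding. The matrix sizes, noise level, and threshold tau below are arbitrary choices for illustration, not estimators or values taken from either paper.
```python
import numpy as np

def singular_value_threshold(M, tau):
    # Proximal operator of the nuclear norm: soft-threshold the singular values of M.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
signal = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 signal
observed = signal + 0.1 * rng.standard_normal((50, 40))               # noisy observations
estimate = singular_value_threshold(observed, tau=2.0)                # tau is an arbitrary illustrative threshold
print("estimated rank:", np.linalg.matrix_rank(estimate, tol=1e-8))
```
Shrinking the singular values drives most of them to zero, so the estimate is low rank; in practice tau plays the same role as the regularization parameter that a criterion such as the GIC above would be used to select.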
This list is automatically generated from the titles and abstracts of the papers on this site.