Conformalization of Sparse Generalized Linear Models
- URL: http://arxiv.org/abs/2307.05109v1
- Date: Tue, 11 Jul 2023 08:36:12 GMT
- Title: Conformalization of Sparse Generalized Linear Models
- Authors: Etash Kumar Guha and Eugene Ndiaye and Xiaoming Huo
- Abstract summary: The conformal prediction method estimates a confidence set for $y_{n+1}$ that is valid for any finite sample size.
Although attractive, computing such a set is computationally infeasible in most regression problems.
We show how our path-following algorithm accurately approximates conformal prediction sets.
- Score: 2.1485350418225244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a sequence of observable variables $\{(x_1, y_1), \ldots, (x_n,
y_n)\}$, the conformal prediction method estimates a confidence set for
$y_{n+1}$ given $x_{n+1}$ that is valid for any finite sample size by merely
assuming that the joint distribution of the data is permutation invariant.
Although attractive, computing such a set is computationally infeasible in most
regression problems. Indeed, in these cases, the unknown variable $y_{n+1}$ can
take an infinite number of possible candidate values, and generating conformal
sets requires retraining a predictive model for each candidate. In this paper,
we focus on a sparse linear model with only a subset of variables for
prediction and use numerical continuation techniques to approximate the
solution path efficiently. The critical property we exploit is that the set of
selected variables is invariant under a small perturbation of the input data.
Therefore, it is sufficient to enumerate and refit the model only at the change
points of the set of active features and smoothly interpolate the rest of the
solution via a Predictor-Corrector mechanism. We show how our path-following
algorithm accurately approximates conformal prediction sets and illustrate its
performance using synthetic and real data examples.
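For intuition, below is a minimal sketch (not the paper's algorithm) of the brute-force full conformal procedure the abstract describes, applied to a lasso over a finite grid of candidate values for $y_{n+1}$: each candidate is appended to the data, the model is refit, and the candidate is kept if its nonconformity rank yields a p-value above the miscoverage level. The grid bounds, coverage level `alpha`, and penalty weight `lam` are illustrative assumptions.

```python
# Minimal sketch of brute-force full conformal prediction with a lasso,
# refitting once per candidate value of y_{n+1}. This is the naive baseline;
# grid bounds, `alpha`, and `lam` are assumptions for illustration.
import numpy as np
from sklearn.linear_model import Lasso

def full_conformal_set(X, y, x_new, candidates, alpha=0.1, lam=0.1):
    """Keep each candidate z whose conformal p-value exceeds alpha."""
    kept = []
    X_aug = np.vstack([X, x_new])  # augmented design, shape (n+1, d)
    for z in candidates:
        y_aug = np.append(y, z)    # hypothesize y_{n+1} = z
        model = Lasso(alpha=lam).fit(X_aug, y_aug)
        scores = np.abs(y_aug - model.predict(X_aug))  # nonconformity scores
        # Conformal p-value: rank of the candidate's score among all n+1
        p_value = np.mean(scores >= scores[-1])
        if p_value > alpha:
            kept.append(z)
    return kept

# Usage on synthetic data with a sparse ground-truth coefficient vector
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
beta = np.zeros(20)
beta[:3] = [1.5, -2.0, 1.0]
y = X @ beta + 0.5 * rng.normal(size=50)
x_new = rng.normal(size=(1, 20))
grid = np.linspace(y.min() - 2.0, y.max() + 2.0, 200)
print(full_conformal_set(X, y, x_new, grid))
```

With 200 grid points this refits the lasso 200 times; the paper's predictor-corrector path-following exploits the fact that the lasso's active set is piecewise constant in the candidate value, so the model need only be refit at the change points of the active set.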
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data [17.657917523817243]
We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional optimization problem.
In the context of least-squares instrumental variable regression, our algorithms neither require matrix inversions nor mini-batches.
We derive rates of convergence in expectation that are of order $\mathcal{O}(\log T/T)$ and $\mathcal{O}(1/T^{1-\iota})$ for any $\iota>0$.
arXiv Detail & Related papers (2024-05-29T19:21:55Z)
- Sparse Gaussian Graphical Models with Discrete Optimization: Computational and Statistical Perspectives [8.403841349300103]
We consider the problem of learning a sparse graph underlying an undirected Gaussian graphical model.
We propose GraphL0BnB, a new estimator based on an $\ell_0$-penalized version of the pseudolikelihood function.
Our numerical experiments on real/synthetic datasets suggest that our method can solve, to near-optimality, problem instances with $p = 10^4$.
arXiv Detail & Related papers (2023-07-18T15:49:02Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Stable Conformal Prediction Sets [0.0]
Conformal prediction is a methodology that allows one to estimate a confidence set for $y_{n+1}$ given $x_{n+1}$.
While appealing, the computation of such a set turns out to be infeasible in general.
We combine conformal prediction techniques with algorithmic stability bounds to derive a prediction set computable with a single model fit.
arXiv Detail & Related papers (2021-12-19T18:53:32Z)
- Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e. the empirical model error is the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z)
- Efficient and robust high-dimensional sparse logistic regression via nonlinear primal-dual hybrid gradient algorithms [0.0]
We propose an iterative algorithm that provably computes a solution to a logistic regression problem regularized by an elastic net penalty.
This result improves on the known complexity bound of $O(\min(m^2 n, mn^2)\log(1/\epsilon))$ for first-order optimization methods.
arXiv Detail & Related papers (2021-11-30T14:16:48Z)
- Universal and data-adaptive algorithms for model selection in linear contextual bandits [52.47796554359261]
We consider the simplest non-trivial instance of model-selection: distinguishing a simple multi-armed bandit problem from a linear contextual bandit problem.
We introduce new algorithms that explore in a data-adaptive manner and provide guarantees of the form $\mathcal{O}(d^{\alpha} T^{1-\alpha})$.
Our approach extends to model selection among nested linear contextual bandits under some additional assumptions.
arXiv Detail & Related papers (2021-11-08T18:05:35Z)
- $\Delta$-UQ: Accurate Uncertainty Quantification via Anchor Marginalization [40.581619201120716]
We present $\Delta$-UQ -- a novel, general-purpose uncertainty estimator using the concept of anchoring in predictive models.
We find this uncertainty is deeply connected to improper sampling of the input data and to inherent noise, enabling us to estimate the total uncertainty in any system.
arXiv Detail & Related papers (2021-10-05T17:44:31Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Conditional Uncorrelation and Efficient Non-approximate Subset Selection in Sparse Regression [72.84177488527398]
We consider sparse regression from the view of correlation, and propose the formula of conditional uncorrelation.
By the proposed method, the computational complexity is reduced from $O(\frac{1}{6}k^3+mk^2+mkd)$ to $O(\frac{1}{6}k^3+\frac{1}{2}mk^2)$ for each candidate subset in sparse regression.
arXiv Detail & Related papers (2020-09-08T20:32:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.