Grouped Variable Selection with Discrete Optimization: Computational and
Statistical Perspectives
- URL: http://arxiv.org/abs/2104.07084v1
- Date: Wed, 14 Apr 2021 19:21:59 GMT
- Title: Grouped Variable Selection with Discrete Optimization: Computational and
Statistical Perspectives
- Authors: Hussein Hazimeh, Rahul Mazumder, Peter Radchenko
- Abstract summary: We present a new algorithmic framework for grouped variable selection that is based on discrete mathematical optimization.
Our methodology covers both high-dimensional linear regression and nonparametric sparse additive modeling with smooth components.
Our exact algorithm is based on a standalone branch-and-bound (BnB) framework, which can solve the associated mixed integer programming (MIP) problem to certified optimality.
- Score: 9.593208961737572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new algorithmic framework for grouped variable selection that is
based on discrete mathematical optimization. While there exist several
appealing approaches based on convex relaxations and nonconvex heuristics, we
focus on optimal solutions for the $\ell_0$-regularized formulation, a problem
that is relatively unexplored due to computational challenges. Our methodology
covers both high-dimensional linear regression and nonparametric sparse
additive modeling with smooth components. Our algorithmic framework consists of
approximate and exact algorithms. The approximate algorithms are based on
coordinate descent and local search, with runtimes comparable to popular sparse
learning algorithms. Our exact algorithm is based on a standalone
branch-and-bound (BnB) framework, which can solve the associated mixed integer
programming (MIP) problem to certified optimality. By exploiting the problem
structure, our custom BnB algorithm can solve to optimality problem instances
with $5 \times 10^6$ features in minutes to hours -- over $1000$ times larger
than what is currently possible using state-of-the-art commercial MIP solvers.
We also explore statistical properties of the $\ell_0$-based estimators. We
demonstrate, theoretically and empirically, that our proposed estimators have
an edge over popular group-sparse estimators in terms of statistical
performance in various regimes.
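To make the group-sparse estimation problem concrete, here is a minimal sketch of iterative hard thresholding with group-level support selection for the least-squares loss. This is an illustrative simplification, not the paper's coordinate descent/local search or BnB implementation; the function name and the fixed group-sparsity level `k` are assumptions.

```python
import numpy as np

def group_iht(X, y, groups, k, step=None, n_iter=200):
    """Iterative hard thresholding with group sparsity: take a gradient
    step on the least-squares loss, then keep only the k groups of
    coefficients with the largest Euclidean norm."""
    n, p = X.shape
    if step is None:
        # 1/L, where L bounds the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        b = beta - step * grad
        norms = np.array([np.linalg.norm(b[g]) for g in groups])
        keep = np.argsort(norms)[-k:]          # indices of the k largest groups
        beta = np.zeros(p)
        for i in keep:
            beta[groups[i]] = b[groups[i]]
    return beta
```

On well-conditioned designs this recovers the true group support and then behaves like gradient descent restricted to the selected groups.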
Related papers
- Approximation Algorithms for Combinatorial Optimization with Predictions [3.7235228254732524]
We use predictions to improve over approximation guarantees of classic algorithms.
Our algorithms produce optimal solutions when provided with perfect predictions.
Although we show our approach to be optimal for this class of problems as a whole, there is a potential for exploiting specific structural properties of individual problems to obtain improved bounds.
arXiv Detail & Related papers (2024-11-25T17:31:34Z)
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms at any round based on the observed empirical gaps between the rewards of the arms.
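A simplified sketch of the rejection idea, not the paper's exact sred rule: pull the surviving arms round-robin within a fixed budget and drop any arm whose empirical mean trails the current leader by more than a gap threshold. The threshold constant and schedule here are illustrative assumptions.

```python
import numpy as np

def reject_based_bai(pulls, budget, rng, c=2.0):
    """Fixed-budget best-arm identification sketch: sample surviving arms
    round-robin; after each round, reject arms whose empirical mean trails
    the leader by more than a confidence-style threshold."""
    K = len(pulls)
    alive = list(range(K))
    means = np.zeros(K)
    counts = np.zeros(K, dtype=int)
    spent = 0
    while spent + len(alive) <= budget and len(alive) > 1:
        for a in alive:
            r = pulls[a](rng)
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]   # running mean
            spent += 1
        t = counts[alive[0]]                          # all alive arms share t
        thresh = c * np.sqrt(np.log(max(t, 2)) / t)
        leader = max(alive, key=lambda a: means[a])
        alive = [a for a in alive if means[leader] - means[a] <= thresh]
    return max(alive, key=lambda a: means[a])
```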
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
- A unified consensus-based parallel ADMM algorithm for high-dimensional regression with combined regularizations [3.280169909938912]
The parallel alternating direction method of multipliers (ADMM) is widely recognized for its effectiveness in handling large-scale distributed datasets.
The reliability, stability, and scalability of the proposed algorithms are demonstrated on a financial example.
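For illustration, here is a minimal consensus-based parallel ADMM sketch for the plain lasso (a single l1 regularizer rather than the paper's combined regularizations), with data split across blocks; the block solves run independently and could be parallelized.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def consensus_admm_lasso(blocks, lam, rho=1.0, n_iter=300):
    """Consensus ADMM for the lasso: each block m holds (X_m, y_m) and
    solves a local ridge-regularized least-squares problem; the consensus
    variable z is updated by soft-thresholding the averaged iterates."""
    M = len(blocks)
    p = blocks[0][0].shape[1]
    z = np.zeros(p)
    u = [np.zeros(p) for _ in range(M)]
    # Pre-factorize each block's local linear system
    facs = [np.linalg.inv(X.T @ X + rho * np.eye(p)) for X, _ in blocks]
    for _ in range(n_iter):
        betas = [facs[m] @ (blocks[m][0].T @ blocks[m][1] + rho * (z - u[m]))
                 for m in range(M)]
        z = soft_threshold(
            np.mean([b + um for b, um in zip(betas, u)], axis=0),
            lam / (rho * M))
        for m in range(M):
            u[m] += betas[m] - z                     # dual (scaled) update
    return z
```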
arXiv Detail & Related papers (2023-11-21T03:30:38Z)
- Fast Optimal Locally Private Mean Estimation via Random Projections [58.603579803010796]
We study the problem of locally private mean estimation of high-dimensional vectors in the Euclidean ball.
We propose a new algorithmic framework, ProjUnit, for private mean estimation.
Our framework is deceptively simple: each randomizer projects its input to a random low-dimensional subspace, normalizes the result, and then runs an optimal algorithm.
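The framework can be sketched as follows. For illustration, the optimal unit-vector mechanism from the paper is replaced by plain additive Gaussian noise, so this sketch does not carry the paper's privacy guarantees; the function names are assumptions.

```python
import numpy as np

def projunit_randomizer(x, k, rng, noise_scale=0.5):
    """ProjUnit-style local randomizer sketch: project the input onto a
    random k-dimensional subspace, normalize to the unit sphere, then
    privatize the result (here: simple Gaussian noise, for illustration)."""
    d = x.shape[0]
    # Random projection scaled so E[P^T P] = I
    P = rng.standard_normal((k, d)) / np.sqrt(k)
    z = P @ x
    z = z / max(np.linalg.norm(z), 1e-12)       # normalize to the unit sphere
    report = z + noise_scale * rng.standard_normal(k)
    return P, report

def aggregate(reports):
    """Server-side estimate: map each report back via the transpose of its
    projection matrix, then average across users."""
    return np.mean([P.T @ r for P, r in reports], axis=0)
```

Averaging many users' back-projected reports recovers the common direction of the inputs up to noise.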
arXiv Detail & Related papers (2023-06-07T14:07:35Z)
- Portfolio optimization with discrete simulated annealing [0.0]
We present an integer optimization method to find optimal portfolios in the presence of discretized convex and non-convex cost functions.
This allows us to achieve a solution with a given quality.
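A minimal discrete simulated-annealing sketch for an integer portfolio under a simple mean-variance objective. The paper's discretized cost functions and constraints are more general; the move rule and cooling schedule here are illustrative assumptions.

```python
import numpy as np

def anneal_portfolio(mu, Sigma, budget, rng, n_steps=20000, T0=1.0, Tmin=1e-3):
    """Discrete simulated annealing: the state is a vector of integer share
    counts summing to `budget`; each proposal transfers one unit between two
    assets and is accepted by the Metropolis rule under geometric cooling."""
    K = len(mu)
    x = np.full(K, budget // K)
    x[0] += budget - x.sum()                    # distribute any remainder

    def cost(x):
        w = x / budget
        return -mu @ w + 0.5 * w @ Sigma @ w    # negative mean-variance utility

    c = cost(x)
    best, best_c = x.copy(), c
    for t in range(n_steps):
        T = T0 * (Tmin / T0) ** (t / n_steps)   # geometric cooling schedule
        i, j = rng.integers(K), rng.integers(K)
        if i == j or x[i] == 0:
            continue
        y = x.copy()
        y[i] -= 1
        y[j] += 1
        cy = cost(y)
        if cy < c or rng.random() < np.exp(-(cy - c) / T):
            x, c = y, cy
            if c < best_c:
                best, best_c = x.copy(), c
    return best, best_c
```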
arXiv Detail & Related papers (2022-10-03T10:39:05Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Alternating Direction Method of Multipliers for Quantization [15.62692130672419]
We study the performance of the Alternating Direction Method of Multipliers for Quantization ($\texttt{ADMM-Q}$) algorithm.
We develop a few variants of $\texttt{ADMM-Q}$ that can handle inexact update rules.
We empirically evaluate the efficacy of our proposed approaches.
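In the spirit of ADMM-Q, here is a minimal sketch that alternates an inexact gradient-based x-update, a projection of the auxiliary variable onto a discrete set of quantization levels, and a dual update. The function name and hyperparameters are illustrative, and this is not the paper's exact algorithm.

```python
import numpy as np

def admm_quantize(grad, x0, levels, rho=1.0, lr=0.1, n_iter=200, inner=20):
    """ADMM-style quantized training sketch: x carries the continuous
    iterate, z is constrained to the quantization grid, u is the scaled
    dual variable. The x-update is inexact (a few gradient steps)."""
    x = x0.copy()
    z = x.copy()
    u = np.zeros_like(x)

    def project(v):
        # nearest quantization level, elementwise
        return levels[np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)]

    for _ in range(n_iter):
        for _ in range(inner):                  # inexact x-update
            x = x - lr * (grad(x) + rho * (x - z + u))
        z = project(x + u)                      # z-update: project onto grid
        u = u + x - z                           # dual update
    return z
```

For a simple quadratic loss this converges to the quantization level nearest the unconstrained minimizer in each coordinate.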
arXiv Detail & Related papers (2020-09-08T01:58:02Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
- Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50,000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
arXiv Detail & Related papers (2020-01-17T18:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.