Adiabatic Quantum Feature Selection for Sparse Linear Regression
- URL: http://arxiv.org/abs/2106.02357v1
- Date: Fri, 4 Jun 2021 09:14:01 GMT
- Title: Adiabatic Quantum Feature Selection for Sparse Linear Regression
- Authors: Surya Sai Teja Desu, P.K. Srijith, M.V. Panduranga Rao, Naveen Sivadasan
- Abstract summary: We formulate sparse linear regression as a QUBO problem and compare the quality of its solutions on synthetic and real-world datasets.
The results demonstrate the effectiveness of the proposed adiabatic quantum computing approach in finding the optimal solution.
- Score: 0.17499351967216337
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Linear regression is a popular machine learning approach to learn and predict
real valued outputs or dependent variables from independent variables or
features. In many real-world problems, it's beneficial to perform sparse linear
regression to identify important features helpful in predicting the dependent
variable. It not only helps in getting interpretable results but also avoids
overfitting when the number of features is large, and the amount of data is
small. The most natural way to achieve this is by using 'best subset selection'
which penalizes non-zero model parameters by adding $\ell_0$ norm over
parameters to the least squares loss. However, this makes the objective
function non-convex and intractable even for a small number of features. This
paper aims to address the intractability of sparse linear regression with
$\ell_0$ norm using adiabatic quantum computing, a quantum computing paradigm
that is particularly useful for solving optimization problems faster. We
formulate the $\ell_0$ optimization problem as a Quadratic Unconstrained Binary
Optimization (QUBO) problem and solve it using the D-Wave adiabatic quantum
computer. We study and compare the quality of QUBO solution on synthetic and
real world datasets. The results demonstrate the effectiveness of the proposed
adiabatic quantum computing approach in finding the optimal solution. The QUBO
solution matches the optimal solution for a wide range of sparsity penalty
values across the datasets.
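To make the construction concrete, below is a minimal, self-contained sketch of the kind of QUBO the abstract describes: each weight is expanded over an assumed fixed "precision vector" of 0/1 coefficients so the least-squares loss becomes quadratic in binary variables, a per-bit penalty stands in for the $\ell_0$ term, and exhaustive search stands in for the D-Wave annealer. The precision values, penalty weight, and data are illustrative assumptions, not the paper's exact construction.

```python
import itertools

import numpy as np

# Minimal QUBO sketch (not the paper's exact construction): each weight is
# expanded over a fixed "precision vector" of 0/1 coefficients, making the
# least-squares loss quadratic in bits; a per-bit penalty approximates the
# l0 term, and brute force stands in for the annealer. Constants are
# illustrative.
rng = np.random.default_rng(0)
n, d = 30, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, 0.0, -0.75])           # sparse ground-truth weights
y = X @ w_true + 0.01 * rng.normal(size=n)

precision = np.array([-2.0, 1.0, 0.5, 0.25])   # assumed precision vector
K = len(precision)

# Expanded design: column (j, k) equals precision[k] * X[:, j],
# so that X @ w == Xb @ b for a bit vector b of length d * K.
Xb = np.concatenate([X[:, [j]] * precision for j in range(d)], axis=1)

lam = 0.1                                      # illustrative sparsity penalty
Q = Xb.T @ Xb                                  # quadratic couplings
# Binary variables satisfy b_i**2 == b_i, so the linear terms (from
# -2 * y^T Xb b and the penalty) fold into the diagonal of Q.
np.fill_diagonal(Q, np.diag(Q) - 2.0 * (Xb.T @ y) + lam)

# Exhaustive minimization over 2**(d*K) = 4096 bit strings (toy sizes only).
best_b, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=d * K):
    b = np.array(bits, dtype=float)
    e = b @ Q @ b
    if e < best_e:
        best_b, best_e = b, e

w_hat = best_b.reshape(d, K) @ precision
print("recovered weights:", np.round(w_hat, 3))   # close to w_true
```

On hardware, the same matrix Q would be handed to a quantum sampler in place of the exhaustive loop; note the per-bit penalty used here only approximates a true $\ell_0$ count over weights, which the paper's formulation targets directly.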
Related papers
- Adaptive Learning for Quantum Linear Regression [10.445957451908695]
In a recent work, linear regression was formulated as a quadratic binary optimization problem.
This approach promises a computational time advantage for large datasets.
However, the quality of the solution is limited by the necessary use of a precision vector.
In this work, we focus on the practical challenge of improving the precision vector encoding (a short sketch of the encoding limitation appears after this list).
arXiv Detail & Related papers (2024-08-05T21:09:01Z)
- Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data [17.657917523817243]
We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional stochastic optimization problem.
In the context of least-squares instrumental variable regression, our algorithms neither require matrix inversions nor mini-batches.
We derive rates of convergence in expectation of order $\mathcal{O}(\log T/T)$ and $\mathcal{O}(1/T^{1-\iota})$ for any $\iota > 0$.
arXiv Detail & Related papers (2024-05-29T19:21:55Z)
- BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification [5.031974232392534]
This work addresses data-driven inverse optimization (IO).
The goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal.
arXiv Detail & Related papers (2024-05-28T06:52:17Z)
- Optimal Bias-Correction and Valid Inference in High-Dimensional Ridge Regression: A Closed-Form Solution [0.0]
We introduce an iterative strategy to correct bias effectively when the dimension $p$ is less than the sample size $n$.
For $p>n$, our method optimally mitigates the bias, so that the remaining bias in the proposed de-biased estimator cannot be further reduced.
Our method offers a transformative solution to the bias challenge in ridge regression inferences across various disciplines.
arXiv Detail & Related papers (2024-05-01T10:05:19Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method that achieved the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes [80.89852729380425]
We propose the first computationally efficient algorithm that achieves the nearly minimax optimal regret $\tilde{O}(d\sqrt{H^3K})$.
Our work provides a complete answer to optimal RL with linear MDPs, and the developed algorithm and theoretical tools may be of independent interest.
arXiv Detail & Related papers (2022-12-12T18:58:59Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form $Ax=b$ directly into U(1)-symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions are placed on the parameters through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Linear Regression by Quantum Amplitude Estimation and its Extension to Convex Optimization [0.0]
We propose a new quantum algorithm for linear regression, which has complexity $O(\epsilon^{-1})$ and keeps the logarithmic dependence on the number of data points $N_D$.
In this method, we overcome bottleneck parts in the calculation, which take the form of the sum over data points and therefore have the complexity proportional to $N_D$.
arXiv Detail & Related papers (2021-05-28T00:04:11Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions (a sketch of the implicit-differentiation idea appears after this list).
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
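As flagged in the "Adaptive Learning for Quantum Linear Regression" entry above, solution quality in QUBO-based regression hinges on the precision vector. The following minimal sketch (illustrative values, not that paper's method) shows how the chosen precision entries fix a grid of representable weights and hence a floor on the encoding error.

```python
import itertools

import numpy as np

# Hedged illustration of the precision-vector limitation: a real weight can
# only be approximated by 0/1 combinations of the precision entries, so the
# encoding error is bounded below by the gaps in the representable grid.
precision = np.array([-2.0, 1.0, 0.5, 0.25])    # illustrative choice

def representable(p):
    """All values reachable as 0/1 combinations of the precision entries."""
    return sorted({float(np.dot(bits, p))
                   for bits in itertools.product([0, 1], repeat=len(p))})

grid = representable(precision)
target = 0.8                                     # weight we wish to encode
closest = min(grid, key=lambda v: abs(v - target))
print(f"best encoding of {target}: {closest} (error {abs(closest - target):.2f})")
```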
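For the "Implicit differentiation of Lasso-type models" entry, the core idea can be sketched from the Lasso KKT conditions: on the active set $S$, $X_S^\top X_S w_S = X_S^\top y - n\lambda\,\mathrm{sign}(w_S)$ (with sklearn's $\frac{1}{2n}$ loss scaling), so $\partial w_S/\partial\lambda$ has a closed form and a validation-loss hypergradient follows by the chain rule. The sketch below uses an explicit linear solve for clarity, whereas the cited paper's algorithm avoids matrix inversion; data and constants are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Minimal sketch of the implicit-differentiation hypergradient for Lasso,
# assuming sklearn's objective (1/(2n))||y - Xw||^2 + lam * ||w||_1.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.0, 0.5]                 # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=n)
X_val = rng.normal(size=(50, p))
y_val = X_val @ w_true + 0.1 * rng.normal(size=50)

lam = 0.05
w = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
S = np.flatnonzero(w)                         # active set of the solution

# KKT on S: X_S^T X_S w_S = X_S^T y - n * lam * sign(w_S),
# hence dw_S/dlam = -n * (X_S^T X_S)^{-1} sign(w_S).
XS = X[:, S]
dw_S = -n * np.linalg.solve(XS.T @ XS, np.sign(w[S]))

# Chain rule on the validation loss 0.5 * ||X_val w - y_val||^2.
resid_val = X_val @ w - y_val
hypergrad = resid_val @ X_val[:, S] @ dw_S
print("active set:", S, "hypergradient:", hypergrad)
```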
This list is automatically generated from the titles and abstracts of the papers in this site.