Quantum-Inspired Classical Algorithm for Principal Component Regression
- URL: http://arxiv.org/abs/2010.08626v1
- Date: Fri, 16 Oct 2020 20:50:48 GMT
- Title: Quantum-Inspired Classical Algorithm for Principal Component Regression
- Authors: Daniel Chen, Yekun Xu, Betis Baheri, Chuan Bi, Ying Mao, Qiang Guan,
Shuai Xu
- Abstract summary: We develop an algorithm for principal component regression that runs in time polylogarithmic in the number of data points.
This exponential speed-up allows for potential applications to much larger data sets.
- Score: 1.9105479266011323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a sublinear classical algorithm for principal component
regression. The algorithm uses quantum-inspired linear algebra, an idea
developed by Tang. Using this technique, her algorithm for recommendation
systems achieved runtime only polynomially slower than its quantum counterpart.
Her work was quickly adapted to solve many other problems in sublinear time
complexity. In this work, we develop an algorithm for principal component
regression that runs in time polylogarithmic in the number of data points, an
exponential speed-up over the state-of-the-art algorithm, under the mild
assumption that the input is given in a data structure that supports a
norm-based sampling procedure. This exponential speed-up enables potential
applications to much larger data sets.
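To make the setting concrete, the following is a minimal sketch (in Python/NumPy; all function and variable names are illustrative, not from the paper) of two ingredients: the classical baseline of principal component regression computed via an exact SVD, and the norm-based (length-squared) row-sampling primitive that the assumed data structure must support. The paper's algorithm replaces the exact SVD with a low-rank approximation built from such samples, which is what lets its runtime depend only polylogarithmically on the number of data points; this sketch shows only the baseline and the sampling step, not the full quantum-inspired procedure.

```python
import numpy as np

def pcr_exact(X, y, k):
    """Classical principal component regression via an exact SVD.
    Projects X onto its top-k right singular vectors, then solves
    least squares in that k-dimensional subspace. Costs roughly
    O(n*d^2) time -- the baseline the quantum-inspired algorithm aims to beat."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Vk = Vt[:k].T                      # top-k principal directions (d x k)
    Z = X @ Vk                         # scores of each data point (n x k)
    gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Vk @ gamma                  # regression coefficients in the original space

def length_squared_row_sample(X, num_samples, rng=None):
    """Norm-based (length-squared) sampling: row i is drawn with
    probability ||X_i||^2 / ||X||_F^2. The paper assumes the input sits
    in a data structure (e.g. a tree over squared row norms) supporting
    such draws in polylog time; here we do it directly in O(n) only to
    illustrate the primitive."""
    rng = np.random.default_rng(rng)
    row_norms_sq = np.einsum('ij,ij->i', X, X)
    probs = row_norms_sq / row_norms_sq.sum()
    idx = rng.choice(X.shape[0], size=num_samples, p=probs)
    # Rescale so the sampled submatrix is an unbiased sketch of X^T X.
    scale = 1.0 / np.sqrt(num_samples * probs[idx])
    return X[idx] * scale[:, None], idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=500)
    beta = pcr_exact(X, y, k=5)
    S, _ = length_squared_row_sample(X, num_samples=100, rng=1)
    print(beta[:3], S.shape)
```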
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the large covariance matrices that arise in the model.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Quantum speedup of leverage score sampling and its application [0.0]
In this paper, we propose a quantum algorithm to accelerate the computation of leverage scores.
As an application, we propose a new quantum algorithm for rigid regression problems with vector solution outputs.
arXiv Detail & Related papers (2023-01-15T14:40:18Z)
- Simulating Markovian open quantum systems using higher-order series expansion [1.713291434132985]
We present an efficient quantum algorithm for simulating the dynamics of Markovian open quantum systems.
Our algorithm is conceptually cleaner, and it only uses simple quantum primitives without compressed encoding.
arXiv Detail & Related papers (2022-12-05T06:02:50Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Quantum Algorithm For Estimating Eigenvalue [0.0]
We provide a quantum algorithm for estimating the largest eigenvalue in magnitude of a given Hermitian matrix.
Our quantum procedure can also yield exponential speedup compared to classical algorithms that solve the same problem.
arXiv Detail & Related papers (2022-11-11T13:02:07Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer-valued signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra [53.46106569419296]
We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression.
We argue that the previous quantum-inspired algorithms for these problems are doing leverage or ridge-leverage score sampling in disguise; a small leverage-score sampling sketch is given after this list.
arXiv Detail & Related papers (2020-11-09T01:13:07Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Towards quantum advantage via topological data analysis [0.0]
We study the quantum-algorithmic methods behind the algorithm for topological data analysis of Lloyd, Garnerone and Zanardi.
We provide a number of new quantum algorithms for problems such as rank estimation and complex network analysis.
arXiv Detail & Related papers (2020-05-06T06:31:24Z)
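As a companion to the "Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra" entry above, here is a minimal sketch (again Python/NumPy, with illustrative names of my own) of leverage-score sampling for least-squares regression, the primitive that entry says the quantum-inspired algorithms perform in disguise. It is a rough sketch-and-solve illustration, not that paper's dynamic data structure.

```python
import numpy as np

def leverage_scores(X):
    """Statistical leverage score of row i: the squared norm of the i-th
    row of U, where X = U S V^T is a thin SVD (equivalently, the i-th
    diagonal entry of the hat matrix X (X^T X)^+ X^T)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.einsum('ij,ij->i', U, U)

def leverage_sampled_regression(X, y, num_samples, rng=None):
    """Sketch-and-solve least squares: sample rows with probability
    proportional to their leverage scores, rescale, and solve the
    smaller problem."""
    rng = np.random.default_rng(rng)
    scores = leverage_scores(X)
    probs = scores / scores.sum()
    idx = rng.choice(X.shape[0], size=num_samples, p=probs)
    w = 1.0 / np.sqrt(num_samples * probs[idx])
    Xs, ys = X[idx] * w[:, None], y[idx] * w
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta
```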
This list is automatically generated from the titles and abstracts of the papers on this site.