More-efficient Quantum Multivariate Mean Value Estimator from Generalized Grover Gate
- URL: http://arxiv.org/abs/2504.06940v2
- Date: Thu, 10 Apr 2025 19:50:10 GMT
- Title: More-efficient Quantum Multivariate Mean Value Estimator from Generalized Grover Gate
- Authors: Letian Tang
- Abstract summary: We find an algorithm that uses $O\left(n \log \frac{d}{\delta}\right)$ samples to find a mean estimate $\vec{\tilde{\mu}}$. Our result remains not exactly optimal due to a $\log \frac{d}{\delta}$ term in our complexity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we present an efficient algorithm for multivariate mean value estimation. Our algorithm outperforms previous work by polylog factors and nearly saturates the known lower bound. More formally, given a random vector $\vec{X}$ of dimension $d$, we find an algorithm that uses $O\left(n \log \frac{d}{\delta}\right)$ samples to find a mean estimate $\vec{\tilde{\mu}}$ that differs from the true mean $\vec{\mu}$ by at most $\frac{\sqrt{\text{tr } \Sigma}}{n}$ in $\ell^\infty$ norm and hence at most $\frac{\sqrt{d \text{ tr } \Sigma}}{n}$ in $\ell^2$ norm, where $\Sigma$ is the covariance matrix of the components of the random vector. We also present another algorithm that uses smaller memory but costs an extra factor of $d^\frac{1}{4}$ in complexity. Consider the Grover gate, the unitary operator used in Grover's algorithm. It contains an oracle that applies a $\pm 1$ phase to each candidate in the search space. Previous work has demonstrated that when we substitute the oracle in the Grover gate with generic phases, the resulting operator becomes a good mean value estimator in a precise mathematical sense. We use this idea to build our algorithm. Our result remains not exactly optimal due to a $\log \frac{d}{\delta}$ term in our complexity, as opposed to something nicer such as $\log \frac{1}{\delta}$; this comes from the phase estimation primitive in our algorithm. So far, this primitive is the only major known method for tackling the problem, and moving beyond this idea seems hard. Our results demonstrate that the methodology based on the generalized Grover gate can be used to develop optimal algorithms without polylog overhead for various tasks related to mean value estimation.
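For intuition about the claimed speedup, a minimal classical baseline sketch (not the paper's quantum algorithm; the variable names, dimensions, and the diagonal-covariance choice are illustrative assumptions): with $n$ i.i.d. samples, the empirical mean deviates from the true mean by roughly $\sqrt{\text{tr } \Sigma / n}$ in $\ell^2$ norm, whereas the paper's quantum estimator targets $\sqrt{d \text{ tr } \Sigma}/n$, a quadratic improvement in the dependence on $n$.

```python
import numpy as np

# Hypothetical illustration: compare the classical empirical-mean error
# scale sqrt(tr(Sigma)/n) against the paper's quantum l2 target
# sqrt(d * tr(Sigma))/n. The distribution here (Gaussian with a
# diagonal covariance) is an assumed toy example.
rng = np.random.default_rng(0)
d, n = 16, 10_000
true_mean = np.zeros(d)
sigma_diag = np.linspace(0.5, 2.0, d)          # diagonal of Sigma
samples = rng.normal(true_mean, np.sqrt(sigma_diag), size=(n, d))

empirical_mean = samples.mean(axis=0)           # classical estimator
l2_err = np.linalg.norm(empirical_mean - true_mean)

classical_scale = np.sqrt(sigma_diag.sum() / n)        # sqrt(tr Sigma / n)
quantum_scale = np.sqrt(d * sigma_diag.sum()) / n      # sqrt(d tr Sigma) / n

print(f"empirical l2 error: {l2_err:.4f}")
print(f"classical scale:    {classical_scale:.4f}")
print(f"quantum target:     {quantum_scale:.4f}")
```

For these parameters the quantum target is smaller than the classical scale by a factor of about $\sqrt{n/d}$, matching the quadratic-in-$n$ advantage described in the abstract.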
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - A Sub-Quadratic Time Algorithm for Robust Sparse Mean Estimation [6.853165736531941]
We study the algorithmic problem of sparse mean estimation in the presence of adversarial outliers.
Our main contribution is an algorithm for robust sparse mean estimation which runs in subquadratic time using $\mathrm{poly}(k, \log d, 1/\epsilon)$ samples.
arXiv Detail & Related papers (2024-03-07T18:23:51Z) - Oja's Algorithm for Streaming Sparse PCA [7.059472280274011]
Oja's algorithm for Streaming Principal Component Analysis (PCA) for $n$ data points in a $d$-dimensional space achieves the same sin-squared error $O(r_{\mathsf{eff}}/n)$ as the offline algorithm in $O(d)$ space and $O(nd)$ time.
We show that a simple single-pass procedure that thresholds the output of Oja's algorithm can achieve the minimax error bound under some regularity conditions in $O(d)$ space and $O(nd)$ time.
arXiv Detail & Related papers (2024-02-11T16:36:48Z) - A Scalable Algorithm for Individually Fair K-means Clustering [77.93955971520549]
We present a scalable algorithm for the individually fair ($p$, $k$)-clustering problem introduced by Jung et al. and Mahabadi et al.
A clustering is then called individually fair if it has centers within distance $\delta(x)$ of $x$ for each $x \in P$.
We show empirically that not only is our algorithm much faster than prior work, but it also produces lower-cost solutions.
arXiv Detail & Related papers (2024-02-09T19:01:48Z) - Solving Dense Linear Systems Faster Than via Preconditioning [1.8854491183340518]
We show that our algorithm has an $\tilde{O}(n^2)$ runtime when $k = O(n^{0.729})$.
Our main algorithm can be viewed as a randomized block coordinate descent method.
arXiv Detail & Related papers (2023-12-14T12:53:34Z) - Online Robust Mean Estimation [37.746091744197656]
We study the problem of high-dimensional robust mean estimation in an online setting.
We prove two main results about online robust mean estimation in this model.
arXiv Detail & Related papers (2023-10-24T15:28:43Z) - Weak Schur sampling with logarithmic quantum memory [0.0]
We introduce a new algorithm for the task of weak Schur sampling.
Our algorithm efficiently determines both the Young label which indexes the irreducible representations and the multiplicity label of the symmetric group.
arXiv Detail & Related papers (2023-09-21T10:02:46Z) - Do you know what q-means? [45.810803542748495]
We present an improved version of the quantum algorithm originally proposed by Kerenidis, Landman, Luongo, and Prakash (NeurIPS'). Our algorithm does not rely on the quantum linear algebra primitives of prior work, but instead only uses QRAM to prepare simple states. We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O(\ldots)$.
arXiv Detail & Related papers (2023-08-18T17:52:12Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale, and demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Mean estimation when you have the source code; or, quantum Monte Carlo methods [2.9697051524971743]
We give a quantum procedure that runs the code $O(n)$ times and returns an estimate $\widehat{\boldsymbol{\mu}}$ for $\boldsymbol{\mu}$.
This dependence on $n$ is optimal for quantum algorithms.
arXiv Detail & Related papers (2022-08-16T05:34:26Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - On Efficient Low Distortion Ultrametric Embedding [18.227854382422112]
A widely-used method to preserve the underlying hierarchical structure of the data is to find an embedding of the data into a tree or an ultrametric.
In this paper, we provide a new algorithm which takes as input a set of points in $\mathbb{R}^d$ and (for a universal constant $\rho > 1$) outputs an ultrametric $\Delta$.
We show that the output of our algorithm is comparable to the output of the linkage algorithms while achieving a much faster running time.
arXiv Detail & Related papers (2020-08-15T11:06:45Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.