Another generalization of Hadamard test: Optimal sample complexities for learning functions on the unitary group
- URL: http://arxiv.org/abs/2509.05710v2
- Date: Tue, 09 Sep 2025 04:58:35 GMT
- Title: Another generalization of Hadamard test: Optimal sample complexities for learning functions on the unitary group
- Authors: Daiki Suruga
- Abstract summary: Estimating properties of unknown unitary operations is a fundamental task in quantum information science. We present a unified framework for the sample-efficient estimation of arbitrary square-integrable functions. Our technique generalizes the Hadamard test and leverages tools from representation theory, yielding both lower and upper bounds on sample complexity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating properties of unknown unitary operations is a fundamental task in quantum information science. While full unitary tomography requires a number of samples of the unknown unitary that scales linearly with the dimension (and hence exponentially with the number of qubits), estimating specific functions of a unitary can be significantly more efficient. In this paper, we present a unified framework for the sample-efficient estimation of arbitrary square-integrable functions $f: \mathbf{U}(d) \to \mathbb{C}$, using only access to the controlled-unitary operation. We first provide a tight characterization of the optimal sample complexity when the accuracy is measured by the averaged bias over the unitary group $\mathbf{U}(d)$. We then construct a sample-efficient estimation algorithm that becomes optimal under the Probably Approximately Correct (PAC) learning criterion for various classes of functions. Applications include optimal estimation of matrix elements of irreducible representations, the trace, determinant, and general polynomial functions on $\mathbf{U}(d)$. Our technique generalizes the Hadamard test and leverages tools from representation theory, yielding both lower and upper bounds on sample complexity.
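The paper builds on the standard Hadamard test, which estimates $\mathrm{Re}\,\langle\psi|U|\psi\rangle$ using one ancilla qubit and a single controlled-$U$ call per sample. As background for the abstract above, here is a minimal statevector sketch of that standard test (not the paper's generalized construction); the dimension, random unitary, and state are illustrative choices:

```python
import numpy as np

def hadamard_test_prob0(U, psi):
    """P(ancilla = 0) in the Hadamard test: H, controlled-U, H on the ancilla.

    For this circuit, P(0) = (1 + Re<psi|U|psi>) / 2.
    """
    d = len(psi)
    # Ancilla is the leading qubit: full state = |0> (x) |psi>.
    state = np.kron(np.array([1.0, 0.0]), psi).astype(complex)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(d)
    # Controlled-U: identity on the ancilla-0 block, U on the ancilla-1 block.
    CU = np.block([[I, np.zeros((d, d))], [np.zeros((d, d)), U]])
    state = np.kron(H, I) @ state   # Hadamard on ancilla
    state = CU @ state              # controlled-U on the system
    state = np.kron(H, I) @ state   # Hadamard on ancilla again
    return float(np.sum(np.abs(state[:d]) ** 2))  # ancilla = |0> amplitudes

# Illustrative instance: a Haar-ish random unitary (QR of a Gaussian matrix)
# and a random normalized state in dimension d = 4.
rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(A)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

p0 = hadamard_test_prob0(U, psi)
estimate = 2 * p0 - 1                        # recover Re<psi|U|psi> from P(0)
exact = float(np.real(np.vdot(psi, U @ psi)))
```

On hardware, `p0` would be estimated from repeated ancilla measurements, so the bias-versus-sample-count trade-off that the paper analyzes enters through how many shots are spent per function evaluation.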
Related papers
- The Sample Complexity of Learning Lipschitz Operators with respect to Gaussian Measures [1.037768322019687]
We study the approximation of Lipschitz operators with respect to Gaussian measures, considering general reconstruction strategies from $m$ arbitrary (potentially adaptive) linear samples.
arXiv Detail & Related papers (2024-10-30T20:32:30Z) - Global Optimization of Gaussian Process Acquisition Functions Using a Piecewise-Linear Kernel Approximation [5.23716716929969]
This paper investigates a mixed-integer quadratic programming formulation (Kernel-MIQP) for global acquisition-function optimization. We analyze the theoretical bounds of the proposed approximation and empirically demonstrate the framework.
arXiv Detail & Related papers (2024-10-22T10:56:52Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - On Computationally Efficient Learning of Exponential Family
Distributions [33.229944519289795]
We focus on the setting where the support as well as the natural parameters are appropriately bounded.
Our method achieves the order-optimal sample complexity of $O(\log(k)/\alpha^2)$ when tailored for node-wise-sparse random fields.
arXiv Detail & Related papers (2023-09-12T17:25:32Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and
Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Large Dimensional Independent Component Analysis: Statistical Optimality
and Computational Tractability [13.104413212606577]
We investigate the optimal statistical performance and the impact of computational constraints for independent component analysis (ICA)
We show that the optimal sample complexity is linear in dimensionality.
We develop computationally tractable estimates that attain both the optimal sample complexity and minimax optimal rates of convergence.
arXiv Detail & Related papers (2023-03-31T15:46:30Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - The Hypervolume Indicator Hessian Matrix: Analytical Expression,
Computational Time Complexity, and Sparsity [4.523133864190258]
This paper establishes the analytical expression of the Hessian matrix of the mapping from a (fixed size) collection of $n$ points in the $d$-dimensional decision space to the scalar hypervolume indicator value.
The Hessian matrix plays a crucial role in second-order methods, such as the Newton-Raphson optimization method, and it can be used for the verification of local optimal sets.
arXiv Detail & Related papers (2022-11-08T11:24:18Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Quantum Goemans-Williamson Algorithm with the Hadamard Test and
Approximate Amplitude Constraints [62.72309460291971]
We introduce a variational quantum algorithm for Goemans-Williamson algorithm that uses only $n+1$ qubits.
Efficient optimization is achieved by encoding the objective matrix as a properly parameterized unitary conditioned on an auxiliary qubit.
We demonstrate the effectiveness of our protocol by devising an efficient quantum implementation of the Goemans-Williamson algorithm for various NP-hard problems.
arXiv Detail & Related papers (2022-06-30T03:15:23Z) - Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observed decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.