Instance-Optimal Matrix Multiplicative Weight Update and Its Quantum Applications
- URL: http://arxiv.org/abs/2509.08911v1
- Date: Wed, 10 Sep 2025 18:15:41 GMT
- Title: Instance-Optimal Matrix Multiplicative Weight Update and Its Quantum Applications
- Authors: Weiyuan Gong, Tongyang Li, Xinzhao Wang, Zhiyu Zhang
- Abstract summary: We present an improved algorithm achieving the instance-optimal regret bound of $O(\sqrt{T\cdot S(X||d^{-1}I_d)})$. We show that it outperforms the state of the art for learning quantum states corrupted by depolarization noise, random quantum states, and Gibbs states.
- Score: 18.700744251450313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Matrix Multiplicative Weight Update (MMWU) is a seminal online learning algorithm with numerous applications. Applied to the matrix version of the Learning from Expert Advice (LEA) problem on the $d$-dimensional spectraplex, it is well known that MMWU achieves the minimax-optimal regret bound of $O(\sqrt{T\log d})$, where $T$ is the time horizon. In this paper, we present an improved algorithm achieving the instance-optimal regret bound of $O(\sqrt{T\cdot S(X||d^{-1}I_d)})$, where $X$ is the comparator in the regret, $I_d$ is the identity matrix, and $S(\cdot||\cdot)$ denotes the quantum relative entropy. Furthermore, our algorithm has the same computational complexity as MMWU, indicating that the improvement in the regret bound is ``free''. Technically, we first develop a general potential-based framework for matrix LEA, with MMWU being its special case induced by the standard exponential potential. Then, the crux of our analysis is a new ``one-sided'' Jensen's trace inequality built on a Laplace transform technique, which allows the application of general potential functions beyond exponential to matrix LEA. Our algorithm is finally induced by an optimal potential function from the vector LEA problem, based on the imaginary error function. Complementing the above, we provide a memory lower bound for matrix LEA, and explore the applications of our algorithm in quantum learning theory. We show that it outperforms the state of the art for learning quantum states corrupted by depolarization noise, random quantum states, and Gibbs states. In addition, applying our algorithm to linearized convex losses enables predicting nonlinear quantum properties, such as purity, quantum virtual cooling, and R\'{e}nyi-$2$ correlation.
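As background, the baseline MMWU iterate that the paper improves upon can be sketched in a few lines: the prediction at each round is the Gibbs state $\exp(-\eta \sum_s L_s)$ normalized to unit trace, with the standard minimax tuning $\eta = \sqrt{\log d / T}$. This is a minimal NumPy/SciPy sketch of classical MMWU only, not the paper's instance-optimal algorithm.

```python
import numpy as np
from scipy.linalg import expm

def mmwu_predict(loss_sum, eta):
    # Matrix-exponential (Gibbs) weights: exp(-eta * cumulative loss),
    # normalized to unit trace so the iterate stays on the spectraplex.
    M = expm(-eta * loss_sum)
    return M / np.trace(M)

def mmwu_run(losses, eta):
    # Play MMWU over a sequence of symmetric loss matrices; return the
    # algorithm's cumulative loss and the loss of the best fixed
    # comparator, which is lambda_min of the summed losses
    # (attained by the rank-1 projector onto the bottom eigenvector).
    d = losses[0].shape[0]
    loss_sum = np.zeros((d, d))
    alg_loss = 0.0
    for L in losses:
        X = mmwu_predict(loss_sum, eta)   # predict before observing L
        alg_loss += float(np.trace(X @ L))
        loss_sum += L
    best_loss = float(np.linalg.eigvalsh(loss_sum)[0])
    return alg_loss, best_loss
```

The gap `alg_loss - best_loss` is the regret, which for this tuning is bounded by $O(\sqrt{T\log d})$; the paper's contribution is replacing $\log d$ with the comparator-dependent quantity $S(X||d^{-1}I_d)$ at no extra computational cost.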
Related papers
- Reducing the Complexity of Matrix Multiplication to $O(N^2log_2N)$ by an Asymptotically Optimal Quantum Algorithm [3.5649163180026484]
This work presents a quantum kernel-based matrix multiplication algorithm (QKMM). It achieves an asymptotically optimal computational complexity of $O(N^2 \log_2 N)$, outperforming the best known classical complexity of $O(N^{2.371552})$.
arXiv Detail & Related papers (2026-02-05T10:58:52Z) - A Polynomial-time Algorithm for Online Sparse Linear Regression with Improved Regret Bound under Weaker Conditions [75.69959433669244]
We study the problem of online sparse linear regression (OSLR), where the algorithms are restricted to accessing only $k$ out of the $d$ features per instance for prediction. We introduce a new polynomial-time algorithm, which significantly improves previous regret bounds.
arXiv Detail & Related papers (2025-10-31T05:02:33Z) - Matrix Inversion by Quantum Walk [0.0]
The HHL algorithm for matrix inversion is a landmark algorithm in quantum computation. We substantially simplify the algorithm by replacing the phase estimations with a continuous-time quantum walk.
arXiv Detail & Related papers (2025-08-08T18:00:05Z) - A probabilistic quantum algorithm for Lyapunov equations and matrix inversion [0.0]
We present a probabilistic quantum algorithm for preparing mixed states proportional to the solutions of Lyapunov equations. At each step, the algorithm either returns the current state, applies a trace non-increasing completely positive map, or restarts, depending on the outcomes of a biased coin flip and an ancilla measurement. In its most general form, the algorithm generates mixed states which approximate matrix-valued weighted sums and integrals.
arXiv Detail & Related papers (2025-08-06T17:52:06Z) - Quantum Non-Linear Bandit Optimization [2.7718037613443127]
We study non-linear bandit optimization, where the learner maximizes a black-box function with a zeroth-order function oracle. We propose the new Q-NLB-UCB algorithm, which uses a novel parametric function approximation technique. We prove that the regret bound of Q-NLB-UCB is not only $O(\mathrm{polylog}\,T)$ but also input-dimension-free.
arXiv Detail & Related papers (2025-03-04T21:53:33Z) - Q-Newton: Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent [37.59299233291882]
We propose Q-Newton, a hybrid quantum-classical scheduler for accelerating neural network training with Newton's GD. Q-Newton utilizes a streamlined scheduling module that coordinates between quantum and classical linear solvers. Our evaluation showcases the potential for Q-Newton to significantly reduce the total training time compared to commonly used quantum machines.
arXiv Detail & Related papers (2024-04-30T23:55:03Z) - Provably Efficient Exploration in Quantum Reinforcement Learning with Logarithmic Worst-Case Regret [23.418957451727255]
We propose a novel UCRL-style algorithm for quantum reinforcement learning (RL).
We establish an $\mathcal{O}(\mathrm{poly}(S, A, H, \log T))$ worst-case regret for it, where $T$ is the number of episodes.
Specifically, we develop a quantum algorithm based on value target regression (VTR) for linear mixture MDPs with $d$-dimensional linear representation.
arXiv Detail & Related papers (2023-02-21T16:23:11Z) - Refined Regret for Adversarial MDPs with Linear Function Approximation [50.00022394876222]
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes.
This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$ in the same setting.
arXiv Detail & Related papers (2023-01-30T14:37:21Z) - Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks.
Existing methods are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT).
arXiv Detail & Related papers (2022-07-05T03:08:27Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z) - Hutch++: Optimal Stochastic Trace Estimation [75.45968495410048]
We introduce a new randomized algorithm, Hutch++, which computes a $(1 \pm \epsilon)$ approximation to $\mathrm{tr}(A)$ for any positive semidefinite (PSD) matrix $A$.
We show that it significantly outperforms Hutchinson's method in experiments.
arXiv Detail & Related papers (2020-10-19T16:45:37Z) - Quantum Gram-Schmidt Processes and Their Application to Efficient State Read-out for Quantum Algorithms [87.04438831673063]
We present an efficient read-out protocol that yields the classical vector form of the generated state.
Our protocol suits the case where the output state lies in the row space of the input matrix.
One of our technical tools is an efficient quantum algorithm for performing the Gram-Schmidt orthonormalization procedure.
arXiv Detail & Related papers (2020-04-14T11:05:26Z)
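For reference, the classical Gram-Schmidt orthonormalization that the last entry's quantum algorithm accelerates can be sketched as follows. This is a plain NumPy sketch of the textbook procedure, independent of that paper's quantum protocol; the `tol` threshold for skipping dependent columns is an illustrative choice.

```python
import numpy as np

def gram_schmidt(A, tol=1e-12):
    # Classical Gram-Schmidt: orthonormalize the columns of A,
    # skipping columns that are numerically linearly dependent
    # on the basis built so far.
    basis = []
    for v in A.T:                     # iterate over columns of A
        w = v.astype(float).copy()
        for q in basis:
            w -= np.dot(q, w) * q     # remove the component along q
        n = np.linalg.norm(w)
        if n > tol:                   # keep only independent directions
            basis.append(w / n)
    if not basis:
        return np.empty((A.shape[0], 0))
    return np.column_stack(basis)
```

The output `Q` satisfies `Q.T @ Q = I` on its retained columns; the quantum version's advantage lies in performing this orthonormalization on amplitude-encoded vectors rather than explicit classical ones.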
This list is automatically generated from the titles and abstracts of the papers in this site.