Mitigating the barren plateau problem in linear optics
- URL: http://arxiv.org/abs/2510.02430v1
- Date: Thu, 02 Oct 2025 18:00:00 GMT
- Title: Mitigating the barren plateau problem in linear optics
- Authors: Matthew D. Horner,
- Abstract summary: We demonstrate a significant speedup of variational quantum algorithms that use discrete variable boson sampling. This results in a cost landscape with fewer local minima and barren plateaus regardless of the problem, ansatz or circuit layout.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate a significant speedup of variational quantum algorithms that use discrete variable boson sampling when the parametrised phase shifters are constrained to have two distinct eigenvalues. This results in a cost landscape with fewer local minima and barren plateaus regardless of the problem, ansatz or circuit layout. This works without reliance on any classical pre-processing and allows for the fast gradient-free Rotosolve algorithm to be used. We propose three ways to achieve this by using either non-linear optics, measurement-induced non-linearities, or entangled resource states simulating fermionic statistics. The latter two require linear optics only, allowing for implementation with widely-available technology today. We show this outperforms the best-known boson sampling variational algorithm for all tests we conducted.
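Constraining each parametrised phase shifter's generator to two distinct eigenvalues makes the cost sinusoidal in every individual parameter, which is the structure the gradient-free Rotosolve algorithm exploits. The sketch below is a minimal, illustrative Python version of the standard Rotosolve coordinate update (three cost evaluations per parameter, no gradients or learning rate); the `cost` callable, parameter vector, and toy objective are placeholders, not the authors' implementation.

```python
import numpy as np

def rotosolve_sweep(cost, theta, num_sweeps=1):
    """Coordinate-wise Rotosolve: for a cost that is sinusoidal in each
    individual parameter, C(theta_d) = A*sin(theta_d + B) + K, the exact
    per-parameter minimiser follows from three cost evaluations."""
    theta = np.array(theta, dtype=float)
    for _ in range(num_sweeps):
        for d in range(len(theta)):
            phi = theta[d]

            def c(value):
                trial = theta.copy()
                trial[d] = value
                return cost(trial)

            c0 = c(phi)
            cp = c(phi + np.pi / 2)
            cm = c(phi - np.pi / 2)
            # Closed-form minimiser of A*sin(theta_d + B) + K.
            theta[d] = phi - np.pi / 2 - np.arctan2(2 * c0 - cp - cm, cp - cm)
            # Wrap back into [-pi, pi) for readability.
            theta[d] = (theta[d] + np.pi) % (2 * np.pi) - np.pi
    return theta

# Toy usage: a separable cost that is sinusoidal in each parameter,
# standing in for an expectation value estimated from boson-sampling shots.
rng = np.random.default_rng(0)
a, b = rng.normal(size=4), rng.normal(size=4)

def toy_cost(t):
    return float(np.sum(a * np.sin(t) + b * np.cos(t)))

print(rotosolve_sweep(toy_cost, np.zeros(4), num_sweeps=3))
```

When a generator has more than two eigenvalues the single-parameter cost is no longer a pure sinusoid and this closed-form update no longer applies directly, which is the constraint the paper's three proposals are designed to enforce.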
Related papers
- Exact gradients for linear optics with single photons [38.74529485263391]
We derive an analytical formula for the gradients in quantum circuits with respect to phase shifters via a generalized parameter-shift rule. We propose two strategies through which one can reduce the number of shifts in the expression, and hence reduce the overall sample complexity. Numerically, we show that this generalized parameter-shift rule can converge to the minimum of a cost function with fewer parameter update steps than alternative techniques.
arXiv Detail & Related papers (2024-09-24T18:02:06Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [61.98523595657983]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms. An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- A Neural Network Warm-Start Approach for the Inverse Acoustic Obstacle Scattering Problem [7.624866197576227]
We present a neural network warm-start approach for solving the inverse scattering problem.
An initial guess for the optimization problem is obtained using a trained neural network.
The algorithm remains robust to noise in the scattered field measurements and also converges to the true solution for limited aperture data.
arXiv Detail & Related papers (2022-12-16T22:18:48Z)
- Retrieving space-dependent polarization transformations via near-optimal quantum process tomography [55.41644538483948]
We investigate the application of genetic and machine learning approaches to tomographic problems.
We find that the neural network-based scheme provides a significant speed-up that may be critical in applications requiring characterization in real time.
We expect these results to lay the groundwork for the optimization of tomographic approaches in more general quantum processes.
arXiv Detail & Related papers (2022-10-27T11:37:14Z)
- Non-convex Quadratic Programming Using Coherent Optical Networks [0.0]
We numerically benchmark solving box-constrained quadratic programming (BoxQP) problems using these settings.
In both cases the optical network is capable of solving BoxQP problems three orders of magnitude faster than a state-of-the-art classical solver.
arXiv Detail & Related papers (2022-09-09T17:29:57Z)
- Efficient and Flexible Sublabel-Accurate Energy Minimization [62.50191141358778]
We address the problem of minimizing a class of energy functions consisting of data and smoothness terms.
Existing continuous optimization methods can find sublabel-accurate solutions, but they are not efficient for large label spaces.
We propose an efficient sublabel-accurate method that utilizes the best properties of both continuous and discrete models.
arXiv Detail & Related papers (2022-06-20T06:58:55Z)
- Stochastic Gradient Methods with Preconditioned Updates [47.23741709751474]
There are several algorithms for such problems, but existing methods often work poorly when the problem is badly scaled and/or ill-conditioned.
Here we include a preconditioner based on Hutchinson's approach to approximating the diagonal of the Hessian.
We prove convergence both when only smoothness is assumed and when the PL condition holds.
arXiv Detail & Related papers (2022-06-01T07:38:08Z)
- Error-Correcting Neural Networks for Two-Dimensional Curvature Computation in the Level-Set Method [0.0]
We present an error-neural-modeling-based strategy for approximating two-dimensional curvature in the level-set method.
Our main contribution is a redesigned hybrid solver that relies on numerical schemes to enable machine-learning operations on demand.
arXiv Detail & Related papers (2022-01-22T05:14:40Z)
- Quantum state preparation and one qubit logic from third-order nonlinear interactions [0.0]
We present a study on preparing and manipulating path-like temporal-mode (TM) qubits based on third-order nonlinear interactions.
Our study allows for experimentally feasible proposals capable of controllable arbitrary qubit transformations.
arXiv Detail & Related papers (2021-03-06T04:15:15Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single task and multitask square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)