Comparing and correcting robustness metrics for quantum optimal control
- URL: http://arxiv.org/abs/2602.10349v1
- Date: Tue, 10 Feb 2026 22:44:16 GMT
- Title: Comparing and correcting robustness metrics for quantum optimal control
- Authors: Andrew T. Kamen, Samuel Fine, Bikrant Bhattacharyya, Frederic T. Chong, Andy J. Goldschmidt
- Abstract summary: We present a novel, systematic study demonstrating important numerical differences between adjoint end-point and toggling-frame approaches. We also introduce a critical discretization correction to the widely-used toggling-frame estimator. Our approach uniquely handles control and fidelity constraints while cleanly isolating robustness for dedicated optimization.
- Score: 1.6927349660459692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Control pulses that nominally optimize fidelity are sensitive to routine hardware drift and modeling errors. Robust quantum optimal control seeks error-insensitive control pulses that maintain fidelity thresholds and obey hardware constraints. Distinct numerical approximations to the first-order error susceptibility include adjoint end-point and toggling-frame approaches. Although theoretically equivalent, we provide a novel, systematic study demonstrating important numerical differences between these two approaches. We also introduce a critical discretization correction to the widely-used toggling-frame robustness estimator, measurably improving its estimate of first-order error susceptibility. We accomplish our study by positioning robustness as a first-class objective within direct, constrained optimal control. Our approach uniquely handles control and fidelity constraints while cleanly isolating robustness for dedicated optimization. In both single- and two-qubit examples under realistic constraints, our approach provides an analytic edge for obtaining precise, physics-informed robustness.
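The toggling-frame idea mentioned in the abstract can be sketched in a few lines: for a piecewise-constant pulse, the discrete first-order error susceptibility is the norm of the accumulated toggling-frame error Hamiltonian, A = Σ_k U_k† H_err U_k Δt, where U_k is the cumulative propagator after segment k. This is a minimal illustration only, not the authors' corrected estimator; the σ_x drive, σ_z error model, and all function names are assumptions for the example.

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def step_propagator(amp, dt):
    """Exact 2x2 propagator exp(-i * amp * SX * dt) for a sigma_x drive."""
    theta = amp * dt
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * SX

def toggling_frame_susceptibility(amplitudes, dt, h_err):
    """Discrete toggling-frame estimate of first-order error susceptibility:
    the Frobenius norm of A = sum_k U_k^dag @ h_err @ U_k * dt, where U_k is
    the cumulative propagator after the k-th piecewise-constant segment."""
    U = np.eye(2, dtype=complex)
    A = np.zeros((2, 2), dtype=complex)
    for amp in amplitudes:
        U = step_propagator(amp, dt) @ U
        A += U.conj().T @ h_err @ U * dt
    return np.linalg.norm(A)

# A flat pulse steadily accumulates first-order sensitivity to a
# sigma_z error; robust pulse shaping tries to drive this norm to zero.
dt = 0.01
flat = np.full(200, 0.5)
print(toggling_frame_susceptibility(flat, dt, SZ))
```

A naive Riemann sum like the one above is exactly the kind of discretization the paper argues needs correction; shrinking `dt` reduces, but does not remove, the discretization bias.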
Related papers
- Rectified Robust Policy Optimization for Model-Uncertain Constrained Reinforcement Learning without Strong Duality [53.525547349715595]
We propose a novel primal-only algorithm called Rectified Robust Policy Optimization (RRPO). RRPO operates directly on the primal problem without relying on dual formulations. We show convergence to an approximately optimal feasible policy with complexity matching the best-known lower bound.
arXiv Detail & Related papers (2025-08-24T16:59:38Z) - C-Learner: Constrained Learning for Causal Inference [4.370964009390564]
We propose a novel debiasing approach that achieves the best weighting of both worlds, producing stable plug-in estimates. Our constrained learning framework solves for the best plug-in estimator under the constraint that the first-order error with respect to the plugged-in quantity is zero.
arXiv Detail & Related papers (2024-05-15T16:38:28Z) - Robustness of Dynamic Quantum Control: Differential Sensitivity Bound [0.0]
A new robustness measure based on the differential sensitivity of the gate fidelity error to parametric uncertainties is introduced.
It is shown how a maximum allowable perturbation over a set of Hamiltonian uncertainties that guarantees a given fidelity error, can be reliably computed.
arXiv Detail & Related papers (2023-12-30T18:36:53Z) - Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision language models.
We show that both OOD classification and OOD calibration errors have a shared upper bound consisting of two terms of ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z) - Actively Learning Reinforcement Learning: A Stochastic Optimal Control Approach [3.453622106101339]
We propose a framework towards achieving two intertwined objectives: (i) equipping reinforcement learning with active exploration and deliberate information gathering, and (ii) overcoming the computational intractability of optimal control law.
We approach both objectives by using reinforcement learning to compute the optimal control law.
Unlike fixed exploration and exploitation balance, caution and probing are employed automatically by the controller in real-time, even after the learning process is terminated.
arXiv Detail & Related papers (2023-09-18T18:05:35Z) - Constrained Reinforcement Learning using Distributional Representation for Trustworthy Quadrotor UAV Tracking Control [2.325021848829375]
We propose a novel trajectory tracker integrating a Distributional Reinforcement Learning disturbance estimator for unknown aerodynamic effects.
The proposed estimator, the Constrained Distributional Reinforced disturbance estimator (ConsDRED), accurately identifies uncertainties between true and estimated values of aerodynamic effects.
We demonstrate our system reduces accumulative tracking errors by at least 70% compared with the recent state of the art.
arXiv Detail & Related papers (2023-02-22T23:15:56Z) - Optimal control for state preparation in two-qubit open quantum systems driven by coherent and incoherent controls via GRAPE approach [77.34726150561087]
We consider a model of two qubits driven by coherent and incoherent time-dependent controls.
The dynamics of the system are governed by a Gorini-Kossakowski-Sudarshan-Lindblad master equation.
We study evolution of the von Neumann entropy, purity, and one-qubit reduced density matrices under optimized controls.
arXiv Detail & Related papers (2022-11-04T15:20:18Z) - Robust Quantum Control: Analysis & Synthesis via Averaging [0.2320417845168326]
An approach is presented for robustness analysis and quantum (unitary) control synthesis based on the classic method of averaging.
The result is a multicriterion optimization that trades off the nominal (uncertainty-free) fidelity against a well-known robustness measure: the size of an interaction (error) Hamiltonian.
arXiv Detail & Related papers (2022-08-30T12:09:40Z) - Statistically Characterising Robustness and Fidelity of Quantum Controls and Quantum Control Algorithms [0.5599792629509229]
The robustness-infidelity measure (RIM$_p$) is introduced to quantify the robustness and fidelity of a controller.
Based on the RIM$_p$, an algorithmic robustness-infidelity measure (ARIM) is developed to quantify the expected robustness and fidelity of controllers.
arXiv Detail & Related papers (2022-07-16T01:19:57Z) - Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
By definition, SCORE facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z) - Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z) - Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.