When Does Adaptation Win? Scaling Laws for Meta-Learning in Quantum Control
- URL: http://arxiv.org/abs/2601.18973v3
- Date: Wed, 04 Feb 2026 00:14:39 GMT
- Title: When Does Adaptation Win? Scaling Laws for Meta-Learning in Quantum Control
- Authors: Nima Leclerc, Chris Miller, Nicholas Brawand
- Abstract summary: Quantum hardware suffers from intrinsic device heterogeneity and environmental drift. We derive a scaling law lower bound for meta-learning showing that the adaptation gain saturates exponentially with gradient steps. Further validation on classical linear-quadratic control confirms these laws emerge from general optimization geometry rather than quantum-specific physics.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum hardware suffers from intrinsic device heterogeneity and environmental drift, forcing practitioners to choose between suboptimal non-adaptive controllers or costly per-device recalibration. We derive a scaling law lower bound for meta-learning showing that the adaptation gain (expected fidelity improvement from task-specific gradient steps) saturates exponentially with gradient steps and scales linearly with task variance, providing a quantitative criterion for when adaptation justifies its overhead. Validation on quantum gate calibration shows negligible benefits for low-variance tasks but $>40\%$ fidelity gains on two-qubit gates under extreme out-of-distribution conditions (10$\times$ the training noise), with implications for reducing per-device calibration time on cloud quantum processors. Further validation on classical linear-quadratic control confirms these laws emerge from general optimization geometry rather than quantum-specific physics. Together, these results offer a transferable framework for decision-making in adaptive control.
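The abstract's decision criterion can be illustrated with a small numerical sketch. The specific functional form below (gain linear in task variance and saturating exponentially in gradient steps, with hypothetical constants `a` and `lam`) is an assumption for illustration, not the paper's exact bound:

```python
import math

def adaptation_gain(k, task_variance, a=1.0, lam=0.5):
    """Hypothetical adaptation-gain model: linear in task variance,
    saturating exponentially in the number of gradient steps k."""
    return a * task_variance * (1.0 - math.exp(-lam * k))

def adaptation_worthwhile(task_variance, overhead, max_steps=20, **kw):
    """Return the smallest step count whose gain exceeds the adaptation
    overhead, or None if adaptation never pays off within max_steps."""
    for k in range(1, max_steps + 1):
        if adaptation_gain(k, task_variance, **kw) > overhead:
            return k
    return None

# Low-variance task: the gain saturates below the overhead -> don't adapt.
print(adaptation_worthwhile(task_variance=0.01, overhead=0.05))  # None
# High-variance (out-of-distribution) task: adaptation pays after one step.
print(adaptation_worthwhile(task_variance=0.5, overhead=0.05))   # 1
```

Because the gain saturates, the decision reduces to comparing the variance-dependent asymptote against the fixed per-device overhead, matching the paper's qualitative conclusion that low-variance tasks see negligible benefit.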
Related papers
- QTALE: Quantization-Robust Token-Adaptive Layer Execution for LLMs [0.0]
Large language models (LLMs) demand substantial computational and memory resources. We propose QTALE, a novel framework that enables seamless integration of token-adaptive execution with quantization.
arXiv Detail & Related papers (2026-02-11T02:19:11Z) - Continual Quantum Architecture Search with Tensor-Train Encoding: Theory and Applications to Signal Processing [68.35481158940401]
CL-QAS is a continual quantum architecture search framework. It mitigates the challenges of costly amplitude encoding and forgetting in variational quantum circuits. It achieves controllable robustness and expressivity, sample-efficient generalization, and smooth convergence without barren plateaus.
arXiv Detail & Related papers (2026-01-10T02:36:03Z) - Escaping Barren Plateaus in Variational Quantum Algorithms Using Negative Learning Rate in Quantum Internet of Things [8.98664000532717]
Variational Quantum Algorithms (VQAs) are becoming the primary computational primitive for next-generation quantum computers. Under device-constrained execution conditions, the scalability of learning is severely limited by barren plateaus. We present a novel approach for escaping barren plateaus by incorporating negative learning rates into the optimization process.
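The core idea of that paper can be sketched schematically: when the gradient norm stalls (a plateau indicator), briefly reverse the learning-rate sign to move out of the flat region. This is an illustrative toy, not the paper's actual algorithm; the plateau threshold and random kick are assumptions:

```python
import numpy as np

def descend_with_negative_lr(grad_fn, theta, lr=0.1, neg_lr=-0.3,
                             plateau_tol=1e-3, steps=200):
    """Gradient descent that briefly reverses the learning-rate sign
    when the gradient norm stalls (a schematic plateau-escape move)."""
    rng = np.random.default_rng(0)
    for _ in range(steps):
        g = grad_fn(theta)
        if np.linalg.norm(g) < plateau_tol:
            # Plateau detected: negative learning rate steps uphill,
            # and a small random kick restores gradient signal.
            theta = theta - neg_lr * g + 0.05 * rng.standard_normal(theta.shape)
        else:
            theta = theta - lr * g
    return theta
```

On a well-behaved objective the negative-rate branch only fires near flat regions, so ordinary descent dominates; on a genuine plateau it injects the displacement needed for gradients to become informative again.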
arXiv Detail & Related papers (2025-11-28T03:32:33Z) - CAGE: Curvature-Aware Gradient Estimation For Accurate Quantization-Aware Training [73.46600457802693]
We introduce a new method that counteracts the loss induced by quantization. CAGE significantly improves upon state-of-the-art methods in terms of accuracy, for similar computational cost. For QAT pre-training of Llama models, CAGE at 4 bits (W4A4) matches the accuracy achieved by the prior best method.
arXiv Detail & Related papers (2025-10-21T16:33:57Z) - End-to-End On-Device Quantization-Aware Training for LLMs at Inference Cost [53.25965863436039]
Quantization-aware training (QAT) provides a more principled solution, but its reliance on backpropagation incurs prohibitive memory costs. We propose ZeroQAT, a zeroth-order optimization-based QAT framework that supports both weight and activation quantization. Experiments show that ZeroQAT consistently outperforms representative PTQ and QAT baselines while requiring significantly less memory.
arXiv Detail & Related papers (2025-08-21T01:18:27Z) - Sculpting Quantum Landscapes: Fubini-Study Metric Conditioning for Geometry Aware Learning in Parameterized Quantum Circuits [0.0]
We present a novel meta-learning framework called Sculpture that explicitly conditions the Fubini-Study metric tensor to mitigate barren plateaus in variational quantum algorithms. Our theoretical analysis identifies the logarithmic condition number of the Fubini-Study metric as a critical geometric quantity governing trainability, optimization dynamics, and generalization.
arXiv Detail & Related papers (2025-06-27T06:30:33Z) - Compressing Sine-Activated Low-Rank Adapters through Post-Training Quantization [25.441086332799348]
Low-Rank Adaptation (LoRA) has become a standard approach for parameter-efficient fine-tuning. We extend the sinusoidal transformation framework to quantized LoRA adapters.
arXiv Detail & Related papers (2025-05-28T02:15:15Z) - Adaptive folding and noise filtering for robust quantum error mitigation [0.0]
This paper presents noise-adaptive folding, a technique that enhances zero-noise extrapolation (ZNE). We introduce two filtering methods: one relies on measuring error strength, while the other utilizes statistical filtering to improve the extrapolation process. Our findings demonstrate that these adaptive methods effectively strengthen error mitigation against noise fluctuations, thereby enhancing the precision and reliability of quantum computations.
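The adaptive-folding work builds on the standard ZNE step: measure an expectation value at amplified noise levels (e.g., via unitary folding, which multiplies circuit depth), fit a model over the scale factors, and extrapolate to zero noise. A minimal sketch, where the linear toy noise model is an assumption for illustration:

```python
import numpy as np

def zero_noise_extrapolate(scale_factors, noisy_values, degree=1):
    """Fit a polynomial to expectation values measured at amplified
    noise levels and extrapolate to the zero-noise limit (c = 0)."""
    coeffs = np.polyfit(scale_factors, noisy_values, degree)
    return np.polyval(coeffs, 0.0)

# Toy noise model: the ideal value 1.0 decays linearly with noise scale c.
ideal = 1.0
measure = lambda c: ideal - 0.08 * c
scales = [1.0, 3.0, 5.0]          # odd scales from unitary folding
values = [measure(c) for c in scales]
print(zero_noise_extrapolate(scales, values))  # recovers ~1.0
```

The paper's contribution sits on top of this baseline: adapting which folds are applied and filtering the measured values before the fit, so that extrapolation stays stable under fluctuating noise.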
arXiv Detail & Related papers (2025-05-07T14:35:01Z) - Near-Term Distributed Quantum Computation using Mean-Field Corrections and Auxiliary Qubits [77.04894470683776]
We propose near-term distributed quantum computing schemes that involve limited information transfer and conservative entanglement production.
We build upon these concepts to produce an approximate circuit-cutting technique for the fragmented pre-training of variational quantum algorithms.
arXiv Detail & Related papers (2023-09-11T18:00:00Z) - Towards Accurate Post-Training Quantization for Vision Transformer [48.779346466374406]
Existing post-training quantization methods still cause severe performance drops.
APQ-ViT surpasses the existing post-training quantization methods by convincing margins.
arXiv Detail & Related papers (2023-03-25T03:05:26Z) - Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z) - Post-selection-free preparation of high-quality physical qubits [0.0]
We present a family of quantum circuits that prepare high-quality |0> states without post-selection.
We find meaningful performance enhancements when two-qubit gate error rates fall below 0.2%.
arXiv Detail & Related papers (2022-09-12T16:42:33Z) - Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, albeit only up to an error neighborhood. Rather than fixing the minibatch and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
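The trade-off described in that last entry can be sketched with a toy: keep the step-size constant but grow the minibatch over iterations, so the averaged gradient's variance shrinks in place of the step-size. The geometric growth schedule and constants here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def sgd_growing_batch(grad_sample, theta, lr=0.1, batch0=4,
                      growth=1.2, iters=50):
    """Constant-step-size SGD whose minibatch grows geometrically,
    shrinking gradient variance instead of attenuating the step-size."""
    rng = np.random.default_rng(1)
    batch = batch0
    for _ in range(iters):
        # Variance of the averaged gradient scales like 1 / batch.
        g = np.mean([grad_sample(theta, rng) for _ in range(int(batch))],
                    axis=0)
        theta = theta - lr * g
        batch *= growth
    return theta

# Noisy quadratic: true gradient 2*theta plus unit Gaussian noise.
grad = lambda t, rng: 2 * t + rng.standard_normal(t.shape)
print(sgd_growing_batch(grad, np.array([2.0])))  # converges near 0
```

With a fixed batch, a constant step-size would stall in a noise-dominated neighborhood of the optimum; growing the batch drives that neighborhood's radius toward zero while keeping the fast constant-step contraction rate.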
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.