Exact and Asymptotically Complete Robust Verifications of Neural Networks via Quantum Optimization
- URL: http://arxiv.org/abs/2603.00408v1
- Date: Sat, 28 Feb 2026 02:05:02 GMT
- Title: Exact and Asymptotically Complete Robust Verifications of Neural Networks via Quantum Optimization
- Authors: Wenxin Li, Wenchao Liu, Chuan Wang, Qi Gao, Yin Ma, Hai Wei, Kai Wen
- Abstract summary: We introduce two quantum-optimization-based models for robust verification of deep neural networks. Experiments on benchmarks show high certification accuracy, indicating that quantum optimization can serve as a principled primitive for robustness guarantees.
- Score: 9.728049285140736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) enable high performance across domains but remain vulnerable to adversarial perturbations, limiting their use in safety-critical settings. Here, we introduce two quantum-optimization-based models for robust verification that reduce the combinatorial burden of certification under bounded input perturbations. For piecewise-linear activations (e.g., ReLU and hardtanh), our first model yields an exact formulation that is sound and complete, enabling precise identification of adversarial examples. For general activations (including sigmoid and tanh), our second model constructs scalable over-approximations via piecewise-constant bounds and is asymptotically complete, with approximation error vanishing as the segmentation is refined. We further integrate Quantum Benders Decomposition with interval arithmetic to accelerate solving, and propose certificate-transfer bounds that relate robustness guarantees of pruned networks to those of the original model. Finally, a layerwise partitioning strategy supports a quantum--classical hybrid workflow by coupling subproblems across depth. Experiments on robustness benchmarks show high certification accuracy, indicating that quantum optimization can serve as a principled primitive for robustness guarantees in neural networks with complex activations.
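A minimal numerical sketch of the piecewise-constant over-approximation idea from the second model (not the authors' code; the tanh activation, the input interval, and the uniform segmentation below are assumptions for illustration): on each segment a monotone activation is bounded by two constants, and the worst-case gap between the stair-step bounds shrinks as the segmentation is refined, which is the sense in which the relaxation is asymptotically complete.

```python
import numpy as np

def piecewise_constant_bounds(f, lo, hi, n_segments):
    """Stair-step lower/upper bounds of a monotone activation f on [lo, hi].

    On each segment [a, b], f(a) is a valid constant lower bound and f(b)
    a valid constant upper bound, because f is non-decreasing.
    """
    edges = np.linspace(lo, hi, n_segments + 1)
    lower = f(edges[:-1])   # constant lower bound per segment
    upper = f(edges[1:])    # constant upper bound per segment
    return edges, lower, upper

def max_gap(f, lo, hi, n_segments):
    """Worst-case over-approximation error of the piecewise-constant relaxation."""
    _, lower, upper = piecewise_constant_bounds(f, lo, hi, n_segments)
    return float(np.max(upper - lower))

if __name__ == "__main__":
    for n in (2, 8, 32, 128, 512):
        print(f"segments={n:4d}  max gap={max_gap(np.tanh, -3.0, 3.0, n):.5f}")
    # The gap decreases roughly like 1/n, mirroring the abstract's claim that
    # the approximation error vanishes as the segmentation is refined.
```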
Related papers
- Continual Quantum Architecture Search with Tensor-Train Encoding: Theory and Applications to Signal Processing [68.35481158940401]
CL-QAS is a continual quantum architecture search framework. It mitigates the challenges of costly amplitude encoding and forgetting in variational quantum circuits. It achieves controllable robustness and expressivity, sample-efficient generalization, and smooth convergence without barren plateaus.
arXiv Detail & Related papers (2026-01-10T02:36:03Z) - Scalable Quantum Walk-Based Heuristics for the Minimum Vertex Cover Problem [0.0]
We propose a novel quantum algorithm for the Minimum Vertex Cover (MVC) problem based on continuous-time quantum walks (CTQWs). In this framework, the coherent propagation of a quantum walker over a graph encodes its structural properties into state amplitudes. We show that the CTQW-based algorithm consistently achieves superior approximation ratios and exhibits remarkable robustness with respect to network topology.
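A generic sketch of the coherent propagation this summary describes (not the paper's heuristic; the example graph, evolution time, and probability readout are assumptions): a continuous-time quantum walk evolves a uniform initial state under the adjacency matrix, and the resulting per-vertex probabilities can be read out as structural scores.

```python
import numpy as np
from scipy.linalg import expm

def ctqw_probabilities(adj, t=1.0):
    """Continuous-time quantum walk: evolve a uniform state |psi0> under
    U = exp(-i * A * t) and return per-vertex occupation probabilities."""
    n = adj.shape[0]
    psi0 = np.ones(n, dtype=complex) / np.sqrt(n)   # uniform superposition
    U = expm(-1j * adj * t)                          # walk propagator
    psi_t = U @ psi0
    return np.abs(psi_t) ** 2

if __name__ == "__main__":
    # A small star graph: vertex 0 is connected to vertices 1..4.
    adj = np.zeros((5, 5))
    adj[0, 1:] = adj[1:, 0] = 1.0
    probs = ctqw_probabilities(adj, t=1.0)
    # A vertex-cover heuristic could, for instance, greedily select
    # high-probability vertices; here the hub vertex 0 stands out,
    # and {0} already covers every edge of the star.
    print(np.round(probs, 3))
```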
arXiv Detail & Related papers (2025-12-02T17:04:57Z) - Neural-Quantum-States Impurity Solver for Quantum Embedding Problems [2.7022139554331264]
We introduce a graph transformer-based NQS framework to represent arbitrarily connected impurity orbitals. We develop an error control mechanism to stabilise iterative updates throughout the quantum embedding loops.
arXiv Detail & Related papers (2025-09-15T20:33:10Z) - Progressive Element-wise Gradient Estimation for Neural Network Quantization [2.1413624861650358]
Quantization-Aware Training (QAT) methods rely on the Straight-Through Estimator (STE) to address the non-differentiability of discretization functions. We propose Progressive Element-wise Gradient Estimation (PEGE) to address discretization errors between continuous and quantized values. PEGE consistently outperforms existing backpropagation methods and enables low-precision models to match or even outperform the accuracy of their full-precision counterparts.
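For context, a toy sketch of the standard straight-through estimator that PEGE refines (this illustrates the baseline STE, not PEGE itself; the uniform quantizer, toy quadratic loss, and learning rate are assumptions): the forward pass uses the quantized weights, while the backward pass treats the rounding step as the identity.

```python
import numpy as np

def quantize(w, num_bits=4):
    """Uniform symmetric quantization of weights to num_bits levels."""
    scale = np.max(np.abs(w)) / (2 ** (num_bits - 1) - 1)
    return np.round(w / scale) * scale

def ste_step(w, grad_wrt_quantized, lr=0.1):
    """Straight-through estimator: the gradient taken at the quantized point
    is applied directly to the full-precision weights, as if
    d quantize(w) / d w were the identity."""
    return w - lr * grad_wrt_quantized

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=5)
    target = np.ones(5)            # toy loss: 0.5 * ||quantize(w) - target||^2
    for _ in range(20):
        q = quantize(w)            # forward pass uses quantized weights
        grad = q - target          # dL/dq; STE reuses it as dL/dw
        w = ste_step(w, grad)
    print("quantized weights:", np.round(quantize(w), 3))
```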
arXiv Detail & Related papers (2025-08-27T15:59:36Z) - Segmentation-Based Regression for Quantum Neural Networks [0.0]
Recent advances in quantum hardware motivate the development of algorithmic frameworks that integrate quantum sampling with classical inference. This work introduces a segmentation-based regression method tailored to quantum neural networks (QNNs). By casting the regression task as a constrained problem over a structured digit lattice, the method replaces continuous inference with interpretable and tractable updates.
arXiv Detail & Related papers (2025-06-27T20:11:43Z) - Quantum-Classical Hybrid Quantized Neural Network [8.382617481718643]
We present a novel Quadratic Constrained Binary Optimization (QCBO) model for quantized neural network training, enabling the use of arbitrary activation and loss functions. We employ the Quantum Conditional Gradient Descent (QCGD) algorithm, which leverages quantum computing to directly solve the QCBO problem.
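A brute-force toy of the quadratic constrained binary optimization form referenced above (not the paper's model; the penalty encoding and the three-variable instance are made up for illustration): a linear equality constraint is folded into the quadratic objective as a penalty, yielding the unconstrained quadratic binary form that quantum solvers target, and the tiny instance is solved by enumeration.

```python
import itertools
import numpy as np

def solve_qcbo_by_penalty(Q, A, b, penalty=10.0):
    """Minimize x^T Q x subject to A x = b over x in {0,1}^n by adding the
    quadratic penalty `penalty * ||A x - b||^2` and enumerating all bitstrings."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ Q @ x + penalty * np.sum((A @ x - b) ** 2)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    Q = np.array([[ 1.0, -2.0,  0.0],
                  [-2.0,  1.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
    A = np.array([[1.0, 1.0, 1.0]])   # constraint: exactly two bits set
    b = np.array([2.0])
    x, val = solve_qcbo_by_penalty(Q, A, b)
    print("x* =", x, " objective =", val)
```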
arXiv Detail & Related papers (2025-06-23T02:12:36Z) - Mitigating Barren Plateaus in Quantum Neural Networks via an AI-Driven Submartingale-Based Framework [3.0617189749929348]
We propose AdaInit to mitigate barren plateaus (BPs) in quantum neural networks (QNNs). AdaInit iteratively synthesizes initial parameters for QNNs that yield non-negligible gradient variance, thereby mitigating BPs. We provide rigorous theoretical analyses of the submartingale-based process and empirically validate that AdaInit consistently outperforms existing methods in maintaining higher gradient variance across various QNN scales.
arXiv Detail & Related papers (2025-02-17T05:57:15Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
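A minimal sketch of the interval bound propagation that QA-IBP extends to the quantized setting (not the paper's method; the random two-layer ReLU network, perturbation radius, and the omission of quantization are assumptions): intervals are pushed through each affine layer via the sign-split of the weights and through ReLU by clamping both endpoints.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate an axis-aligned box through x -> W x + b.
    Positive weights map lower bounds to lower bounds; negative weights swap them."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    """ReLU is monotone, so it is applied to both interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def output_bounds(x, eps, layers):
    """Bound the network output over the l_inf ball of radius eps around x."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(lo, hi, W, b)
        if i < len(layers) - 1:          # ReLU between hidden layers only
            lo, hi = ibp_relu(lo, hi)
    return lo, hi

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
              (rng.normal(size=(1, 4)), np.zeros(1))]
    lo, hi = output_bounds(np.array([0.2, -0.1, 0.5]), eps=0.05, layers=layers)
    # If lo > 0 for the single output, its sign is certified over the whole ball.
    print("output bounds:", lo, hi)
```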
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - The Sample Complexity of One-Hidden-Layer Neural Networks [57.6421258363243]
We study a class of scalar-valued one-hidden-layer networks with inputs bounded in Euclidean norm.
We prove that controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees.
We analyze two important settings where a mere spectral norm control turns out to be sufficient.
arXiv Detail & Related papers (2022-02-13T07:12:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.