Compile-once block encodings for masked similarity-transformed effective Hamiltonians
- URL: http://arxiv.org/abs/2603.00761v1
- Date: Sat, 28 Feb 2026 18:05:51 GMT
- Title: Compile-once block encodings for masked similarity-transformed effective Hamiltonians
- Authors: Bo Peng, Yuan Liu, Karol Kowalski
- Abstract summary: We present COMPOSER, a compile-once modular parametric oracle for similarity-encoded effective reduction of electronic-structure operators. Low-rank factorizations compress Hamiltonians and anti-Hermitian generators into rank-one bilinear and projected-quadratic ladders. A fixed orbital pool and qubit register is compiled once; geometry, active-space (mask) updates, and truncations are absorbed by re-dialed single-qubit rotations.
- Score: 9.489652688191917
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present COMPOSER, a compile-once modular parametric oracle for similarity-encoded effective reduction of electronic-structure operators (e.g., Schrieffer-Wolff-type constructions). Low-rank factorizations compress Hamiltonians and anti-Hermitian generators into rank-one bilinear and projected-quadratic ladders with near-linear scaling at fixed thresholds; each ladder admits deterministic, number-conserving preparation and a block encoding using a constant number of signal ancillas. A fixed PREP-SELECT-PREP template multiplexes these ladders, and one QSP polynomial performs the spectral transformation with degree set by operator norms. For a fixed orbital pool and qubit register, the two-qubit fabric is compiled once; geometry, active-space (mask) updates, and truncations are absorbed by re-dialed single-qubit rotations. We introduce a mask-aware similarity-sandwich effective-Hamiltonian construction and benchmark stability under low-rank and second-order-perturbation-guided screening. COMPOSER is an execution architecture: algorithmic errors (block-encoding and QSP approximation) are tunable for any supplied parameters, while physical accuracy depends on how those parameters are obtained if not refined.
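As a concrete illustration of the PREP-SELECT-PREP block-encoding template named in the abstract, here is a minimal NumPy sketch. The two-term LCU, its coefficients, and the single signal ancilla are toy assumptions for exposition, not the paper's compiled fabric:

```python
import numpy as np

# Toy two-term LCU: A = a0*Z + a1*X, block-encoded with one signal ancilla.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
a = np.array([0.7, 0.3])                 # nonnegative LCU coefficients
lam = a.sum()                            # subnormalization lambda = ||a||_1

c, s = np.sqrt(a / lam)
PREP = np.array([[c, -s], [s, c]])       # PREP|0> = sum_i sqrt(a_i/lam)|i>

# SELECT applies U_i on the system, controlled on ancilla state |i>.
SELECT = np.kron(np.diag([1, 0]).astype(complex), Z) \
       + np.kron(np.diag([0, 1]).astype(complex), X)

W = np.kron(PREP.T, np.eye(2)) @ SELECT @ np.kron(PREP, np.eye(2))

# Defining property of a block encoding: the system block of W,
# post-selected on ancilla |0>, equals A / lambda.
A = a[0] * Z + a[1] * X
assert np.allclose(W[:2, :2], A / lam)
```

The assertion checks exactly the property the abstract relies on: the top-left system block of the compiled unitary reproduces the target operator up to the subnormalization lambda.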
Related papers
- PRISM: Parallel Residual Iterative Sequence Model [52.26239951489612]
We propose PRISM (Parallel Residual Iterative Sequence Model) to resolve this tension. PRISM introduces a solver-inspired inductive bias that captures key structural properties of multi-step refinement in a parallelizable form. We prove that this formulation achieves Rank-$L$ accumulation, structurally expanding the update manifold beyond the single-step Rank-$1$ bottleneck.
arXiv Detail & Related papers (2026-02-11T12:39:41Z) - Block encoding of sparse matrices with a periodic diagonal structure [67.45502291821956]
We provide an explicit quantum circuit for block encoding a sparse matrix with a periodic diagonal structure. Various applications of the presented methodology are discussed in the context of solving differential problems.
arXiv Detail & Related papers (2026-02-11T07:24:33Z) - Element-wise Modulation of Random Matrices for Efficient Neural Layers [0.0]
We propose a novel approach that decouples feature mixing from adaptation by utilizing a fixed random matrix modulated by lightweight, learnable element-wise parameters. This architecture drastically reduces the trainable parameter count to a linear scale while retaining reliable accuracy across various benchmarks.
arXiv Detail & Related papers (2025-12-15T16:16:53Z) - Weighted Projective Line ZX Calculus: Quantized Orbifold Geometry for Quantum Compilation [0.764671395172401]
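The element-wise modulation idea in the previous entry can be sketched as follows. This is one plausible parameterization (rank-one row/column scales on a frozen random matrix), assumed here for illustration rather than taken from the paper:

```python
import numpy as np

# Illustrative reading: modulate a frozen random mixing matrix W with
# rank-one element-wise scales a b^T, so only d_out + d_in values are
# trainable instead of d_out * d_in for a dense layer.
rng = np.random.default_rng(0)
d_in, d_out = 8, 4
W = rng.standard_normal((d_out, d_in))   # fixed random matrix, never trained

a = rng.standard_normal(d_out)           # learnable row scales
b = rng.standard_normal(d_in)            # learnable column scales

def layer(x):
    # (a b^T) ⊙ W applied to x; equivalent to diag(a) @ W @ diag(b) @ x.
    return (np.outer(a, b) * W) @ x

x = rng.standard_normal(d_in)
y = layer(x)
assert y.shape == (d_out,)
assert np.allclose(y, a * (W @ (b * x)))  # cheap equivalent form
```

The final assertion shows why the layer stays cheap at inference time: the modulated product never needs to be materialized, since it factors into two element-wise scalings around the fixed matrix-vector product.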
We develop a unified framework for quantum circuit compilation based on quantized orbifold phases and their diagrammatic semantics. We show that these effects admit a natural description on the weighted projective line $\mathbb{P}(a,b)$, whose orbifold points encode discrete phase grids. We introduce the WPL-ZX calculus, an extension of the standard ZX formalism in which each spider carries a weight-phase-winding triple $(a,,k)$.
arXiv Detail & Related papers (2025-11-30T00:56:39Z) - Efficient Quantum Access Model for Sparse Structured Matrices using Linear Combination of Things [0.6138671548064355]
We present a novel framework for Linear Combination of Unitaries (LCU)-style decompositions tailored to structured sparse matrices. LCU is a foundational primitive in both variational and fault-tolerant quantum algorithms. We introduce the Sigma basis, a compact set of simple, non-unitary operators that can better capture sparsity and structure.
arXiv Detail & Related papers (2025-07-04T17:05:07Z) - Direct phase encoding in QAOA: Describing combinatorial optimization problems through binary decision variables [0.7015624626359264]
We show a more qubit-efficient circuit construction for optimization problems using the example of the Traveling Salesperson Problem (TSP). By removing certain redundancies, the number of required qubits can be reduced by a linear factor compared to the conventional encoding. Our experiments show that for small instances, results are just as accurate with our proposed encoding, while the number of required classical iterations increases only slightly.
arXiv Detail & Related papers (2024-12-10T12:12:34Z) - Block encoding bosons by signal processing [0.0]
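The qubit savings claimed in the TSP entry above can be checked with back-of-the-envelope arithmetic. The two counting functions below are illustrative stand-ins (the paper's exact construction may remove further redundancies):

```python
import math

def onehot_qubits(n_cities):
    # Conventional encoding: one qubit per (city, tour-position) pair.
    return n_cities * n_cities

def binary_qubits(n_cities):
    # Binary decision variables: each tour position stores a city index
    # in ceil(log2 n) qubits, a factor-of-(n / log2 n) saving.
    return n_cities * math.ceil(math.log2(n_cities))

for n in (4, 8, 16):
    print(n, onehot_qubits(n), binary_qubits(n))
```

For 16 cities this is 256 qubits for the one-hot encoding versus 64 for the binary one, consistent with a reduction by a roughly linear factor.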
We demonstrate that QSP-based techniques, such as Quantum Singular Value Transformation (QSVT) and Quantum Eigenvalue Transformation for Unitary Matrices (QETU), can themselves be efficiently utilized for BE implementation. We present several examples of using QSVT and QETU algorithms, along with their combinations, to block encode Hamiltonians for lattice bosons. We find that, while using QSVT for BE results in the best gate-count scaling with the number of qubits per site, LOVE-LCU outperforms all other methods for operators acting on up to $\lesssim 11$ qubits.
arXiv Detail & Related papers (2024-08-29T18:00:02Z) - Tractable Bounding of Counterfactual Queries by Knowledge Compilation [51.47174989680976]
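The qubitization mechanism that QSP/QSVT builds on can be verified numerically on a toy example. The two-term Hermitian LCU below is an assumption for exposition (not the paper's bosonic Hamiltonians); it checks the standard fact that for a Hermitian, self-inverse block encoding U of H, powers of the walk operator W = ((2|0><0| - I) ⊗ I) U yield Chebyshev polynomials of H in the post-selected block:

```python
import numpy as np

# Toy Hermitian block encoding U of H = 0.7*Z + 0.3*X (one signal ancilla).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
a = np.array([0.7, 0.3])
c, s = np.sqrt(a / a.sum())
PREP = np.array([[c, -s], [s, c]])
SELECT = np.kron(np.diag([1, 0]), Z) + np.kron(np.diag([0, 1]), X)
U = np.kron(PREP.T, np.eye(2)) @ SELECT @ np.kron(PREP, np.eye(2))

H = (a[0] * Z + a[1] * X) / a.sum()        # encoded operator, ||H|| <= 1
R = np.kron(np.diag([1, -1]), np.eye(2))   # reflection 2|0><0| - I on ancilla
W = R @ U                                  # qubitization walk operator

# <0|W^k|0> = T_k(H), the Chebyshev polynomials that QSP/QSVT then
# combines into an arbitrary target polynomial of H.
T2 = np.linalg.matrix_power(W, 2)[:2, :2]
T3 = np.linalg.matrix_power(W, 3)[:2, :2]
assert np.allclose(T2, 2 * H @ H - np.eye(2))   # T_2(x) = 2x^2 - 1
assert np.allclose(T3, 4 * H @ H @ H - 3 * H)   # T_3(x) = 4x^3 - 3x
```

This is the sense in which a single QSP polynomial "performs the spectral transformation": the degree of the polynomial is the number of walk steps applied.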
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values.
arXiv Detail & Related papers (2023-10-05T07:10:40Z) - Blockwise Stochastic Variance-Reduced Methods with Parallel Speedup for Multi-Block Bilevel Optimization [43.74656748515853]
Non-stationary multi-block bilevel optimization problems involve $m \gg 1$ lower-level problems and have important applications in machine learning.
We aim to achieve three properties for our algorithm: (a) matching the state-of-the-art complexity of standard BO problems with a single block; (b) achieving parallel speedup by sampling $I$ samples for each sampled block per iteration; and (c) avoiding the computation of the inverse of a high-dimensional Hessian matrix estimator.
arXiv Detail & Related papers (2023-05-30T04:10:11Z) - Factorizers for Distributed Sparse Block Codes [45.29870215671697]
We propose a fast and highly accurate method for factorizing distributed sparse block codes (SBCs).
Our iterative factorizer introduces a threshold-based nonlinear activation, conditional random sampling, and an $\ell_\infty$-based similarity metric.
We demonstrate the feasibility of our method on four deep CNN architectures over CIFAR-100, ImageNet-1K, and RAVEN datasets.
arXiv Detail & Related papers (2023-03-24T12:31:48Z) - Exploring the role of parameters in variational quantum algorithms [59.20947681019466]
We introduce a quantum-control-inspired method for the characterization of variational quantum circuits using the rank of the dynamical Lie algebra.
A promising connection is found between the Lie rank, the accuracy of calculated energies, and the requisite depth to attain target states via a given circuit architecture.
arXiv Detail & Related papers (2022-09-28T20:24:53Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear-algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.