Pattern Tree: Enhancing Efficiency in Quantum Circuit Optimization Based on Pattern-matching
- URL: http://arxiv.org/abs/2412.07803v1
- Date: Mon, 09 Dec 2024 07:21:11 GMT
- Title: Pattern Tree: Enhancing Efficiency in Quantum Circuit Optimization Based on Pattern-matching
- Authors: Mingyu Chen, Yu Zhang, Zhaoyu Zheng, Yongshang Li, Haoning Deng,
- Abstract summary: We propose a novel framework for quantum circuit optimization based on pattern matching to enhance its efficiency.
We show that pattern-tree-based pattern matching can reduce execution time by an average of 20% on a well-accepted benchmark set.
- Score: 3.2801774304960447
- Abstract: Quantum circuit optimization is essential for improving the performance of quantum algorithms, particularly on Noisy Intermediate-Scale Quantum (NISQ) devices with limited qubit connectivity and high error rates. Pattern matching has proven to be an effective technique for identifying and optimizing subcircuits by replacing them with functionally equivalent, efficient versions, including reducing circuit depth and facilitating platform portability. However, existing approaches face challenges in handling large-scale circuits and numerous transformation rules, often leading to redundant matches and increased compilation time. In this study, we propose a novel framework for quantum circuit optimization based on pattern matching to enhance its efficiency. Observing redundancy in applying existing transformation rules, our method employs a pattern tree structure to organize these rules, reducing redundant operations during the execution of the pattern-matching algorithm and improving matching efficiency. We design and implement a compilation framework to demonstrate the practicality of the pattern tree approach. Experimental results show that pattern-tree-based pattern matching can reduce execution time by an average of 20% on a well-accepted benchmark set. Furthermore, we analyze how to build a pattern tree to maximize the optimization of compilation time. The evaluation results demonstrate that our approach has the potential to optimize compilation time by 90%.
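To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): rewrite rules are stored in a prefix tree keyed by gate names, so rules that share a leading subsequence of gates are matched against the circuit only once instead of once per rule. The gate names, rule set, and list-of-gates circuit model are illustrative assumptions.

```python
# Minimal sketch of a pattern tree: rewrite rules that share a prefix of
# gates also share a path in the tree, so the shared prefix is matched
# against the circuit only once instead of once per rule.
# Gate names, rules, and the list-of-gates circuit model are illustrative
# assumptions, not the paper's actual rule set or IR.

class PatternNode:
    def __init__(self):
        self.children = {}        # gate name -> PatternNode
        self.replacement = None   # rewrite for the pattern ending here, if any

class PatternTree:
    def __init__(self):
        self.root = PatternNode()

    def add_rule(self, pattern, replacement):
        """Insert a rule: a gate-name sequence and its cheaper equivalent."""
        node = self.root
        for gate in pattern:
            node = node.children.setdefault(gate, PatternNode())
        node.replacement = replacement

    def longest_match(self, circuit, start):
        """Walk the tree along circuit[start:]; return (length, replacement)
        of the longest rule that matches, or (0, None)."""
        node, best = self.root, (0, None)
        for offset, gate in enumerate(circuit[start:], 1):
            node = node.children.get(gate)
            if node is None:
                break
            if node.replacement is not None:
                best = (offset, node.replacement)
        return best

def optimize(circuit, tree):
    """Single left-to-right pass applying the longest matching rule."""
    out, i = [], 0
    while i < len(circuit):
        length, repl = tree.longest_match(circuit, i)
        if length:
            out.extend(repl)
            i += length
        else:
            out.append(circuit[i])
            i += 1
    return out

# Toy rules sharing the prefix ["H", "H"]: HH -> identity, HHX -> X.
tree = PatternTree()
tree.add_rule(["H", "H"], [])
tree.add_rule(["H", "H", "X"], ["X"])
print(optimize(["H", "H", "X", "T"], tree))   # -> ['X', 'T']
```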
Related papers
- Hybrid discrete-continuous compilation of trapped-ion quantum circuits with deep reinforcement learning [1.7087507417780985]
We show that we can significantly reduce the size of relevant quantum circuits for trapped-ion computing.
Our framework can also be applied to an experimental setup whose goal is to reproduce an unknown unitary process.
arXiv Detail & Related papers (2023-07-12T14:55:28Z)
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
- ParaFormer: Parallel Attention Transformer for Efficient Feature Matching [8.552303361149612]
This paper proposes a novel parallel attention model entitled ParaFormer.
It fuses features and keypoint positions through the concept of amplitude and phase, and integrates self- and cross-attention in a parallel manner.
Experiments on various applications, including homography estimation, pose estimation, and image matching, demonstrate that ParaFormer achieves state-of-the-art performance.
The efficient ParaFormer-U variant achieves comparable performance with less than 50% FLOPs of the existing attention-based models.
arXiv Detail & Related papers (2023-03-02T03:29:16Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- ECO-TR: Efficient Correspondences Finding Via Coarse-to-Fine Refinement [80.94378602238432]
We propose an efficient structure named Correspondence Efficient Transformer (ECO-TR) by finding correspondences in a coarse-to-fine manner.
To achieve this, multiple transformer blocks are stage-wisely connected to gradually refine the predicted coordinates.
Experiments on various sparse and dense matching tasks demonstrate the superiority of our method in both efficiency and effectiveness against existing state-of-the-art methods.
arXiv Detail & Related papers (2022-09-25T13:05:33Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Recommender System Expedited Quantum Control Optimization [0.0]
Quantum control optimization algorithms are routinely used to generate optimal quantum gates or efficient quantum state transfers.
There are two main challenges in designing efficient optimization algorithms, namely overcoming the sensitivity to local optima and improving the computational speed.
Here, we propose and demonstrate the use of a machine learning method, specifically the recommender system (RS), to deal with the latter challenge.
arXiv Detail & Related papers (2022-01-29T10:25:41Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
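A minimal sketch of this fixed-point view is below (a toy illustration, not the paper's model): the network output is taken to be the fixed point z* = f(z*, x) of a single nonlinear layer, found here by plain fixed-point iteration. The layer weights, sizes, and solver are assumptions.

```python
# Simplified deep-equilibrium-style sketch: instead of stacking layers,
# the "output" is the fixed point z* of one nonlinear layer f(z, x),
# found here by plain fixed-point iteration (real DEQs use faster root
# solvers and implicit differentiation). Shapes and weights are toy choices.
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.normal(scale=0.3, size=(d, d))   # small scale so iteration converges
U = rng.normal(size=(d, d))
b = np.zeros(d)

def f(z, x):
    return np.tanh(W @ z + U @ x + b)

def forward(x, tol=1e-6, max_iter=500):
    z = np.zeros(d)
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

x = rng.normal(size=d)
z_star = forward(x)
print(np.linalg.norm(z_star - f(z_star, x)))  # ~0: z* is a fixed point
```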
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
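The sketch below illustrates the general idea of optimizing one block of parameters at a time rather than the full vector at once; it is a generic toy example, not the paper's PECT procedure, and the quadratic stand-in cost, block size, and optimizer choice are assumptions.

```python
# Toy illustration of sequential block-wise optimization of a parameterized
# cost, in the spirit of optimizing subsets of ansatz parameters one at a
# time instead of all at once. The quadratic "cost" stands in for a real
# variational-circuit expectation value and is purely an assumption.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_params, block = 12, 4
target = rng.normal(size=n_params)

def cost(theta):
    return float(np.sum((theta - target) ** 2))

theta = np.zeros(n_params)
for start in range(0, n_params, block):
    idx = slice(start, start + block)

    def block_cost(sub, idx=idx):
        trial = theta.copy()
        trial[idx] = sub
        return cost(trial)

    res = minimize(block_cost, theta[idx], method="Nelder-Mead")
    theta[idx] = res.x          # freeze this block, move to the next

print(cost(theta))              # close to 0 after all blocks are optimized
```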
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Adaptive Discretization for Model-Based Reinforcement Learning [10.21634042036049]
We introduce the technique of adaptive discretization to design an efficient model-based episodic reinforcement learning algorithm.
Our algorithm is based on optimistic one-step value iteration extended to maintain an adaptive discretization of the space.
arXiv Detail & Related papers (2020-07-01T19:36:46Z)
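The sketch below illustrates the adaptive-discretization idea in a much simpler setting (a 1-D continuous-armed bandit rather than the paper's model-based episodic algorithm): cells of a partition are scored optimistically, and a cell is split once it has been visited often relative to its width. The reward function, exploration bonus, and splitting threshold are assumptions.

```python
# Toy sketch of adaptive discretization for a 1-D continuous-armed bandit
# (a much simpler setting than the paper's model-based episodic RL, but it
# shows the core idea): keep a partition into cells, pick the cell with the
# highest optimistic estimate, and split a cell once it has been visited
# often relative to its width. Reward function and constants are assumptions.
import math
import random

random.seed(0)
reward = lambda a: 1.0 - (a - 0.3) ** 2 + random.gauss(0, 0.05)

# each cell: [left, right, visit_count, mean_reward]
cells = [[0.0, 1.0, 0, 0.0]]

for t in range(1, 2001):
    # optimistic score: empirical mean + exploration bonus + cell width
    def score(c):
        left, right, n, mean = c
        bonus = math.sqrt(2 * math.log(t) / n) if n else float("inf")
        return mean + bonus + (right - left)

    c = max(cells, key=score)
    a = random.uniform(c[0], c[1])
    r = reward(a)
    c[2] += 1
    c[3] += (r - c[3]) / c[2]           # running mean

    width = c[1] - c[0]
    if c[2] >= 1.0 / width:             # visited enough for its size: refine
        mid = (c[0] + c[1]) / 2
        cells.remove(c)
        cells += [[c[0], mid, 0, 0.0], [mid, c[1], 0, 0.0]]

best = max(cells, key=lambda c: c[3])
print(round((best[0] + best[1]) / 2, 2))  # typically prints a value near 0.3
```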
This list is automatically generated from the titles and abstracts of the papers on this site.