Domain-Specific Quantum Architecture Optimization
- URL: http://arxiv.org/abs/2207.14482v1
- Date: Fri, 29 Jul 2022 05:16:02 GMT
- Title: Domain-Specific Quantum Architecture Optimization
- Authors: Wan-Hsuan Lin, Bochen Tan, Murphy Yuezhen Niu, Jason Kimko, and Jason
Cong
- Abstract summary: We present a framework for optimizing quantum architectures, specifically through customizing qubit connectivity.
It is the first work that provides performance guarantees by integrating architecture optimization with an optimal compiler.
We demonstrate up to 59% fidelity improvement in simulation by optimizing the heavy-hexagon architecture for QAOA circuits, and up to 14% improvement on the grid architecture.
- Score: 7.274584978257831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the steady progress in quantum computing over recent years, roadmaps for
upscaling quantum processors have relied heavily on the targeted qubit
architectures. So far, similarly to the early age of classical computing, these
designs have been crafted by human experts. These general-purpose
architectures, however, leave room for customization and optimization,
especially when targeting popular near-term QC applications. In classical
computing, customized architectures have demonstrated significant performance
and energy efficiency gains over general-purpose counterparts. In this paper,
we present a framework for optimizing quantum architectures, specifically
through customizing qubit connectivity. It is the first work that (1) provides
performance guarantees by integrating architecture optimization with an optimal
compiler, (2) evaluates the impact of connectivity customization under a
realistic crosstalk error model, and (3) benchmarks on realistic circuits of
near-term interest, such as the quantum approximate optimization algorithm
(QAOA) and quantum convolutional neural network (QCNN). We demonstrate up to
59% fidelity improvement in simulation by optimizing the heavy-hexagon
architecture for QAOA circuits, and up to 14% improvement on the grid
architecture. For the QCNN circuit, architecture optimization improves fidelity
by 11% on the heavy-hexagon architecture and 605% on the grid architecture.
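The connectivity customization discussed above can be illustrated with a minimal sketch (hypothetical, not the paper's actual framework or compiler): it builds a grid coupling map and measures what fraction of a QAOA problem graph's two-qubit interactions lack a direct hardware link and would therefore need SWAP routing, which is the overhead that architecture optimization aims to reduce. The function names and the naive qubit layout are illustrative assumptions.

```python
# Illustrative sketch: compare a QAOA problem graph against a hardware
# coupling map and count interactions that would require SWAP insertion.

def grid_coupling(rows, cols):
    """Undirected coupling map of a rows x cols grid architecture."""
    edges = set()
    for r in range(rows):
        for c in range(cols):
            q = r * cols + c
            if c + 1 < cols:
                edges.add(frozenset((q, q + 1)))   # horizontal neighbor
            if r + 1 < rows:
                edges.add(frozenset((q, q + cols)))  # vertical neighbor
    return edges

def routing_pressure(problem_edges, coupling):
    """Fraction of two-qubit interactions not directly supported."""
    unsupported = [e for e in problem_edges if frozenset(e) not in coupling]
    return len(unsupported) / len(problem_edges)

# QAOA on a 6-node ring, laid out naively on a 2x3 grid (qubits 0..5).
ring = [(i, (i + 1) % 6) for i in range(6)]
grid = grid_coupling(2, 3)
print(routing_pressure(ring, grid))  # prints 0.3333333333333333
```

Two of the six ring interactions ((2,3) and (5,0)) have no grid edge under this layout; customizing the connectivity, or co-optimizing it with the compiler as the paper proposes, shrinks exactly this kind of routing overhead.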
Related papers
- Comparison of Superconducting NISQ Architectures [0.0]
We study superconducting architectures including Google's Sycamore, IBM's Heavy-Hex, Rigetti's Aspen, and Ankaa.
We also study compilation tools that target these architectures.
arXiv Detail & Related papers (2024-09-03T17:12:08Z)
- ArtA: Automating Design Space Exploration of Spin Qubit Architectures [1.1528488253382057]
This paper introduces the first Design Space Exploration (DSE) for quantum-dot spin-qubit architectures.
ArtA can leverage 17 optimization configurations, significantly reducing exploration times by up to 99.1%.
Our work demonstrates that the synergy between DSE methodologies and optimization algorithms can effectively be deployed to provide useful suggestions to quantum processor designers.
arXiv Detail & Related papers (2024-07-25T16:02:44Z)
- Mechanistic Design and Scaling of Hybrid Architectures [114.3129802943915]
We identify and test new hybrid architectures constructed from a variety of computational primitives.
We experimentally validate the resulting architectures via an extensive compute-optimal and a new state-optimal scaling law analysis.
We find MAD synthetics to correlate with compute-optimal perplexity, enabling accurate evaluation of new architectures.
arXiv Detail & Related papers (2024-03-26T16:33:12Z)
- Optimizing Quantum Algorithms on Bipotent Architectures [0.0]
We investigate the trade-off between hardware-level and algorithm-level improvements on bipotent quantum architectures.
Our results indicate that the benefits of pulse-level optimizations currently outweigh the improvements due to vigorously optimized monolithic gates.
arXiv Detail & Related papers (2023-03-23T08:57:06Z)
- Rethinking Architecture Selection in Differentiable NAS [74.61723678821049]
Differentiable Neural Architecture Search is one of the most popular NAS methods for its search efficiency and simplicity.
We propose an alternative perturbation-based architecture selection that directly measures each operation's influence on the supernet.
We find that several failure modes of DARTS can be greatly alleviated with the proposed selection method.
arXiv Detail & Related papers (2021-08-10T00:53:39Z)
- iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream of neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
arXiv Detail & Related papers (2021-06-21T00:44:11Z)
- Rethinking Co-design of Neural Architectures and Hardware Accelerators [31.342964958282092]
We systematically study the importance and strategies of co-designing neural architectures and hardware accelerators.
Our experiments show that the joint search method consistently outperforms previous platform-aware neural architecture search.
Our method can reduce energy consumption of an edge accelerator by up to 2x under the same accuracy constraint.
arXiv Detail & Related papers (2021-02-17T07:55:58Z)
- Apollo: Transferable Architecture Exploration [26.489275442359464]
We propose a transferable architecture exploration framework, dubbed Apollo.
We show that our framework finds high reward design configurations more sample-efficiently than a baseline black-box optimization approach.
arXiv Detail & Related papers (2021-02-02T19:36:02Z)
- Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search [50.40004966087121]
We introduce a new reinforcement learning based neural architecture search (NAS) methodology for generative adversarial network (GAN) architecture search.
The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling.
We exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies.
arXiv Detail & Related papers (2020-07-17T18:29:17Z)
- Cyclic Differentiable Architecture Search [99.12381460261841]
Differentiable ARchiTecture Search, i.e., DARTS, has drawn great attention in neural architecture search.
We propose new joint objectives and a novel Cyclic Differentiable ARchiTecture Search framework, dubbed CDARTS.
In the DARTS search space, we achieve 97.52% top-1 accuracy on CIFAR10 and 76.3% top-1 accuracy on ImageNet.
arXiv Detail & Related papers (2020-06-18T17:55:19Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.