Comparison of Superconducting NISQ Architectures
- URL: http://arxiv.org/abs/2409.02063v1
- Date: Tue, 3 Sep 2024 17:12:08 GMT
- Title: Comparison of Superconducting NISQ Architectures
- Authors: Benjamin Rempfer, Kevin Obenland
- Abstract summary: We study superconducting architectures including Google's Sycamore, IBM's Heavy-Hex, and Rigetti's Aspen and Ankaa.
We also study compilation tools that target these architectures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in quantum hardware have ushered in the noisy intermediate-scale quantum (NISQ) computing era. A pressing question is: which architectures are best suited to take advantage of this new regime of quantum machines? We study several superconducting architectures, including Google's Sycamore, IBM's Heavy-Hex, and Rigetti's Aspen and Ankaa, in addition to a proposed architecture we call bus next-nearest neighbor (busNNN). We evaluate these architectures using benchmarks based on the quantum approximate optimization algorithm (QAOA), which can solve certain quadratic unconstrained binary optimization (QUBO) problems. We also study compilation tools that target these architectures, which use either general heuristic or deterministic methods to map circuits onto a target topology defined by an architecture.
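The benchmark flow described in the abstract (QAOA circuits for QUBO instances mapped onto a hardware topology) can be illustrated with a short, hypothetical Qiskit sketch: it builds a one-layer QAOA circuit for a toy 5-node MaxCut-style QUBO and lets Qiskit's heuristic transpiler map it onto a heavy-hex coupling map. The problem instance, parameter values, gate basis, and optimization level are arbitrary illustrative choices, not the paper's benchmark setup.

```python
# Illustrative only: map a small QAOA circuit onto a heavy-hex coupling map
# with Qiskit's heuristic transpiler. The problem instance and gate basis
# are arbitrary choices, not those used in the paper.
from qiskit import transpile
from qiskit.circuit.library import QAOAAnsatz
from qiskit.quantum_info import SparsePauliOp
from qiskit.transpiler import CouplingMap

# Toy MaxCut-style cost Hamiltonian on a 5-node ring (a simple QUBO instance).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cost = SparsePauliOp.from_sparse_list(
    [("ZZ", list(pair), 1.0) for pair in edges], num_qubits=5
)

# One QAOA layer (p = 1); parameters bound to arbitrary values.
ansatz = QAOAAnsatz(cost_operator=cost, reps=1)
circuit = ansatz.assign_parameters([0.4, 0.8]).decompose(reps=2)
circuit.measure_all()

# Heavy-hex topology (distance-3 graph, 19 qubits) as the mapping target.
coupling = CouplingMap.from_heavy_hex(3)

mapped = transpile(
    circuit,
    coupling_map=coupling,
    basis_gates=["cx", "rz", "sx", "x"],
    optimization_level=3,
    seed_transpiler=7,
)
print("two-qubit gate count after mapping:", mapped.count_ops().get("cx", 0))
print("circuit depth after mapping:", mapped.depth())
```

Swapping `CouplingMap.from_heavy_hex(3)` for, e.g., a grid via `CouplingMap.from_grid(4, 5)` gives a rough feel for how topology changes routing overhead, which is the kind of comparison the paper carries out far more systematically.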
Related papers
- Quantum random access memory architectures using superconducting cavities [0.0]
We propose two bucket-brigade QRAM architectures based on high-coherence superconducting resonators.
We analyze single-rail and dual-rail implementations of a bosonic qubit.
For parameter regimes of interest, the post-selected infidelity of a QRAM query in a dual-rail architecture is nearly an order of magnitude below that of a corresponding query in a single-rail architecture.
arXiv Detail & Related papers (2023-10-12T12:45:39Z) - QNEAT: Natural Evolution of Variational Quantum Circuit Architecture [95.29334926638462]
We focus on variational quantum circuits (VQC), which emerged as the most promising candidates for the quantum counterpart of neural networks.
Although showing promising results, VQCs can be hard to train because of several issues, e.g., barren plateaus, periodicity of the weights, or the choice of architecture.
We propose a gradient-free algorithm inspired by natural evolution to optimize both the weights and the architecture of the VQC.
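The sketch below is only a caricature of the gradient-free idea: a (1+λ) evolution strategy mutating the rotation angles of a fixed two-qubit ansatz simulated with plain numpy. QNEAT additionally evolves the circuit architecture itself, which is not modeled here; the ansatz, observable, population size, and mutation scale are all invented for illustration.

```python
# Toy (1+lambda) evolution strategy over the rotation angles of a tiny
# two-qubit variational circuit, simulated with plain numpy. This only
# illustrates gradient-free weight optimization; QNEAT additionally
# evolves the circuit architecture, which is not modeled here.
import numpy as np

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
ZZ = np.kron(np.diag([1, -1]), np.diag([1, -1]))  # observable Z0 (x) Z1

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def energy(angles):
    """<psi(angles)| Z0 Z1 |psi(angles)> for an RY-RY-CNOT-RY ansatz."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0  # |00>
    state = np.kron(ry(angles[0]), ry(angles[1])) @ state
    state = CNOT @ state
    state = np.kron(ry(angles[2]), I2) @ state
    return float(np.real(state.conj() @ ZZ @ state))

rng = np.random.default_rng(0)
best = rng.uniform(0, 2 * np.pi, size=3)
best_e = energy(best)
for generation in range(200):
    # Offspring = Gaussian mutations of the current best individual.
    offspring = best + rng.normal(scale=0.3, size=(8, 3))
    energies = [energy(child) for child in offspring]
    i = int(np.argmin(energies))
    if energies[i] < best_e:          # elitist selection
        best, best_e = offspring[i], energies[i]

print(f"best energy after evolution: {best_e:.4f} (minimum is -1)")
```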
arXiv Detail & Related papers (2023-04-14T08:03:20Z) - Mapping quantum algorithms to multi-core quantum computing architectures [1.8602413562219944]
Multi-core quantum computer architecture poses new challenges such as expensive inter-core communication.
A detailed critical discussion of the quantum circuit mapping problem for multi-core quantum computing architectures is provided.
We further explore the performance of a mapping method formulated as a partitioning-over-time graph problem.
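To give a rough sense of what a partitioning-over-time formulation is optimizing, the toy sketch below greedily keeps interacting qubit pairs on the same core, slice by slice, and counts the two-qubit gates forced across a core boundary. The circuit slices, core count, capacities, and greedy rule are invented; the paper's method solves this as a graph-partitioning problem rather than greedily.

```python
# Naive illustration of multi-core mapping: greedily keep interacting qubit
# pairs on the same core, slice by slice, and count inter-core gates.
# The circuit, core sizes, and greedy rule are invented for illustration.
from collections import defaultdict

NUM_QUBITS, NUM_CORES, CORE_CAPACITY = 8, 2, 4

# Each time slice is a list of two-qubit gates (pairs of logical qubits).
slices = [
    [(0, 1), (2, 3), (4, 5), (6, 7)],
    [(1, 2), (5, 6)],
    [(0, 3), (4, 7)],
    [(3, 4)],   # this pair may be forced across cores
]

assignment = {q: q // CORE_CAPACITY for q in range(NUM_QUBITS)}  # initial layout
total_crossings = 0

for t, gates in enumerate(slices):
    load = defaultdict(int)
    for q, core in assignment.items():
        load[core] += 1
    crossings = 0
    for a, b in gates:
        if assignment[a] == assignment[b]:
            continue
        # Try to migrate one endpoint onto the other's core if there is room.
        if load[assignment[a]] < CORE_CAPACITY:
            load[assignment[b]] -= 1
            assignment[b] = assignment[a]
            load[assignment[a]] += 1
        elif load[assignment[b]] < CORE_CAPACITY:
            load[assignment[a]] -= 1
            assignment[a] = assignment[b]
            load[assignment[b]] += 1
        else:
            crossings += 1  # no room: gate needs inter-core communication
    total_crossings += crossings
    print(f"slice {t}: {crossings} inter-core gate(s)")

print("total inter-core gates:", total_crossings)
```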
arXiv Detail & Related papers (2023-03-28T16:46:59Z) - SpinQ: Compilation strategies for scalable spin-qubit architectures [1.236829197968612]
We discuss the unique mapping challenges of a scalable crossbar architecture with shared control.
We introduce SpinQ, the first native compilation framework for scalable spin-qubit architectures.
arXiv Detail & Related papers (2023-01-30T19:10:23Z) - The Basis of Design Tools for Quantum Computing: Arrays, Decision Diagrams, Tensor Networks, and ZX-Calculus [55.58528469973086]
Quantum computers promise to efficiently solve important problems classical computers never will.
A fully automated quantum software stack needs to be developed.
This work provides a look "under the hood" of today's tools and showcases how these means are utilized in them, e.g., for simulation, compilation, and verification of quantum circuits.
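As a small taste of two of those representations, the sketch below contracts a Bell-state circuit as a tiny tensor network with numpy's einsum, which in this small case reduces to the plain dense-array simulation. It is a generic textbook example, not code from the paper or from any of the surveyed tools.

```python
# Contracting a 2-qubit Bell circuit as a small tensor network with numpy's
# einsum: the "tensor network" view of the same dense-array simulation.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # 1-qubit gate: rank-2 tensor
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)   # 2-qubit gate: rank-4 tensor
zero = np.array([1.0, 0.0])                           # |0> as a rank-1 tensor

# Wire labels: a, b = circuit inputs; i = wire after H; c, d = outputs.
# The reshaped CNOT is indexed as (out0, out1, in0, in1).
bell = np.einsum("a,b,ia,cdib->cd", zero, zero, H, CNOT)

print(np.round(bell.reshape(4), 3))   # (|00> + |11>)/sqrt(2)
```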
arXiv Detail & Related papers (2023-01-10T19:00:00Z) - On Optimal Subarchitectures for Quantum Circuit Mapping [3.610459670994051]
One step in compiling a quantum circuit to some device is quantum circuit mapping.
Because the search space in quantum circuit mapping grows rapidly with the number of qubits, it is desirable to consider as few physical qubits as possible.
We show that determining subarchitectures of minimal size, i.e., from which no physical qubit can be removed without losing the optimal mapping solution for some quantum circuit, is a very hard problem.
arXiv Detail & Related papers (2022-10-17T18:00:02Z) - Domain-Specific Quantum Architecture Optimization [7.274584978257831]
We present a framework for optimizing quantum architectures, specifically through customizing qubit connectivity.
It is the first work that provides performance guarantees by integrating architecture optimization with an optimal compiler.
We demonstrate up to 59% fidelity improvement in simulation by optimizing the heavy-hexagon architecture for QAOA circuits, and up to 14% improvement on the grid architecture.
arXiv Detail & Related papers (2022-07-29T05:16:02Z) - Scaling Quantum Approximate Optimization on Near-term Hardware [49.94954584453379]
We quantify scaling of the expected resource requirements by optimized circuits for hardware architectures with varying levels of connectivity.
We show that the number of measurements, and hence the total time to solution, grows exponentially with problem size and problem graph degree.
These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
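The connectivity dependence can be made concrete with textbook gate-count estimates for a single QAOA phase-separation layer on a fully connected problem graph: on all-to-all hardware each ZZ interaction costs two CNOTs, while on a linear chain a standard swap network realizes all pairs at roughly three CNOTs per fused ZZ-plus-SWAP. These are generic estimates, not figures from the paper, and they ignore single-qubit gates, depth, and error rates.

```python
# Rough two-qubit gate counts for one QAOA layer on a complete graph K_n:
# all-to-all hardware vs. a linear chain using a swap network. Textbook
# estimates (2 CNOTs per ZZ, 3 CNOTs per fused ZZ+SWAP), not paper data.
def qaoa_layer_cnots(n: int) -> dict:
    pairs = n * (n - 1) // 2                 # ZZ terms in a complete graph
    return {
        "all_to_all": 2 * pairs,             # each exp(-i*gamma*ZZ) = 2 CNOTs
        "linear_swap_network": 3 * pairs,    # fused ZZ+SWAP = 3 CNOTs per pair
    }

for n in (8, 16, 32, 64):
    counts = qaoa_layer_cnots(n)
    print(f"n={n:3d}  all-to-all: {counts['all_to_all']:6d}  "
          f"linear chain: {counts['linear_swap_network']:6d}")
```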
arXiv Detail & Related papers (2022-01-06T21:02:30Z) - Rethinking Architecture Selection in Differentiable NAS [74.61723678821049]
Differentiable Neural Architecture Search is one of the most popular NAS methods owing to its search efficiency and simplicity.
We propose an alternative perturbation-based architecture selection that directly measures each operation's influence on the supernet.
We find that several failure modes of DARTS can be greatly alleviated with the proposed selection method.
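The selection rule can be caricatured in a few lines: for each candidate operation on an edge, remove it from the trained supernet, re-measure validation accuracy, and keep the operation whose removal hurts the most. In the sketch below, `evaluate_supernet` and its accuracy numbers are hypothetical stand-ins for a real supernet evaluation; only the selection logic mirrors the idea.

```python
# Caricature of perturbation-based operation selection on one edge.
# `evaluate_supernet` is a hypothetical stand-in for evaluating the trained
# supernet's validation accuracy with a given set of active operations.
CANDIDATE_OPS = ["skip_connect", "sep_conv_3x3", "max_pool_3x3", "zero"]

def evaluate_supernet(active_ops):
    # Hypothetical accuracy contributions; a real implementation would run
    # the supernet on a validation set.
    mock_accuracy = {"skip_connect": 0.02, "sep_conv_3x3": 0.07,
                     "max_pool_3x3": 0.03, "zero": 0.00}
    return 0.80 + sum(mock_accuracy[op] for op in active_ops)

baseline = evaluate_supernet(CANDIDATE_OPS)
drops = {}
for op in CANDIDATE_OPS:
    without_op = [o for o in CANDIDATE_OPS if o != op]
    drops[op] = baseline - evaluate_supernet(without_op)   # influence of `op`

selected = max(drops, key=drops.get)
print("accuracy drop when removed:", drops)
print("selected operation:", selected)  # the op the supernet relies on most
```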
arXiv Detail & Related papers (2021-08-10T00:53:39Z) - iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream of neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
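For context, the implicit-function-theorem route to the hypergradient in such bilevel problems has the standard form below, where $w^{*}(\alpha)$ denotes the weights that solve the inner training problem for architecture parameters $\alpha$. This is the generic starting point; the particular stochastic approximation iDARTS uses to avoid the inverse Hessian is described in the paper itself.

```latex
% Bilevel objective and the implicit-function-theorem hypergradient,
% evaluated at w = w*(alpha).
\begin{aligned}
  &\min_{\alpha}\; \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\alpha), \alpha\bigr)
  \quad \text{s.t.} \quad
  w^{*}(\alpha) = \arg\min_{w}\; \mathcal{L}_{\mathrm{train}}(w, \alpha), \\[4pt]
  &\nabla_{\alpha} \mathcal{L}_{\mathrm{val}}
  = \frac{\partial \mathcal{L}_{\mathrm{val}}}{\partial \alpha}
  - \frac{\partial \mathcal{L}_{\mathrm{val}}}{\partial w}
    \left(\frac{\partial^{2} \mathcal{L}_{\mathrm{train}}}{\partial w\,\partial w^{\top}}\right)^{-1}
    \frac{\partial^{2} \mathcal{L}_{\mathrm{train}}}{\partial w\,\partial \alpha}.
\end{aligned}
```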
arXiv Detail & Related papers (2021-06-21T00:44:11Z) - Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method costs fewer samples to find the top-performance architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves the state-of-the-art ImageNet performance on the NASNet search space.
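A toy rendition of the progressive weak-predictor loop, on a made-up one-dimensional "architecture space" with a nearest-neighbor regressor standing in for the predictor: each round fits the weak predictor on the architectures evaluated so far, then spends the next evaluation budget inside the region the predictor currently ranks highest. The search space, objective, predictor, and budgets are all invented for illustration.

```python
# Toy progressive search with weak predictors on a fake 1-D "architecture
# space": fit a crude predictor on evaluated points, then sample the next
# batch from the region it ranks highest. All ingredients are invented.
import numpy as np

rng = np.random.default_rng(1)
space = np.linspace(0.0, 1.0, 1000)               # stand-in architecture space
true_perf = np.exp(-40 * (space - 0.73) ** 2)     # hidden "accuracy" landscape

def weak_predictor(x_seen, y_seen, x_query):
    """1-nearest-neighbor regressor: about as weak as predictors get."""
    nearest = np.abs(x_query[:, None] - x_seen[None, :]).argmin(axis=1)
    return y_seen[nearest]

evaluated_x = rng.choice(space, size=20, replace=False)
evaluated_y = true_perf[np.searchsorted(space, evaluated_x)]

for round_id in range(4):
    scores = weak_predictor(evaluated_x, evaluated_y, space)
    top = space[np.argsort(scores)[-100:]]         # predicted high-performance region
    new_x = rng.choice(top, size=10, replace=False)
    new_y = true_perf[np.searchsorted(space, new_x)]
    evaluated_x = np.concatenate([evaluated_x, new_x])
    evaluated_y = np.concatenate([evaluated_y, new_y])
    print(f"round {round_id}: best evaluated performance = {evaluated_y.max():.3f}")
```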
arXiv Detail & Related papers (2021-02-21T01:58:43Z)