Decohering Tensor Network Quantum Machine Learning Models
- URL: http://arxiv.org/abs/2209.01195v1
- Date: Fri, 2 Sep 2022 17:46:50 GMT
- Title: Decohering Tensor Network Quantum Machine Learning Models
- Authors: Haoran Liao, Ian Convy, Zhibo Yang, K. Birgitta Whaley
- Abstract summary: We investigate the competition between decoherence and adding ancillas on the classification performance of two models.
We present numerical evidence that the fully-decohered unitary tree tensor network (TTN) with two ancillas performs at least as well as the non-decohered unitary TTN.
- Score: 6.312362367148171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor network quantum machine learning (QML) models are promising
applications on near-term quantum hardware. While decoherence of qubits is
expected to decrease the performance of QML models, it is unclear to what
extent the diminished performance can be compensated for by adding ancillas to
the models and accordingly increasing the virtual bond dimension of the models.
We investigate here the competition between decoherence and adding ancillas on
the classification performance of two models, with an analysis of the
decoherence effect from the perspective of regression. We present numerical
evidence that the fully-decohered unitary tree tensor network (TTN) with two
ancillas performs at least as well as the non-decohered unitary TTN, suggesting
that it is beneficial to add at least two ancillas to the unitary TTN
regardless of the amount of decoherence that may consequently be introduced.
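As a rough illustration of the fully-decohered limit discussed above (not the paper's TTN model), decoherence of a qubit can be modeled by a depolarizing channel that interpolates between the input state and the maximally mixed state; the sketch below, in NumPy, assumes a single-qubit density matrix:

```python
import numpy as np

def depolarize(rho, p):
    """Apply a depolarizing channel of strength p to a density matrix rho.

    p = 0 leaves the state untouched; p = 1 yields the maximally mixed
    state I/d, i.e., the fully-decohered limit.
    """
    d = rho.shape[0]
    return (1.0 - p) * rho + p * np.eye(d) / d

# Pure state |0><0| on one qubit.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])

fully_mixed = depolarize(rho, 1.0)  # all coherence and populations washed out
partly = depolarize(rho, 0.5)       # trace is preserved at every strength
```

The channel is trace-preserving, so classification probabilities remain normalized for any decoherence strength.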
Related papers
- Quantum-Train: Rethinking Hybrid Quantum-Classical Machine Learning in the Model Compression Perspective [7.7063925534143705]
We introduce the Quantum-Train(QT) framework, a novel approach that integrates quantum computing with machine learning algorithms.
QT achieves remarkable results by employing a quantum neural network alongside a classical mapping model.
arXiv Detail & Related papers (2024-05-18T14:35:57Z) - Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology to design quantum hybrid diffusion models.
We propose two possible hybridization schemes combining quantum computing's superior generalization with classical networks' modularity.
arXiv Detail & Related papers (2024-02-25T16:57:51Z) - Coherent Feed Forward Quantum Neural Network [2.1178416840822027]
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study.
We introduce a bona fide QNN model, which seamlessly aligns with the versatility of a traditional FFNN in terms of its adaptable intermediate layers and nodes.
We test our proposed model on various benchmarking datasets such as the diagnostic breast cancer (Wisconsin) and credit card fraud detection datasets.
arXiv Detail & Related papers (2024-02-01T15:13:26Z) - Approximately Equivariant Quantum Neural Network for $p4m$ Group Symmetries in Images [30.01160824817612]
This work proposes equivariant Quantum Convolutional Neural Networks (EquivQCNNs) for image classification under planar $p4m$ symmetry.
We present the results tested in different use cases, such as phase detection of the 2D Ising model and classification of the extended MNIST dataset.
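A classical sketch of the idea behind p4m invariance (a hypothetical group-averaging wrapper, not the EquivQCNN circuit itself): averaging any model's output over the 8 dihedral transformations of a square image makes the result exactly invariant under planar rotations and reflections:

```python
import numpy as np

def p4m_symmetrize(f, img):
    """Average a model f's output over all 8 elements of the p4m point
    group: 4 rotations, each with and without a horizontal reflection.
    Group averaging makes the returned value exactly p4m-invariant.
    """
    outs = []
    for flip in (False, True):
        x = np.flip(img, axis=1) if flip else img
        for k in range(4):
            outs.append(f(np.rot90(x, k)))
    return np.mean(outs, axis=0)

# Any non-invariant model becomes invariant after symmetrization.
rng = np.random.default_rng(0)
img = rng.standard_normal((4, 4))
f = lambda x: np.array([x[0, 0], (x * np.arange(16).reshape(4, 4)).sum()])
a = p4m_symmetrize(f, img)
```

Equivariant architectures build this symmetry into the layers instead of averaging at the output, but the invariance guarantee is the same.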
arXiv Detail & Related papers (2023-10-03T18:01:02Z) - Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit [62.55763504085508]
We prove that a classical-to-quantum transfer learning architecture using a Variational Quantum Circuit (VQC) improves the representation and generalization (estimation error) capabilities of the VQC model.
We show that the architecture of classical-to-quantum transfer learning leverages pre-trained classical generative AI models, making it easier to find the optimal parameters for the VQC in the training stage.
arXiv Detail & Related papers (2023-05-18T03:08:18Z) - Trainability barriers and opportunities in quantum generative modeling [0.0]
We investigate the barriers to the trainability of quantum generative models.
We show that using implicit generative models with explicit losses leads to a new flavour of barren plateau.
We propose a new local quantum fidelity-type loss which, by leveraging quantum circuits, is both faithful and enjoys trainability guarantees.
arXiv Detail & Related papers (2023-05-04T14:45:02Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Vertical Layering of Quantized Neural Networks for Heterogeneous Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically obtain a network of any precision for on-demand service while only needing to train and maintain one model.
arXiv Detail & Related papers (2022-12-10T15:57:38Z) - MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but also lead to huge computation cost.
We explore to accelerate large-model inference by conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
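A minimal classical sketch of the MoEfication idea (the toy router and the contiguous sharding scheme are assumptions for illustration, not the paper's method): the hidden neurons of a feed-forward layer are partitioned into experts, and only the top-k experts are evaluated at inference:

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden, num_experts, k = 8, 32, 4, 2

W1 = rng.standard_normal((d, hidden))
W2 = rng.standard_normal((hidden, d))

# Shard the hidden dimension: expert i owns a contiguous slice of neurons.
W1_experts = np.split(W1, num_experts, axis=1)
W2_experts = np.split(W2, num_experts, axis=0)

def ffn(x):
    """Dense feed-forward layer: ReLU(x @ W1) @ W2."""
    return np.maximum(x @ W1, 0.0) @ W2

def moe_ffn(x, k):
    """Evaluate only the k highest-scoring experts; the rest are skipped."""
    # Toy router: score each expert by the norm of its pre-activations.
    scores = [np.linalg.norm(x @ w1) for w1 in W1_experts]
    top = np.argsort(scores)[-k:]
    return sum(np.maximum(x @ W1_experts[i], 0.0) @ W2_experts[i] for i in top)

x = rng.standard_normal(d)
```

Because ReLU acts elementwise, evaluating all experts reproduces the dense layer exactly; the savings come from skipping experts whose neurons would mostly be inactive anyway.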
arXiv Detail & Related papers (2021-10-05T02:14:38Z) - Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitive as the baseline.
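A toy sketch of the two compression steps combined here, magnitude pruning followed by uniform quantization (the sparsity level, bit-width, and symmetric quantizer are illustrative choices, not the paper's source-coding format):

```python
import numpy as np

def prune_and_quantize(W, sparsity, num_bits):
    """Magnitude-prune a weight matrix, then uniformly quantize survivors.

    Returns integer codes, a scale for dequantization, and the sparsity mask.
    """
    thresh = np.quantile(np.abs(W).ravel(), sparsity)
    mask = np.abs(W) >= thresh          # keep only the largest weights
    pruned = W * mask
    # Symmetric uniform quantization onto signed num_bits integers.
    scale = np.max(np.abs(pruned)) / (2 ** (num_bits - 1) - 1)
    q = np.round(pruned / scale).astype(np.int8)
    return q, scale, mask

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))
q, scale, mask = prune_and_quantize(W, sparsity=0.9, num_bits=8)
W_hat = q * scale  # dequantized approximation of the pruned weights
```

The sparse integer codes plus a single float scale are what a source-coding stage would then compress further.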
arXiv Detail & Related papers (2021-08-28T20:39:54Z) - A universal duplication-free quantum neural network [0.8399688944263843]
We propose a new QNN model that operates without the need for multiple state duplications.
We find that our model requires significantly fewer qubits and it outperforms the other two in terms of accuracy and relative error.
arXiv Detail & Related papers (2021-06-24T17:45:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.