Decohering Tensor Network Quantum Machine Learning Models
- URL: http://arxiv.org/abs/2209.01195v1
- Date: Fri, 2 Sep 2022 17:46:50 GMT
- Title: Decohering Tensor Network Quantum Machine Learning Models
- Authors: Haoran Liao, Ian Convy, Zhibo Yang, K. Birgitta Whaley
- Abstract summary: We investigate the competition between decoherence and adding ancillas on the classification performance of two models.
We present numerical evidence that the fully-decohered unitary tree tensor network (TTN) with two ancillas performs at least as well as the non-decohered unitary TTN.
- Score: 6.312362367148171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor network quantum machine learning (QML) models are promising
applications on near-term quantum hardware. While decoherence of qubits is
expected to decrease the performance of QML models, it is unclear to what
extent the diminished performance can be compensated for by adding ancillas to
the models and accordingly increasing the virtual bond dimension of the models.
We investigate here the competition between decoherence and adding ancillas on
the classification performance of two models, with an analysis of the
decoherence effect from the perspective of regression. We present numerical
evidence that the fully-decohered unitary tree tensor network (TTN) with two
ancillas performs at least as well as the non-decohered unitary TTN, suggesting
that it is beneficial to add at least two ancillas to the unitary TTN
regardless of the amount of decoherence that may consequently be introduced.
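As a concrete illustration of the setup, the sketch below is a minimal NumPy toy, not the authors' implementation: the node construction, the random stand-in unitary, and the full-dephasing channel are illustrative assumptions. It builds a single unitary TTN node in which two data qubits are joined with n_anc ancillas prepared in |0>, a unitary mixes them, full dephasing models the fully-decohered limit, and one qubit is passed up the tree as the virtual bond.

```python
import numpy as np

def random_unitary(dim, rng):
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix.
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(a)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def ttn_node(rho_in, n_anc, decohere, rng):
    """Apply one unitary TTN node to a two-qubit input state rho_in.

    Ancillas in |0> enlarge the Hilbert space (this is what raises the
    virtual bond dimension); decohere=True applies a full dephasing
    channel, i.e. the fully-decohered limit, to the node output.
    """
    dim = 4 * 2 ** n_anc                       # 2 data qubits + n_anc ancillas
    anc = np.zeros((2 ** n_anc, 2 ** n_anc))
    anc[0, 0] = 1.0                            # ancilla state |0...0><0...0|
    rho = np.kron(rho_in, anc)                 # attach ancillas to the input
    u = random_unitary(dim, rng)               # stand-in for a trained node unitary
    rho = u @ rho @ u.conj().T
    if decohere:
        rho = np.diag(np.diag(rho))            # drop all coherences
    # Trace out everything except the first qubit, which carries the bond upward.
    rho = rho.reshape(2, dim // 2, 2, dim // 2)
    return np.einsum('ikjk->ij', rho)

rng = np.random.default_rng(0)
rho_plus_plus = np.full((4, 4), 0.25)          # |++><++| as an example two-qubit input
print(ttn_node(rho_plus_plus, n_anc=2, decohere=True, rng=rng))
```

Varying n_anc and decohere in this toy mirrors the competition studied in the paper: decoherence degrades the information carried by the node output, while additional ancillas enlarge the space the unitary can exploit.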
Related papers
- Solving the Hubbard model with Neural Quantum States [66.55653324211542]
We report state-of-the-art results for the doped two-dimensional (2D) Hubbard model. We find that different attention heads in the NQS ansatz can directly encode correlations at different scales. Our work establishes NQS as a powerful tool for solving challenging many-fermion systems.
arXiv Detail & Related papers (2025-07-03T14:08:25Z) - Probing Quantum Spin Systems with Kolmogorov-Arnold Neural Network Quantum States [0.0]
We propose SineKAN, a neural network model to represent quantum mechanical wave functions. We find that SineKAN models can be trained to high precision and accuracy with minimal computational cost.
arXiv Detail & Related papers (2025-06-02T17:18:40Z) - A Quantum Neural Network Transfer-Learning Model for Forecasting Problems with Continuous and Discrete Variables [0.0]
This study introduces simple yet effective continuous- and discrete-variable quantum neural network (QNN) models as a transfer-learning approach for forecasting tasks.
The CV-QNN features a single quantum layer with two qubits to establish entanglement and utilizes a minimal set of quantum gates.
The model's frozen parameters are successfully applied to various forecasting tasks, including energy consumption, traffic flow, weather conditions, and cryptocurrency price prediction.
arXiv Detail & Related papers (2025-03-04T22:38:51Z) - Multi-Level Collaboration in Model Merging [56.31088116526825]
This paper explores the intrinsic connections between model merging and model ensembling.
We find that even when previous restrictions are not met, model merging can still attain performance nearly identical, or even superior, to that of ensembling.
arXiv Detail & Related papers (2025-03-03T07:45:04Z) - Quantum-Train with Tensor Network Mapping Model and Distributed Circuit Ansatz [0.8192907805418583]
Quantum-Train (QT) is a hybrid quantum-classical machine learning framework.
It maps quantum state measurements to classical neural network weights.
The traditional QT framework employs a multi-layer perceptron (MLP) for this task, but it struggles with scalability and interpretability.
We introduce a distributed circuit ansatz designed for large-scale quantum machine learning with multiple small quantum processing unit nodes.
arXiv Detail & Related papers (2024-09-11T03:51:34Z) - Quantum-Train: Rethinking Hybrid Quantum-Classical Machine Learning in the Model Compression Perspective [7.7063925534143705]
We introduce the Quantum-Train (QT) framework, a novel approach that integrates quantum computing with machine learning algorithms.
QT achieves remarkable results by employing a quantum neural network alongside a classical mapping model.
arXiv Detail & Related papers (2024-05-18T14:35:57Z) - Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology to design quantum hybrid diffusion models.
We propose two possible hybridization schemes combining quantum computing's superior generalization with classical networks' modularity.
arXiv Detail & Related papers (2024-02-25T16:57:51Z) - Coherent Feed Forward Quantum Neural Network [2.1178416840822027]
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study.
We introduce a bona fide QNN model, which seamlessly aligns with the versatility of a traditional feed-forward neural network (FFNN) in terms of its adaptable intermediate layers and nodes.
We test our proposed model on various benchmarking datasets such as the diagnostic breast cancer (Wisconsin) and credit card fraud detection datasets.
arXiv Detail & Related papers (2024-02-01T15:13:26Z) - Approximately Equivariant Quantum Neural Network for $p4m$ Group Symmetries in Images [30.01160824817612]
This work proposes equivariant Quantum Convolutional Neural Networks (EquivQCNNs) for image classification under planar $p4m$ symmetry.
We present the results tested in different use cases, such as phase detection of the 2D Ising model and classification of the extended MNIST dataset.
arXiv Detail & Related papers (2023-10-03T18:01:02Z) - Pre-training Tensor-Train Networks Facilitates Machine Learning with Variational Quantum Circuits [70.97518416003358]
Variational quantum circuits (VQCs) hold promise for quantum machine learning on noisy intermediate-scale quantum (NISQ) devices.
While tensor-train networks (TTNs) can enhance VQC representation and generalization, the resulting hybrid model, TTN-VQC, faces optimization challenges due to the Polyak-Lojasiewicz (PL) condition.
To mitigate this challenge, we introduce Pre+TTN-VQC, a pre-trained TTN model combined with a VQC.
arXiv Detail & Related papers (2023-05-18T03:08:18Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Vertical Layering of Quantized Neural Networks for Heterogeneous Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
arXiv Detail & Related papers (2022-12-10T15:57:38Z) - MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but also lead to huge computation cost.
We explore accelerating large-model inference via conditional computation based on the sparse-activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z) - Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitively as the baseline.
arXiv Detail & Related papers (2021-08-28T20:39:54Z) - A universal duplication-free quantum neural network [0.8399688944263843]
We propose a new QNN model that achieves universality without the need for multiple state duplications.
We find that our model requires significantly fewer qubits and outperforms the other two models in terms of accuracy and relative error.
arXiv Detail & Related papers (2021-06-24T17:45:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.