Adiabatic Encoding of Pre-trained MPS Classifiers into Quantum Circuits
- URL: http://arxiv.org/abs/2504.09250v1
- Date: Sat, 12 Apr 2025 15:12:46 GMT
- Title: Adiabatic Encoding of Pre-trained MPS Classifiers into Quantum Circuits
- Authors: Keisuke Murota
- Abstract summary: We propose a framework that encodes pre-trained MPS-classifiers into quantum MPS circuits with postselection, and gradually removes the postselection while retaining performance. We prove that training qMPS-classifiers from scratch on a certain artificial dataset is exponentially hard due to barren plateaus, but our adiabatic encoding circumvents this issue.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although Quantum Neural Networks (QNNs) offer powerful methods for classification tasks, their training faces two major obstacles: barren plateaus and local minima. A promising solution is to first train a tensor-network (TN) model classically and then embed it into a QNN. However, embedding TN-classifiers into quantum circuits generally requires postselection, whose success probability may decay exponentially with the system size. We propose an adiabatic encoding framework that encodes pre-trained MPS-classifiers into quantum MPS (qMPS) circuits with postselection, and gradually removes the postselection while retaining performance. We prove that training qMPS-classifiers from scratch on a certain artificial dataset is exponentially hard due to barren plateaus, but our adiabatic encoding circumvents this issue. Additional numerical experiments on binary MNIST also confirm its robustness.
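As a rough illustration of the adiabatic-encoding idea in the abstract, the toy sketch below anneals a mixing parameter s from 0 (classifier readout fully postselected on an ancilla, as when a pre-trained classifier is embedded directly) to 1 (no postselection), re-optimizing the circuit parameters at each step of the schedule. Everything here (the two-qubit circuit, feature map, loss, and schedule) is an assumption made for illustration, not the authors' construction.

```python
# Toy sketch of adiabatic encoding: anneal s from 0 (full postselection on
# the ancilla) to 1 (no postselection), retraining the circuit at each step.
import numpy as np

rng = np.random.default_rng(0)

def two_qubit_unitary(params):
    """Toy parameterized 2-qubit unitary U = exp(-i H(params))."""
    a = params.reshape(4, 4)
    h = (a + a.T) + 1j * (a - a.T)            # Hermitian generator
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w)) @ v.conj().T

def predict(U, x, s):
    """P(label = 1): mix postselected (s = 0) and unprojected (s = 1) readout."""
    data = np.array([np.cos(x / 2), np.sin(x / 2)])   # single-feature map qubit
    psi = U @ np.kron(data, [1.0, 0.0])               # ancilla starts in |0>
    # Basis order |readout, ancilla>: amplitudes 0:|00>, 1:|01>, 2:|10>, 3:|11>.
    p_post = abs(psi[2])**2 / (abs(psi[0])**2 + abs(psi[2])**2 + 1e-12)
    p_full = abs(psi[2])**2 + abs(psi[3])**2
    return (1 - s) * p_post + s * p_full

def loss(params, s, xs, ys):
    U = two_qubit_unitary(params)             # one decomposition per call
    return np.mean([(predict(U, x, s) - y)**2 for x, y in zip(xs, ys)])

def num_grad(f, p, eps=1e-5):
    """Central finite-difference gradient (keeps the toy dependency-free)."""
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

xs = rng.uniform(0.0, np.pi, 24)
ys = (xs > np.pi / 2).astype(float)           # toy binary labels
params = rng.normal(scale=0.1, size=16)

for s in np.linspace(0.0, 1.0, 11):           # the adiabatic schedule
    for _ in range(60):                       # retrain at each value of s
        params = params - 0.5 * num_grad(lambda p: loss(p, s, xs, ys), params)
    print(f"s={s:.1f}  loss={loss(params, s, xs, ys):.4f}")
```

Each re-optimization starts from parameters that already classify well at the previous value of s, which is the sense in which the encoding is adiabatic and how, per the abstract, the barren-plateau problem of training from scratch is sidestepped.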
Related papers
- Trainable Quantum Neural Network for Multiclass Image Classification with the Power of Pre-trained Tree Tensor Networks
Tree tensor networks (TTNs) offer powerful models for image classification.
Embedding TTNs into quantum neural networks (QNNs) may further improve performance by leveraging quantum resources.
We propose forest tensor network (FTN)-classifiers, which aggregate multiple small-bond-dimension TTNs.
arXiv Detail & Related papers (2025-04-21T09:51:39Z) - Neural quantum embedding via deterministic quantum computation with one qubit
We propose a neural quantum embedding (NQE) technique based on deterministic quantum computation with one qubit (DQC1). NQE trains a neural network to maximize the trace distance between quantum states corresponding to different categories of classical data. We show that the NQE-DQC1 protocol is extendable, enabling the use of an NMR system for NQE training.
arXiv Detail & Related papers (2025-01-26T01:33:46Z) - Projected Stochastic Gradient Descent with Quantum Annealed Binary Gradients
We present QP-SBGD, a novel layer-wise optimiser tailored towards training neural networks with binary weights.
Binary neural networks (BNNs) reduce the computational requirements and energy consumption of deep learning models with minimal loss in accuracy.
Our algorithm is implemented layer-wise, making it suitable to train larger networks on resource-limited quantum hardware.
arXiv Detail & Related papers (2023-10-23T17:32:38Z) - A Post-Training Approach for Mitigating Overfitting in Quantum Convolutional Neural Networks
We study post-training approaches for mitigating overfitting in quantum convolutional neural networks (QCNNs).
We find that a straightforward adaptation of a classical post-training method, known as neuron dropout, to the quantum setting leads to a substantial decrease in the success probability of the QCNN.
We argue that this effect exposes the crucial role of entanglement in QCNNs and the vulnerability of QCNNs to entanglement loss.
arXiv Detail & Related papers (2023-09-04T21:46:24Z) - Pre-training Tensor-Train Networks Facilitates Machine Learning with Variational Quantum Circuits
Variational quantum circuits (VQCs) hold promise for quantum machine learning on noisy intermediate-scale quantum (NISQ) devices.
While tensor-train networks (TTNs) can enhance VQC representation and generalization, the resulting hybrid model, TTN-VQC, faces optimization challenges due to the Polyak-Lojasiewicz (PL) condition.
To mitigate this challenge, we introduce Pre+TTN-VQC, a pre-trained TTN model combined with a VQC.
arXiv Detail & Related papers (2023-05-18T03:08:18Z) - Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - FV-Train: Quantum Convolutional Neural Network Training with a Finite Number of Qubits by Extracting Diverse Features
As the convolutional filters in a QCNN extract intrinsic features using a quantum-based ansatz, the network should use only a finite number of qubits to prevent barren plateaus.
We propose a novel QCNN training algorithm to optimize feature extraction while using only a finite number of qubits.
arXiv Detail & Related papers (2022-09-19T02:53:33Z) - Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z) - Hybrid quantum-classical classifier based on tensor network and variational quantum circuit
We introduce a hybrid model combining quantum-inspired tensor networks (TNs) and variational quantum circuits (VQCs) to perform supervised learning tasks.
We show that a matrix-product-state-based TN with low bond dimension performs better than PCA as a feature extractor that compresses data for the input of VQCs in binary classification on the MNIST dataset; a minimal sketch of this idea appears after this list.
arXiv Detail & Related papers (2020-11-30T09:43:59Z) - Toward Trainability of Quantum Neural Networks
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because the gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
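As referenced in the hybrid TN-VQC entry above, the following minimal sketch shows how a low-bond-dimension matrix product state (MPS) can act as a feature extractor: each pixel is lifted to a standard two-component feature and contracted through the MPS cores, leaving a bond-dimension-sized vector that a small variational circuit could consume in place of PCA features. The random cores, cosine/sine feature map, and per-step normalization are illustrative assumptions, not that paper's exact architecture.

```python
# Sketch: compress N pixel values into a D-dim feature vector with an MPS.
import numpy as np

def local_feature(p):
    """Standard 2-dim feature map for a pixel p in [0, 1]."""
    return np.array([np.cos(np.pi * p / 2), np.sin(np.pi * p / 2)])

def mps_features(pixels, cores):
    """Contract pixel features through MPS cores A[k] of shape (D, 2, D)."""
    D = cores[0].shape[0]
    v = np.ones(D) / np.sqrt(D)               # uniform boundary vector
    for p, A in zip(pixels, cores):
        v = np.einsum('i,ijk,j->k', v, A, local_feature(p))
        v /= np.linalg.norm(v) + 1e-12        # keep the contraction stable
    return v                                  # D-dim feature vector for a VQC

# Example: 64 pixels compressed to a D = 4 feature vector with random cores.
rng = np.random.default_rng(1)
D, N = 4, 64
cores = [rng.normal(scale=0.5, size=(D, 2, D)) for _ in range(N)]
print(mps_features(rng.uniform(size=N), cores))
```

In the hybrid scheme, the cores would be trained (classically) alongside or before the downstream circuit, so the MPS learns a task-adapted compression rather than the variance-maximizing one PCA gives.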
This list is automatically generated from the titles and abstracts of the papers on this site.