A Matrix Product State Model for Simultaneous Classification and Generation
- URL: http://arxiv.org/abs/2406.17441v1
- Date: Tue, 25 Jun 2024 10:23:36 GMT
- Title: A Matrix Product State Model for Simultaneous Classification and Generation
- Authors: Alex Mossi, Bojan Žunkovic, Kyriakos Flouris
- Abstract summary: Quantum machine learning (QML) is a rapidly expanding field that merges the principles of quantum computing with the techniques of machine learning.
Here, we present a novel matrix product state (MPS) model, where the MPS functions as both a classifier and a generator.
Our contributions offer insights into the mechanics of tensor network methods for generation tasks.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum machine learning (QML) is a rapidly expanding field that merges the principles of quantum computing with the techniques of machine learning. One of the powerful mathematical frameworks in this domain is tensor networks. These networks are used to approximate high-order tensors by contracting tensors with lower ranks. Originally developed for simulating quantum systems, tensor networks have become integral to quantum computing and, by extension, to QML. Their ability to efficiently represent and manipulate complex, high-dimensional data makes them suitable for various machine learning tasks within the quantum realm. Here, we present a matrix product state (MPS) model, where the MPS functions as both a classifier and a generator. The dual functionality of this novel MPS model permits a strategy that enhances the traditional training of supervised MPS models. This framework is inspired by generative adversarial networks and is geared towards generating more realistic samples by reducing outliers. Additionally, our contributions offer insights into the mechanics of tensor network methods for generation tasks. Specifically, we discuss alternative embedding functions and a new sampling method from non-normalized MPSs.
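As a rough illustration of the dual role the abstract describes, the sketch below contracts a feature-mapped input through an MPS carrying a label index (classification) and draws samples site by site from a label-free MPS whose conditional distributions are normalized locally, so the state itself never needs to be normalized. The shapes, the trigonometric embedding, and the random cores are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, D, L = 6, 2, 4, 3  # sites, physical dim, bond dim, number of classes

# Random cores stand in for a trained model; the first core carries the label leg.
cores = [rng.normal(scale=0.5, size=(1, d, L, D))]
cores += [rng.normal(scale=0.5, size=(D, d, D)) for _ in range(N - 2)]
cores += [rng.normal(scale=0.5, size=(D, d, 1))]

def embed(x):
    """Trigonometric feature map for a pixel value x in [0, 1]."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def classify(pixels):
    """Contract the feature-mapped input through the MPS; returns L raw scores."""
    v = np.einsum('s,asld->ld', embed(pixels[0]), cores[0])
    for x, A in zip(pixels[1:], cores[1:]):
        v = np.einsum('ld,s,dsb->lb', v, embed(x), A)
    return v[:, 0]

def sample(mps):
    """Draw one discrete configuration from a (possibly non-normalized) real MPS.

    Each site's conditional is normalized locally, so the global norm of the
    state never has to be computed.
    """
    # Right environments E[k]: cores k..N-1 contracted with a second copy.
    E = [np.ones((1, 1))]
    for A in reversed(mps):
        E.append(np.einsum('asb,csd,bd->ac', A, A, E[-1]))
    E = E[::-1]
    l = np.ones((1, 1))  # left environment of the sites sampled so far
    config = []
    for k, A in enumerate(mps):
        p = np.einsum('ac,asb,csd,bd->s', l, A, A, E[k + 1])  # Born-rule marginals
        p = np.clip(p, 0.0, None)
        p /= p.sum()
        s = rng.choice(len(p), p=p)
        config.append(s)
        l = np.einsum('ac,ab,cd->bd', l, A[:, s, :], A[:, s, :])
    return config

# A label-free MPS for generation.
gen = [rng.normal(size=(1, d, D))]
gen += [rng.normal(size=(D, d, D)) for _ in range(N - 2)]
gen += [rng.normal(size=(D, d, 1))]

print(classify(rng.uniform(size=N)))  # L raw class scores
print(sample(gen))                    # one sampled bit string of length N
```

Because each conditional is renormalized at its own site, this sampler works for non-normalized MPSs, which is the setting the abstract's sampling discussion addresses.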
Related papers
- Quantum Large Language Models via Tensor Network Disentanglers [0.0]
We propose a method to enhance the performance of Large Language Models (LLMs) by integrating quantum computing and quantum-inspired techniques.
Our approach involves replacing the weight matrices in the Self-Attention and Multi-layer Perceptron layers with a combination of two variational quantum circuits and a quantum-inspired tensor network.
arXiv Detail & Related papers (2024-10-22T20:12:04Z)
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Quantum circuit synthesis with diffusion models [0.6554326244334868]
We use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation.
We steer the model to produce desired quantum operations within gate-based quantum circuits.
We envision DMs as pivotal in quantum circuit synthesis, enhancing both practical applications and theoretical insights into quantum computation.
arXiv Detail & Related papers (2023-11-03T17:17:08Z) - Sparse Modular Activation for Efficient Sequence Modeling [94.11125833685583]
Recent models combining Linear State Space Models with self-attention mechanisms have demonstrated impressive results across a range of sequence modeling tasks.
Current approaches apply attention modules statically and uniformly to all elements in the input sequences, leading to sub-optimal quality-efficiency trade-offs.
We introduce Sparse Modular Activation (SMA), a general mechanism enabling neural networks to sparsely activate sub-modules for sequence elements in a differentiable manner.
arXiv Detail & Related papers (2023-06-19T23:10:02Z) - TeD-Q: a tensor network enhanced distributed hybrid quantum machine
learning framework [59.07246314484875]
TeD-Q is an open-source software framework for quantum machine learning.
It seamlessly integrates classical machine learning libraries with quantum simulators.
It provides a graphical mode in which the quantum circuit and the training progress can be visualized in real-time.
arXiv Detail & Related papers (2023-01-13T09:35:05Z) - Quantum-tailored machine-learning characterization of a superconducting
qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z) - Tensor networks for unsupervised machine learning [9.897828174118974]
We present the Autoregressive Matrix Product States (AMPS), a tensor-network-based model combining the matrix product states from quantum many-body physics and the autoregressive models from machine learning.
We show that the proposed model significantly outperforms the existing tensor-network-based models and the restricted Boltzmann machines.
arXiv Detail & Related papers (2021-06-24T12:51:00Z) - Residual Matrix Product State for Machine Learning [20.158215120846652]
We propose the residual matrix product state (ResMPS) by combining the ideas of the matrix product state (MPS) and the residual neural network (NN).
ResMPS can be treated as a network where its layers map the "hidden" features to the outputs.
It outperforms state-of-the-art tensor-network models in efficiency, stability, and expressive power.
arXiv Detail & Related papers (2020-12-22T05:44:20Z) - Generation of High-Resolution Handwritten Digits with an Ion-Trap
Quantum Computer [55.41644538483948]
We implement a quantum-circuit based generative model to learn and sample the prior distribution of a Generative Adversarial Network.
We train this hybrid algorithm on an ion-trap device based on $^{171}$Yb$^{+}$ ion qubits to generate high-quality images.
arXiv Detail & Related papers (2020-12-07T18:51:28Z) - Recurrent Quantum Neural Networks [7.6146285961466]
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding it each image pixel by pixel and by using modern data augmentation as a preprocessing step.
arXiv Detail & Related papers (2020-06-25T17:59:44Z)
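For readers unfamiliar with the pixel-by-pixel setup the QRNN entry mentions, the classical pattern it mirrors looks like the sketch below; the hidden size, the random (untrained) weights, and the toy 8x8 image are assumptions for illustration, not the paper's quantum model.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 16                          # hidden state size (illustrative)
img = rng.uniform(size=(8, 8))  # toy stand-in for an MNIST image

# Hypothetical random parameters; a real model would train these.
Wx = rng.normal(scale=0.1, size=(H, 1))
Wh = rng.normal(scale=0.1, size=(H, H))
Wout = rng.normal(scale=0.1, size=(10, H))  # 10 digit classes

h = np.zeros(H)
for pixel in img.ravel():       # one pixel per time step
    h = np.tanh(Wx[:, 0] * pixel + Wh @ h)
logits = Wout @ h
print(logits.argmax())          # predicted class index
```

A QRNN replaces the recurrent cell with a parameterized quantum circuit, but the sequential pixel feed is the same.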
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.