Quantum Generative Modeling of Sequential Data with Trainable Token Embedding
- URL: http://arxiv.org/abs/2311.05050v1
- Date: Wed, 8 Nov 2023 22:56:37 GMT
- Title: Quantum Generative Modeling of Sequential Data with Trainable Token Embedding
- Authors: Wanda Hou, Li Miao, Yi-Zhuang You
- Abstract summary: A quantum-inspired generative model known as the Born machine has shown great advances in learning classical and quantum data.
We generalize the embedding method to trainable quantum measurement operators that can be honed simultaneously with the MPS.
Our study indicates that, combined with trainable embedding, Born machines exhibit better performance and learn deeper correlations from the dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models are a class of machine learning models that aim to learn
the underlying probability distribution of data. Unlike discriminative models,
generative models focus on capturing the data's inherent structure, allowing
them to generate new samples that resemble the original data. To fully exploit
the potential of modeling probability distributions using quantum physics, a
quantum-inspired generative model known as the Born machine has shown great
advances in learning classical and quantum data within the matrix product
state (MPS) framework. Born machines support tractable log-likelihood
evaluation, autoregressive and masked sampling, and have shown outstanding
performance in various unsupervised learning tasks. However, much of the
current research has centered on improving the expressive power of the MPS,
predominantly by embedding each token directly as a corresponding tensor
index. In this study, we generalize the embedding method to trainable
quantum measurement operators that can be honed simultaneously with the MPS.
Our study indicates that, combined with trainable embedding, Born machines
exhibit better performance and learn deeper correlations from the dataset.
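
Below is a minimal PyTorch sketch of this setup, offered as an illustration
under simplifying assumptions rather than the authors' implementation: the
paper's trainable measurement operators are reduced here to a trainable
feature vector per token, and the sizes V (vocabulary), L (sequence length),
d (feature dimension), and chi (bond dimension) are arbitrary toy values.

import torch

V, L, d, chi = 4, 8, 3, 10  # toy sizes: vocabulary, length, feature, bond

# Trainable embedding: each token maps to a d-dimensional feature vector
# instead of being fixed to a one-hot tensor index.
embed = torch.nn.Parameter(torch.randn(V, d))

# MPS cores, one (chi, d, chi) tensor per site, plus boundary vectors.
cores = [torch.nn.Parameter(0.1 * torch.randn(chi, d, chi)) for _ in range(L)]
left = torch.nn.Parameter(torch.randn(chi))
right = torch.nn.Parameter(torch.randn(chi))

def amplitude(tokens):
    """Contract the MPS with the embedded tokens to get the Born amplitude."""
    v = left
    for site, t in enumerate(tokens):
        v = v @ torch.einsum('idj,d->ij', cores[site], embed[t])
    return v @ right

def norm_sq():
    """Normalization Z = sum over all sequences of psi(x)^2, computed
    efficiently by summing the transfer operator over tokens at each site."""
    T = torch.outer(left, left).reshape(-1)
    for site in range(L):
        M = torch.einsum('idj,td->tij', cores[site], embed)  # (V, chi, chi)
        E = torch.einsum('tij,tkl->ikjl', M, M).reshape(chi * chi, chi * chi)
        T = T @ E
    return T @ torch.outer(right, right).reshape(-1)

def nll(batch):
    """Mean negative log-likelihood under the Born rule p(x) = psi(x)^2 / Z."""
    Z = norm_sq()
    return torch.stack([-torch.log(amplitude(x) ** 2 / Z) for x in batch]).mean()

# Embedding and MPS cores are optimized jointly ("simultaneously honed").
opt = torch.optim.Adam([embed, left, right] + cores, lr=1e-2)
data = torch.randint(0, V, (32, L))  # toy dataset of token sequences
for step in range(100):
    opt.zero_grad()
    loss = nll(data)
    loss.backward()
    opt.step()

Training minimizes the negative log-likelihood of the Born distribution, with
the token embedding and the MPS cores updated in the same optimizer step.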
Related papers
- Heat Death of Generative Models in Closed-Loop Learning [63.83608300361159]
We study the learning dynamics of generative models that are fed their own generated content in addition to their original training dataset.
We show that, unless a sufficient amount of external data is introduced at each iteration, any non-trivial temperature leads the model to degenerate.
arXiv Detail & Related papers (2024-04-02T21:51:39Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Masked Particle Modeling on Sets: Towards Self-Supervised High Energy Physics Foundation Models [4.299997052226609]
Masked particle modeling (MPM) is a self-supervised method for learning generic, transferable, and reusable representations on unordered sets of inputs.
We study the efficacy of the method in samples of high energy jets at collider physics experiments.
arXiv Detail & Related papers (2024-01-24T15:46:32Z)
- Generative Learning of Continuous Data by Tensor Networks [45.49160369119449]
We introduce a new family of tensor network generative models for continuous data.
We benchmark the performance of this model on several synthetic and real-world datasets.
Our methods give important theoretical and empirical evidence of the efficacy of quantum-inspired approaches for the rapidly growing field of generative learning.
arXiv Detail & Related papers (2023-10-31T14:37:37Z)
- Investigating the generative dynamics of energy-based neural networks [0.35911228556176483]
We study the generative dynamics of Restricted Boltzmann Machines (RBMs).
We show that the capacity to produce diverse data prototypes can be increased by initiating top-down sampling from chimera states.
We also find that the model is not capable of transitioning between all possible digit states within a single generation trajectory.
arXiv Detail & Related papers (2023-05-11T12:05:40Z)
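
For concreteness, here is a toy sketch of the block Gibbs sampling that
drives such generative dynamics; the random weights below stand in for a
trained RBM, and the starting vector merely gestures at the chimera-state
initialization studied in the paper.

import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 16, 8                                  # toy layer sizes
W = rng.normal(scale=0.1, size=(n_v, n_h))        # placeholder weights
b_v, b_h = np.zeros(n_v), np.zeros(n_h)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def gibbs_step(v):
    """One bottom-up/top-down alternation: v -> h -> v'."""
    h = (rng.random(n_h) < sigmoid(v @ W + b_h)).astype(float)
    return (rng.random(n_v) < sigmoid(W @ h + b_v)).astype(float)

v = rng.integers(0, 2, n_v).astype(float)         # e.g. a chimera-like start
for _ in range(100):
    v = gibbs_step(v)
print(v)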
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
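
A hedged sketch of single-qubit data re-uploading with Qiskit follows; the
RY/RZ layer structure and the layer count are assumptions, not the paper's
exact formulations.

import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def reuploading_circuit(x, thetas):
    """Single qubit; each layer re-encodes x in an RY and adds a trainable
    RZ, so successive layers do not collapse into a single rotation."""
    qc = QuantumCircuit(1)
    for k in range(0, len(thetas), 3):
        qc.ry(thetas[k] + thetas[k + 1] * x, 0)   # data re-uploaded per layer
        qc.rz(thetas[k + 2], 0)                   # trainable processing gate
    return qc

def predict(x, thetas):
    """Probability of measuring |1>, used as the class score."""
    state = Statevector.from_instruction(reuploading_circuit(x, thetas))
    return np.abs(state.data[1]) ** 2

thetas = np.random.uniform(-np.pi, np.pi, 9)      # 3 layers, 3 angles each
print(predict(0.3, thetas))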
- Do Quantum Circuit Born Machines Generalize? [58.720142291102135]
We present the first work in the literature that frames the QCBM's generalization performance as an integral evaluation metric for quantum generative models.
We show that the QCBM is able to effectively learn the reweighted dataset and generate unseen samples with higher quality than those in the training set.
arXiv Detail & Related papers (2022-07-27T17:06:34Z)
- Generative Quantum Machine Learning [0.0]
The aim of this thesis is to develop new generative quantum machine learning algorithms.
We introduce a quantum generative adversarial network and a quantum Boltzmann machine implementation, both of which can be realized with parameterized quantum circuits.
arXiv Detail & Related papers (2021-11-24T19:00:21Z)
- Tensor networks for unsupervised machine learning [9.897828174118974]
We present the Autoregressive Matrix Product States (AMPS), a tensor-network-based model combining the matrix product states from quantum many-body physics and the autoregressive models from machine learning.
We show that the proposed model significantly outperforms the existing tensor-network-based models and the restricted Boltzmann machines.
arXiv Detail & Related papers (2021-06-24T12:51:00Z)
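
As a toy illustration of the tractable autoregressive sampling such
tensor-network models enable, here is a sketch that draws sequences from a
plain MPS Born machine with random cores; it mirrors the mechanism but is
not the AMPS construction itself.

import numpy as np

rng = np.random.default_rng(0)
V, L, chi = 3, 6, 5                               # toy sizes
cores = [0.3 * rng.normal(size=(chi, V, chi)) for _ in range(L)]
left, right = rng.normal(size=chi), rng.normal(size=chi)

# Right environments: R[i] sums psi^2 over all suffixes starting at site i.
R = [None] * (L + 1)
R[L] = np.outer(right, right)
for i in reversed(range(L)):
    R[i] = np.einsum('itj,ktl,jl->ik', cores[i], cores[i], R[i + 1])

def sample():
    tokens, rho = [], np.outer(left, left)        # left environment
    for i in range(L):
        # unnormalized p(x_i = t | prefix) for every token t at once
        p = np.einsum('ik,itj,ktl,jl->t', rho, cores[i], cores[i], R[i + 1])
        p = np.clip(p, 0, None)
        p /= p.sum()
        t = rng.choice(V, p=p)
        tokens.append(int(t))
        mat = cores[i][:, t, :]
        rho = mat.T @ rho @ mat                   # condition on chosen token
    return tokens

print(sample())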
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that this approach, Gaussian Mixture Replay (GMR), achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
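
A minimal scikit-learn sketch of GMM-based pseudo-rehearsal follows, under
assumed toy data and component counts; GMR also reuses the mixture for
classification, which is omitted here.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
task1 = rng.normal(loc=0.0, size=(500, 2))        # toy "old task" data
task2 = rng.normal(loc=4.0, size=(500, 2))        # toy "new task" data

gmm = GaussianMixture(n_components=5, random_state=0).fit(task1)

# Continual step: generated pseudo-samples replace stored task1 data.
replay, _ = gmm.sample(500)
gmm = GaussianMixture(n_components=10, random_state=0).fit(
    np.vstack([replay, task2]))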