Design and Implementation of a Quantum Kernel for Natural Language Processing
- URL: http://arxiv.org/abs/2205.06409v1
- Date: Fri, 13 May 2022 00:45:46 GMT
- Title: Design and Implementation of a Quantum Kernel for Natural Language Processing
- Authors: Matt Wright
- Abstract summary: This thesis leverages the DisCoCat model to design a quantum-based kernel function that can be used by a support vector machine (SVM) for NLP tasks.
Two similarity measures were studied: (i) the transition amplitude approach and (ii) the SWAP test.
The explicit model from previous work was used to train word embeddings and achieved a testing accuracy of $93.09 \pm 0.01$%.
- Score: 0.8702432681310401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural language processing (NLP) is the field that attempts to make human
language accessible to computers, and it relies on applying a mathematical
model to express the meaning of symbolic language. One such model, DisCoCat,
defines how to express both the meaning of individual words as well as their
compositional nature. This model can be naturally implemented on quantum
computers, leading to the field of quantum NLP (QNLP). Recent experimental work
used quantum machine learning techniques to map from text to class label using
the expectation value of the quantum-encoded sentence. Theoretical work has been
done on computing the similarity of sentences, but it relies on an as-yet
unrealized quantum memory store. The main goal of this thesis is to leverage the DisCoCat
model to design a quantum-based kernel function that can be used by a support
vector machine (SVM) for NLP tasks. Two similarity measures were studied: (i)
the transition amplitude approach and (ii) the SWAP test. A simple NLP meaning
classification task from previous work was used to train the word embeddings
and evaluate the performance of both models. The Python module lambeq and its
related software stack were used for implementation. The explicit model from
previous work was used to train word embeddings and achieved a testing accuracy
of $93.09 \pm 0.01$%. Both SVM variants achieved higher testing accuracies:
$95.72 \pm 0.01$% for approach (i) and $97.14 \pm 0.01$%
for (ii). The SWAP test was then simulated under a noise model defined by the
real quantum device, ibmq_guadalupe. The explicit model achieved an accuracy of
$91.94 \pm 0.01$% while the SWAP test SVM achieved 96.7% on the testing
dataset, suggesting that the kernelized classifiers are resilient to noise.
These are encouraging results and motivate further investigations of our
proposed kernelized QNLP paradigm.
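To make the proposed kernels concrete, the following is a minimal sketch of how the two similarity measures could be estimated on a simulator and fed to a classical SVM. It is an illustration under stated assumptions, not the thesis code: Qiskit and scikit-learn stand in for the lambeq-related software stack, psi_circ and phi_circ are placeholder state-preparation circuits where the DisCoCat sentence circuits would go, and all function names are hypothetical.

```python
# Sketch of both kernel estimators and a precomputed-kernel SVM.
# Assumptions: qiskit, qiskit-aer, numpy, and scikit-learn are installed;
# psi_circ / phi_circ are measurement-free state-preparation circuits.
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit_aer import AerSimulator
from sklearn.svm import SVC

SIM = AerSimulator()

def transition_amplitude_kernel(psi_circ, phi_circ, shots=8192):
    """Approach (i): run U_phi^dagger U_psi on |0...0> and estimate
    |<phi|psi>|^2 as the probability of the all-zeros outcome."""
    qc = psi_circ.compose(phi_circ.inverse())
    qc.measure_all()
    counts = SIM.run(qc, shots=shots).result().get_counts()
    return counts.get("0" * qc.num_qubits, 0) / shots

def swap_test_kernel(psi_circ, phi_circ, shots=8192):
    """Approach (ii): SWAP test; P(ancilla=0) = (1 + |<phi|psi>|^2) / 2,
    so the kernel value is 2 * P(0) - 1 (clipped against shot noise)."""
    n = psi_circ.num_qubits
    anc = QuantumRegister(1, "anc")
    a, b = QuantumRegister(n, "a"), QuantumRegister(n, "b")
    c = ClassicalRegister(1, "c")
    qc = QuantumCircuit(anc, a, b, c)
    qc.compose(psi_circ, qubits=list(a), inplace=True)  # prepare |psi>
    qc.compose(phi_circ, qubits=list(b), inplace=True)  # prepare |phi>
    qc.h(anc[0])
    for i in range(n):
        qc.cswap(anc[0], a[i], b[i])  # controlled-SWAP, qubit by qubit
    qc.h(anc[0])
    qc.measure(anc[0], c[0])
    counts = SIM.run(qc, shots=shots).result().get_counts()
    return max(2 * counts.get("0", 0) / shots - 1, 0.0)

def gram_matrix(circuits, kernel):
    """Pairwise kernel (Gram) matrix; k(x, x) = 1 for normalized states."""
    K = np.eye(len(circuits))
    for i in range(len(circuits)):
        for j in range(i + 1, len(circuits)):
            K[i, j] = K[j, i] = kernel(circuits[i], circuits[j])
    return K

# Hypothetical usage with sentence circuits and labels from the dataset:
# svm = SVC(kernel="precomputed")
# svm.fit(gram_matrix(train_circuits, swap_test_kernel), train_labels)

# To mimic the noisy simulation, a noise model derived from a fake 16-qubit
# Guadalupe backend could be swapped in (assumption: the fake provider and
# a transpilation step to its basis gates are available):
#   from qiskit_ibm_runtime.fake_provider import FakeGuadalupeV2
#   from qiskit_aer.noise import NoiseModel
#   SIM = AerSimulator(noise_model=NoiseModel.from_backend(FakeGuadalupeV2()))
```

One property this setup highlights: the SVM optimization itself stays entirely classical, and the quantum device (or noisy simulator) is only queried for the pairwise kernel values.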
Related papers
- Extending Quantum Perceptrons: Rydberg Devices, Multi-Class Classification, and Error Tolerance [67.77677387243135]
Quantum Neuromorphic Computing (QNC) merges quantum computation with neural computation to create scalable, noise-resilient algorithms for quantum machine learning (QML).
At the core of QNC is the quantum perceptron (QP), which leverages the analog dynamics of interacting qubits to enable universal quantum computation.
arXiv Detail & Related papers (2024-11-13T23:56:20Z)
- Toward Quantum Machine Translation of Syntactically Distinct Languages [0.0]
We explore the feasibility of language translation using quantum natural language processing algorithms on noisy intermediate-scale quantum (NISQ) devices.
We employ Shannon entropy to demonstrate the significant role that appropriately chosen rotation-gate angles play in the performance of parametrized quantum circuits.
arXiv Detail & Related papers (2023-07-31T11:24:54Z)
- Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference [72.61732440246954]
Large pre-trained language models often lack logical consistency across test inputs.
We propose a framework, ConCoRD, for boosting the consistency and accuracy of pre-trained NLP models.
We show that ConCoRD consistently boosts accuracy and consistency of off-the-shelf closed-book QA and VQA models.
arXiv Detail & Related papers (2022-11-21T21:58:30Z)
- Hierarchical Phrase-based Sequence-to-Sequence Learning [94.10257313923478]
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative parser based on a bracketing grammar whose derivation tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z)
- Validation tests of GBS quantum computers give evidence for quantum advantage with a decoherent target [62.997667081978825]
We use positive-P phase-space simulations of grouped count probabilities as a fingerprint for verifying multi-mode data.
We show how one can disprove faked data, and apply this to a classical count algorithm.
arXiv Detail & Related papers (2022-11-07T12:00:45Z)
- QNet: A Quantum-native Sequence Encoder Architecture [2.8099769011264586]
This work proposes QNet, a novel sequence encoder model that performs inference entirely on a quantum computer using a minimal number of qubits.
In addition, we introduce ResQNet, a quantum-classical hybrid model composed of several QNet blocks linked by residual connections.
arXiv Detail & Related papers (2022-10-31T12:36:37Z)
- A multiclass Q-NLP sentiment analysis experiment using DisCoCat [0.0]
We will tackle sentiment analysis in the noisy intermediate-scale quantum (NISQ) era, using the DisCoCat model of language.
We will first present the basics of quantum computing and the DisCoCat model.
This will enable us to define a general framework to perform NLP tasks on a quantum computer.
arXiv Detail & Related papers (2022-09-07T13:47:35Z)
- Near-Term Advances in Quantum Natural Language Processing [0.03298597939573778]
This paper describes experiments showing that some tasks in natural language processing can already be performed using quantum computers.
The first uses an explicit word-based approach, in which word-topic scoring weights are implemented as fractional rotations of individual qubits.
A new phrase is classified based on the accumulation of these weights in a scoring qubit using entangling controlled-NOT gates.
arXiv Detail & Related papers (2022-06-05T13:10:46Z)
- Quark: Controllable Text Generation with Reinforced Unlearning [68.07749519374089]
Large-scale language models often learn behaviors that are misaligned with user expectations.
We introduce Quantized Reward Konditioning (Quark), an algorithm for optimizing a reward function that quantifies an (un)wanted property.
For unlearning toxicity, negative sentiment, and repetition, our experiments show that Quark outperforms both strong baselines and state-of-the-art reinforcement learning methods.
arXiv Detail & Related papers (2022-05-26T21:11:51Z)
- Differentiable Model Compression via Pseudo Quantization Noise [99.89011673907814]
We propose to add independent pseudo quantization noise to model parameters during training to approximate the effect of a quantization operator.
We experimentally verify that our method outperforms state-of-the-art quantization techniques on several benchmarks and architectures for image classification, language modeling, and audio source separation.
arXiv Detail & Related papers (2021-04-20T14:14:03Z)
- QNLP in Practice: Running Compositional Models of Meaning on a Quantum Computer [0.7194733565949804]
We present results on the first NLP experiments conducted on Noisy Intermediate-Scale Quantum (NISQ) computers.
We create representations for sentences that have a natural mapping to quantum circuits.
We successfully train NLP models that solve simple sentence classification tasks on quantum hardware.
arXiv Detail & Related papers (2021-02-25T13:37:33Z)