Toward Quantum Machine Translation of Syntactically Distinct Languages
- URL: http://arxiv.org/abs/2307.16576v1
- Date: Mon, 31 Jul 2023 11:24:54 GMT
- Title: Toward Quantum Machine Translation of Syntactically Distinct Languages
- Authors: Mina Abbaszade, Mariam Zomorodi, Vahid Salari, Philip Kurian
- Abstract summary: We explore the feasibility of language translation using quantum natural language processing algorithms on noisy intermediate-scale quantum (NISQ) devices.
We employ Shannon entropy to demonstrate the significant role that appropriately chosen rotation-gate angles play in the performance of parametrized quantum circuits.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The present study aims to explore the feasibility of language translation
using quantum natural language processing algorithms on noisy
intermediate-scale quantum (NISQ) devices. Classical methods in natural
language processing (NLP) struggle with handling large-scale computations
required for complex language tasks, but quantum NLP on NISQ devices holds
promise in harnessing quantum parallelism and entanglement to efficiently
process and analyze vast amounts of linguistic data, potentially
revolutionizing NLP applications. Our research endeavors to pave the way for
quantum neural machine translation, which could potentially offer advantages
over classical methods in the future. We employ Shannon entropy to demonstrate
the significant role that appropriately chosen rotation-gate angles play in the
performance of parametrized quantum circuits. In particular, we utilize these
angles (parameters) as a means of communication between quantum circuits of
different languages. To achieve our objective, we adopt the encoder-decoder
model of classical neural networks and implement the translation task using
long short-term memory (LSTM). Our experiments involved 160 samples comprising
English sentences and their Persian translations. We trained the models with
different optimizers, using stochastic gradient descent (SGD) as the primary
optimizer and subsequently incorporating two additional optimizers in
conjunction with SGD. Notably, the best model, consisting of two LSTM layers
and trained with the Adam optimizer, achieved a mean absolute error of 0.03, a
mean squared error of 0.002, and a loss of 0.016. Our small dataset,
though consisting of simple synonymous sentences with word-to-word mappings,
points to the utility of Shannon entropy as a figure of merit in more complex
machine translation models for intricate sentence structures.
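As a rough illustration of how rotation-gate angles shape a circuit's output statistics, the following minimal sketch (plain NumPy, not the authors' implementation; the two-qubit layout, the RY/CNOT gate choice, and the swept angle values are assumptions made for illustration) builds a small parametrized circuit and reports the Shannon entropy of its measurement distribution, the figure of merit discussed above.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Two-qubit CNOT gate (control: qubit 0, target: qubit 1).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit_probs(angles):
    """Apply RY(angle) to each of two qubits, entangle them with a CNOT,
    and return the measurement probabilities in the computational basis."""
    state = np.kron(ry(angles[0]), ry(angles[1])) @ np.array([1.0, 0.0, 0.0, 0.0])
    state = CNOT @ state
    return np.abs(state) ** 2

def shannon_entropy(probs, eps=1e-12):
    """Shannon entropy (in bits) of a measurement-outcome distribution."""
    p = np.clip(probs, eps, 1.0)
    return float(-np.sum(p * np.log2(p)))

# Sweep one rotation angle and observe how the entropy of the output
# distribution responds -- the kind of dependence used as a figure of merit.
for theta in np.linspace(0.0, np.pi, 5):
    h = shannon_entropy(circuit_probs([theta, np.pi / 4]))
    print(f"theta = {theta:.2f}  entropy = {h:.3f} bits")
```

In the paper, such angles serve as the parameters communicated between the circuits of the source and target languages; here they are simply free inputs whose effect on the output entropy can be inspected.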
Related papers
- Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning [73.73967342609603]
We introduce a predictor-corrector learning framework to minimize truncation errors.
We also propose an exponential moving average-based coefficient learning method to strengthen our higher-order predictor.
Our model surpasses a robust 3.8B DeepNet by an average of 2.9 SacreBLEU, using only 1/3 of its parameters.
arXiv Detail & Related papers (2024-11-05T12:26:25Z) - Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Training-efficient density quantum machine learning [2.918930150557355]
Quantum machine learning requires powerful, flexible and efficiently trainable models.
We present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries.
arXiv Detail & Related papers (2024-05-30T16:40:28Z) - Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms [65.268245109828]
We take inspiration from Kearns' SQ oracle and Valiant's weak evaluation oracle.
We introduce an extensive yet intuitive framework that yields unconditional lower bounds for learning from evaluation queries.
arXiv Detail & Related papers (2023-10-26T18:23:21Z) - Optimizing Quantum Federated Learning Based on Federated Quantum Natural
Gradient Descent [17.05322956052278]
We propose an efficient optimization algorithm, namely federated quantum natural gradient descent (FQNGD).
Compared with gradient descent methods like Adam and Adagrad, the FQNGD algorithm requires far fewer training iterations for the QFL to converge.
Our experiments on a handwritten digit classification dataset justify the effectiveness of the FQNGD for the QFL framework.
arXiv Detail & Related papers (2023-02-27T11:34:16Z) - Near-Term Advances in Quantum Natural Language Processing [0.03298597939573778]
This paper describes experiments showing that some tasks in natural language processing can already be performed using quantum computers.
The first uses an explicit word-based approach, in which word-topic scoring weights are implemented as fractional rotations of individual qubits.
A new phrase is classified based on the accumulation of these weights in a scoring qubit using entangling controlled-NOT gates (a simplified sketch of this scoring idea follows this list).
arXiv Detail & Related papers (2022-06-05T13:10:46Z) - When BERT Meets Quantum Temporal Convolution Learning for Text
Classification in Heterogeneous Computing [75.75419308975746]
This work proposes a vertical federated learning architecture based on variational quantum circuits to demonstrate the competitive performance of a quantum-enhanced pre-trained BERT model for text classification.
Our experiments on intent classification show that our proposed BERT-QTC model attains competitive experimental results in the Snips and ATIS spoken language datasets.
arXiv Detail & Related papers (2022-02-17T09:55:21Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models
for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - QNLP in Practice: Running Compositional Models of Meaning on a Quantum
Computer [0.7194733565949804]
We present results on the first NLP experiments conducted on Noisy Intermediate-Scale Quantum (NISQ) computers.
We create representations for sentences that have a natural mapping to quantum circuits.
We successfully train NLP models that solve simple sentence classification tasks on quantum hardware.
arXiv Detail & Related papers (2021-02-25T13:37:33Z) - Grammar-Aware Question-Answering on Quantum Computers [0.17205106391379021]
We perform the first implementation of an NLP task on noisy intermediate-scale quantum (NISQ) hardware.
We encode word-meanings in quantum states and we explicitly account for grammatical structure.
Our novel QNLP model shows concrete promise for scalability as the quality of the quantum hardware improves.
arXiv Detail & Related papers (2020-12-07T14:49:34Z)
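The explicit word-based scheme summarized under "Near-Term Advances in Quantum Natural Language Processing" above can be pictured with a short sketch. This is an illustrative reconstruction, not that paper's code: the word weights and the example phrases are invented, and the entangling controlled-NOT accumulation described in the entry is simplified here to direct fractional rotations on a single scoring qubit.

```python
import numpy as np

# Hypothetical word-topic weights (fractions of a half-turn), assumed for illustration.
WORD_WEIGHTS = {"quantum": 0.40, "circuit": 0.35, "recipe": 0.05, "flour": 0.02}

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def topic_score(phrase):
    """Accumulate each word's weight as a fractional RY rotation on a single
    'scoring' qubit and return the probability of measuring |1>."""
    state = np.array([1.0, 0.0])             # scoring qubit starts in |0>
    for word in phrase.split():
        weight = WORD_WEIGHTS.get(word.lower(), 0.0)
        state = ry(weight * np.pi) @ state    # fractional rotation per word
    return float(np.abs(state[1]) ** 2)

print(topic_score("quantum circuit"))  # rotations accumulate toward |1>: high score
print(topic_score("flour recipe"))     # little rotation away from |0>: low score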