Application of Quantum Tensor Networks for Protein Classification
- URL: http://arxiv.org/abs/2403.06890v1
- Date: Mon, 11 Mar 2024 16:47:09 GMT
- Title: Application of Quantum Tensor Networks for Protein Classification
- Authors: Debarshi Kundu, Archisman Ghosh, Srinivasan Ekambaram, Jian Wang,
Nikolay Dokholyan, Swaroop Ghosh
- Abstract summary: We show that protein sequences can be thought of as sentences in natural language processing.
We classify proteins based on their subcellular locations.
We demonstrate that Quantum Tensor Networks (QTN) can effectively handle the complexity and diversity of protein sequences.
- Score: 3.5300092061072523
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We show that protein sequences can be thought of as sentences in natural
language processing and can be parsed using the existing Quantum Natural
Language framework into parameterized quantum circuits with a reasonable
number of qubits,
which can be trained to solve various protein-related machine-learning
problems. We classify proteins based on their subcellular locations, a pivotal
task in bioinformatics that is key to understanding biological processes and
disease mechanisms. Leveraging the quantum-enhanced processing capabilities, we
demonstrate that Quantum Tensor Networks (QTN) can effectively handle the
complexity and diversity of protein sequences. We present a detailed
methodology that adapts QTN architectures to the nuanced requirements of
protein data, supported by comprehensive experimental results. We demonstrate
two distinct QTNs, inspired by classical recurrent neural networks (RNN) and
convolutional neural networks (CNN), to solve the binary classification task
mentioned above. Our top-performing quantum model has achieved a 94% accuracy
rate, which is comparable to the performance of a classical model that uses the
ESM2 protein language model embeddings. It's noteworthy that the ESM2 model is
extremely large, containing 8 million parameters in its smallest configuration,
whereas our best quantum model requires only around 800 parameters. We
demonstrate that these hybrid models exhibit promising performance, showcasing
their potential to compete with classical models of similar complexity.
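As a concrete illustration of the RNN-inspired QTN described in the abstract, the snippet below encodes a short amino-acid sequence into single-qubit rotations and sweeps trainable two-qubit blocks along the register, in the spirit of a matrix-product-state ansatz. This is a minimal sketch assuming PennyLane, not the authors' implementation (the paper parses sequences through an existing Quantum Natural Language framework); the residue encoding, qubit count, and circuit layout are illustrative assumptions.

```python
# Minimal MPS-style (recurrent) quantum classifier sketch, assuming PennyLane.
# The encoding and circuit layout are illustrative, not the paper's pipeline.
import pennylane as qml
from pennylane import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 canonical residues
N_QUBITS = 8                          # hypothetical qubit budget

dev = qml.device("default.qubit", wires=N_QUBITS)

def encode(sequence):
    """Hypothetical encoding: map each residue to an angle in [0, 2*pi)."""
    return [2 * np.pi * AMINO_ACIDS.index(aa) / len(AMINO_ACIDS)
            for aa in sequence[:N_QUBITS]]

@qml.qnode(dev)
def mps_classifier(angles, weights):
    # Data loading: one RY rotation per residue.
    for wire, theta in enumerate(angles):
        qml.RY(theta, wires=wire)
    # MPS-style sweep: trainable two-qubit blocks pass information along
    # the chain, analogous to the hidden state of a recurrent cell.
    for i in range(N_QUBITS - 1):
        qml.RY(weights[i, 0], wires=i)
        qml.RY(weights[i, 1], wires=i + 1)
        qml.CNOT(wires=[i, i + 1])
    # Binary label read out from the last qubit's expectation value.
    return qml.expval(qml.PauliZ(N_QUBITS - 1))

weights = np.random.uniform(0, 2 * np.pi, size=(N_QUBITS - 1, 2),
                            requires_grad=True)
print(mps_classifier(encode("MKTAYIAKQR"), weights))  # output in [-1, 1]
```

A CNN-inspired variant would contract neighbouring qubits in a tree-like layout rather than a linear sweep; in either case the number of trainable angles grows only linearly with the number of two-qubit blocks, which is how circuits in this family can stay near the roughly 800-parameter scale quoted above.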
Related papers
- Quantum Neural Network applications to Protein Binding Affinity Predictions [0.0]
Quantum neural networks (QNNs) have gained traction as a research focus.
This study proposes thirty variations of multilayer perceptron-based quantum neural networks.
Results indicate that the quantum models achieved approximately 20% higher accuracy on one unseen dataset.
arXiv Detail & Related papers (2025-08-05T13:47:15Z)
- Quantum and Hybrid Machine-Learning Models for Materials-Science Tasks [0.0]
We design and estimate quantum machine learning and hybrid quantum-classical models.
We predict stacking fault energies and solutes that can ductilize magnesium.
arXiv Detail & Related papers (2025-07-10T20:29:16Z)
- Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQC).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
arXiv Detail & Related papers (2024-11-13T12:03:39Z)
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Training-efficient density quantum machine learning [2.918930150557355]
Quantum machine learning requires powerful, flexible and efficiently trainable models.
We present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries.
arXiv Detail & Related papers (2024-05-30T16:40:28Z)
- Multi-Scale Feature Fusion Quantum Depthwise Convolutional Neural Networks for Text Classification [3.0079490585515343]
We propose a novel quantum neural network (QNN) model based on quantum convolution.
We develop the quantum depthwise convolution that significantly reduces the number of parameters and lowers computational complexity.
We also introduce the multi-scale feature fusion mechanism to enhance model performance by integrating word-level and sentence-level features.
arXiv Detail & Related papers (2024-05-22T10:19:34Z)
- Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
A second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process.
arXiv Detail & Related papers (2023-04-18T18:23:20Z)
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that quantum circuit Born machines (QCBMs) are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Quantum Self-Attention Neural Networks for Text Classification [8.975913540662441]
We propose a simple new network architecture called the quantum self-attention neural network (QSANN).
We introduce the self-attention mechanism into quantum neural networks and then utilize a Gaussian projected quantum self-attention serving as a sensible quantum version of self-attention.
Our method exhibits robustness to low-level quantum noise and resilience across quantum neural network architectures.
arXiv Detail & Related papers (2022-05-11T16:50:46Z)
- When BERT Meets Quantum Temporal Convolution Learning for Text Classification in Heterogeneous Computing [75.75419308975746]
This work proposes a vertical federated learning architecture based on variational quantum circuits to demonstrate the competitive performance of a quantum-enhanced pre-trained BERT model for text classification.
Our experiments on intent classification show that our proposed BERT-QTC model attains competitive results on the Snips and ATIS spoken language datasets.
arXiv Detail & Related papers (2022-02-17T09:55:21Z)
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Recurrent Quantum Neural Networks [7.6146285961466]
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel by pixel, and by utilising modern data augmentation as a preprocessing step.
arXiv Detail & Related papers (2020-06-25T17:59:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.