Practical application of quantum neural network to materials
informatics: prediction of the melting points of metal oxides
- URL: http://arxiv.org/abs/2310.17935v1
- Date: Fri, 27 Oct 2023 07:21:36 GMT
- Title: Practical application of quantum neural network to materials
informatics: prediction of the melting points of metal oxides
- Authors: Hirotoshi Hirai
- Abstract summary: Quantum neural network (QNN) models have received increasing attention owing to their strong expressibility and resistance to overfitting.
This study aims to construct a QNN model to predict the melting points of metal oxides.
Various architectures (encoding methods and entangler arrangements) are explored to create an effective QNN model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum neural network (QNN) models have received increasing attention owing
to their strong expressibility and resistance to overfitting. These properties
are particularly useful when the training dataset is small, making QNNs a
good fit for materials informatics (MI) problems. However, there are only a few
examples of QNNs applied to multivariate regression models, and
little is known about how these models are constructed. This study aims to
construct a QNN model to predict the melting points of metal oxides as an
example of a multivariate regression task for the MI problem. Different
architectures (encoding methods and entangler arrangements) are explored to
create an effective QNN model. Shallow-depth ansatzes could achieve sufficient
expressibility using sufficiently entangled circuits. The "linear" entangler
was adequate for providing the necessary entanglement. The expressibility of
the QNN model could be further improved by increasing the circuit width. The
generalization performance could also be improved, outperforming the classical
NN model. No overfitting was observed in the QNN models with a well-designed
encoder. These findings suggest that QNN can be a useful tool for MI.
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that the resulting Scalable MNN (S-MNN) matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Coherent Feed Forward Quantum Neural Network [2.1178416840822027]
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study.
We introduce a bona fide QNN model that matches the versatility of a traditional feed-forward neural network (FFNN), with adaptable intermediate layers and nodes.
We test our proposed model on various benchmarking datasets such as the diagnostic breast cancer (Wisconsin) and credit card fraud detection datasets.
arXiv Detail & Related papers (2024-02-01T15:13:26Z)
- A Post-Training Approach for Mitigating Overfitting in Quantum Convolutional Neural Networks [0.24578723416255752]
We study post-training approaches for mitigating overfitting in quantum convolutional neural networks (QCNNs).
We find that a straightforward adaptation of a classical post-training method, known as neuron dropout, to the quantum setting leads to a substantial decrease in the success probability of the QCNN.
We argue that this effect exposes the crucial role of entanglement in QCNNs and the vulnerability of QCNNs to entanglement loss.
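As a toy illustration of this entanglement sensitivity (our own sketch, not the paper's protocol), the following hypothetical PennyLane snippet removes one entangling layer from a fixed parameterized circuit, the quantum analogue of dropping neurons, and measures how far the output state moves:

```python
# Toy illustration: removing one ring of CNOTs ("quantum dropout") can
# sharply change a circuit's output state. All parameters are hypothetical.
import numpy as np
import pennylane as qml

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, (n_layers, n_qubits, 3))

@qml.qnode(dev)
def circuit(weights, drop_layer=None):
    # Hardware-efficient ansatz: single-qubit rotations plus a CNOT ring.
    for layer in range(n_layers):
        for q in range(n_qubits):
            qml.Rot(*weights[layer, q], wires=q)
        if layer != drop_layer:  # "dropout": skip one ring of CNOTs
            for q in range(n_qubits):
                qml.CNOT(wires=[q, (q + 1) % n_qubits])
    return qml.state()

full = circuit(weights)
dropped = circuit(weights, drop_layer=1)
print("state overlap:", np.abs(np.vdot(full, dropped)) ** 2)
```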
arXiv Detail & Related papers (2023-09-04T21:46:24Z)
- Quantum Recurrent Neural Networks for Sequential Learning [11.133759363113867]
We propose a new kind of quantum recurrent neural network (QRNN) aimed at quantum-advantageous applications in the near term.
Our QRNN is built by stacking quantum recurrent blocks (QRBs) in a staggered way, which can greatly reduce the algorithm's requirements on the coherence time of quantum devices.
Numerical experiments show that our QRNN achieves much better prediction (classification) accuracy than classical RNNs and state-of-the-art QNN models for sequential learning.
arXiv Detail & Related papers (2023-02-07T04:04:39Z) - Low-bit Quantization of Recurrent Neural Network Language Models Using
Alternating Direction Methods of Multipliers [67.688697838109]
This paper presents a novel method to train quantized RNNLMs from scratch using alternating direction methods of multipliers (ADMM).
Experiments on two tasks suggest the proposed ADMM quantization achieved a model size compression factor of up to 31 times over the full-precision baseline RNNLMs.
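As a rough sketch of the general ADMM quantization loop (a toy quadratic loss and an assumed 5-level weight grid, not the paper's RNNLM setup), the method alternates a full-precision update, a projection onto the low-bit grid, and a dual update:

```python
# Toy NumPy sketch of ADMM-based weight quantization; names and the loss
# are hypothetical, chosen only to make the three-step loop concrete.
import numpy as np

def project_to_grid(w, levels):
    # Nearest-neighbour projection onto the allowed quantized values.
    return levels[np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)]

rng = np.random.default_rng(0)
target = rng.normal(size=8)                      # toy "data": loss = ||w - target||^2
levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # assumed low-bit grid
w = rng.normal(size=8)                           # full-precision weights
q = project_to_grid(w, levels)                   # quantized copy
u = np.zeros_like(w)                             # scaled dual variable
rho, lr = 1.0, 0.1

for _ in range(200):
    # 1) Full-precision step on loss + (rho/2)||w - q + u||^2.
    grad = 2 * (w - target) + rho * (w - q + u)
    w -= lr * grad
    # 2) Projection step: quantize w + u.
    q = project_to_grid(w + u, levels)
    # 3) Dual update.
    u += w - q

print("quantized weights:", q)
```

In practice, the full-precision step would be a stochastic-gradient step on the language-model loss rather than this closed-form toy gradient.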
arXiv Detail & Related papers (2021-11-29T09:30:06Z) - The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We study QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy than QNNs with random structures.
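To make the vanishing-gradient bottleneck concrete, here is a small numerical sketch (ours, not the paper's code) that estimates the variance of one gradient component for random entangling circuits of increasing width; the variance is expected to shrink rapidly as qubits are added:

```python
# Sketch of the "barren plateau" effect: gradient variance of a random
# circuit decays quickly with qubit count. Layer/sample counts are arbitrary.
import pennylane as qml
from pennylane import numpy as np

def grad_variance(n_qubits, n_layers=5, n_samples=30):
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit(weights):
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
    samples = []
    for _ in range(n_samples):
        w = np.array(np.random.uniform(0, 2 * np.pi, shape), requires_grad=True)
        samples.append(float(qml.grad(circuit)(w)[0, 0, 0]))  # one component
    return np.var(np.array(samples))

for n in (2, 4, 6):
    print(n, "qubits:", grad_variance(n))
```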
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- A Tutorial on Quantum Convolutional Neural Networks (QCNN) [11.79760591464748]
Convolutional Neural Networks (CNNs) are a popular model in computer vision.
CNNs are challenging to train efficiently when the dimensionality of the data or the model becomes too large.
Quantum Convolutional Neural Networks (QCNNs) offer a new way to solve problems tackled by CNNs, using a quantum computing environment.
arXiv Detail & Related papers (2020-09-20T12:29:05Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs; here, low-bit classical networks rather than quantum models) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of full-precision networks.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.