Application of quantum neural network model to a multivariate regression problem
- URL: http://arxiv.org/abs/2310.12559v1
- Date: Thu, 19 Oct 2023 08:10:12 GMT
- Title: Application of quantum neural network model to a multivariate regression problem
- Authors: Hirotoshi Hirai
- Abstract summary: This study investigates the effect of the size of the training data on generalization performance.
The results indicate that the QNN is particularly effective when the training data set is small.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since the introduction of the quantum neural network model, it has been
widely studied due to its strong expressive power and robustness to
overfitting. To date, the model has been evaluated primarily in classification
tasks, but its performance in practical multivariate regression problems has
not been thoroughly examined. In this study, the Auto-MPG data set (392 valid
data points, excluding missing data, on fuel efficiency for various vehicles)
was used to construct QNN models and investigate the effect of the size of the
training data on generalization performance. The results indicate that the
QNN is particularly effective when the training data set is small, suggesting
that it is especially well suited to small-data problems such as those
encountered in Materials Informatics.
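
To make the setup concrete, here is a minimal sketch (not the authors' code) of a variational QNN regressor in PennyLane. The qubit count, ansatz (AngleEmbedding followed by StronglyEntanglingLayers), depth, optimizer, and data scaling are all illustrative assumptions; features are assumed scaled to [0, pi] and targets to [-1, 1] so that a single Pauli-Z expectation value can serve as the prediction.

```python
# Minimal QNN regression sketch (illustrative assumptions throughout).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4   # assumption: one qubit per scaled input feature
n_layers = 3   # assumption: depth of the variational ansatz

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Angle-encode the scaled features, then apply a trainable entangling ansatz.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # A single expectation value in [-1, 1] serves as the regression output.
    return qml.expval(qml.PauliZ(0))

def mse(weights, X, y):
    # Mean squared error over the training set.
    loss = 0.0
    for x, target in zip(X, y):
        loss = loss + (circuit(weights, x) - target) ** 2
    return loss / len(X)

# Dummy stand-ins for scaled Auto-MPG features and fuel-efficiency targets.
X_train = np.random.uniform(0, np.pi, size=(20, n_qubits), requires_grad=False)
y_train = np.random.uniform(-1, 1, size=20, requires_grad=False)

weights = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits, 3),
                            requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(50):
    weights, _, _ = opt.step(mse, weights, X_train, y_train)
```

The paper's experiment then varies the size of the training split and compares generalization performance; the specific circuit and hyperparameters above are placeholders for that comparison.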
Related papers
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
- Amortised Inference in Bayesian Neural Networks [0.0]
We introduce the Amortised Pseudo-Observation Variational Inference Bayesian Neural Network (APOVI-BNN).
We show that the amortised inference is of similar or better quality than that obtained through traditional variational inference.
We then discuss how the APOVI-BNN may be viewed as a new member of the neural process family.
arXiv Detail & Related papers (2023-09-06T14:02:33Z)
- Do Neural Topic Models Really Need Dropout? Analysis of the Effect of Dropout in Topic Modeling [0.6445605125467573]
Dropout is a widely used regularization trick for mitigating overfitting in large feedforward neural networks trained on small datasets.
We have analyzed the consequences of dropout in the encoder as well as in the decoder of the VAE architecture in three widely used neural topic models.
arXiv Detail & Related papers (2023-03-28T13:45:39Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Kronecker Factorization for Preventing Catastrophic Forgetting in Large-scale Medical Entity Linking [7.723047334864811]
In the medical domain, sequential training on tasks may sometimes be the only way to train models.
This, however, risks catastrophic forgetting, i.e., a substantial drop in accuracy on prior tasks when a model is updated for a new task.
We show the effectiveness of this technique on the important and illustrative task of medical entity linking across three datasets.
arXiv Detail & Related papers (2021-11-11T01:51:01Z)
- Improving Classifier Training Efficiency for Automatic Cyberbullying Detection with Feature Density [58.64907136562178]
We study the effectiveness of Feature Density (FD) using different linguistically-backed feature preprocessing methods.
We hypothesise that estimating dataset complexity allows for the reduction of the number of required experiments.
The difference in linguistic complexity of datasets allows us to additionally discuss the efficacy of linguistically-backed word preprocessing.
arXiv Detail & Related papers (2021-11-02T15:48:28Z)
- Simulated Data Generation Through Algorithmic Force Coefficient Estimation for AI-Based Robotic Projectile Launch Modeling [7.434188351403889]
We introduce a new framework for algorithmic estimation of force coefficients for non-rigid object launching.
We implement a novel training algorithm and objective for our deep neural network to accurately model the launch trajectories of non-rigid objects.
arXiv Detail & Related papers (2021-05-09T18:47:45Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical Polyadic (CP) decomposition on its parameters (a hedged sketch of one such unit follows this list).
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation [1.433758865948252]
We propose a new formalism of knowledge distillation for regression problems.
First, we propose a new loss function, teacher outlier rejection loss, which rejects outliers in training samples using teacher model predictions (a hedged sketch follows this list).
Second, considering a multi-task network makes the training of the student model's feature extraction more effective.
arXiv Detail & Related papers (2020-02-28T08:46:12Z)
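
Two of the related entries describe mechanisms concrete enough to sketch. First, for the Rank-R FNN above, a minimal sketch, under stated assumptions, of a single hidden unit whose weight tensor is constrained to a rank-R CP decomposition; the factor shapes, initialization, and sigmoid nonlinearity are illustrative choices, not the paper's exact architecture.

```python
# Sketch: one hidden unit with CP-decomposed weights for matrix-shaped inputs.
import torch
import torch.nn as nn

class RankRUnit(nn.Module):
    """For x of shape (batch, I1, I2), computes sigmoid(<W, x> + b), where
    W = sum_r a_r (outer) b_r is constrained to CP rank R (assumed form)."""

    def __init__(self, i1: int, i2: int, rank: int):
        super().__init__()
        self.a = nn.Parameter(torch.randn(rank, i1) * 0.1)  # mode-1 factors
        self.b = nn.Parameter(torch.randn(rank, i2) * 0.1)  # mode-2 factors
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # <sum_r a_r outer b_r, x> = sum_r a_r^T x b_r, computed per batch item;
        # the input is never vectorized, so both mode structures are preserved.
        scores = torch.einsum("ri,bij,rj->br", self.a, x, self.b).sum(dim=1)
        return torch.sigmoid(scores + self.bias)

# Usage with placeholder shapes (e.g., a spatial window by spectral bands).
unit = RankRUnit(i1=25, i2=8, rank=3)
out = unit(torch.randn(16, 25, 8))  # -> shape (16,)
```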
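
Second, for the knowledge-distillation entry, a sketch of what an outlier-rejecting regression loss might look like, assuming a simple absolute-deviation threshold between teacher predictions and labels; the threshold value and masking rule are assumptions, not the paper's exact formulation.

```python
# Sketch of an outlier-rejecting distillation loss (assumed formulation).
import torch

def teacher_outlier_rejection_mse(student_pred: torch.Tensor,
                                  teacher_pred: torch.Tensor,
                                  target: torch.Tensor,
                                  threshold: float = 2.0) -> torch.Tensor:
    # Samples whose label deviates strongly from the teacher's prediction are
    # treated as outliers and dropped from the student's regression loss.
    keep = (teacher_pred - target).abs() < threshold
    if keep.sum() == 0:
        # No inliers in this batch: return a zero that keeps the graph intact.
        return (student_pred * 0.0).sum()
    return torch.mean((student_pred[keep] - target[keep]) ** 2)
```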
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.