AutoQML: Automatic Generation and Training of Robust Quantum-Inspired
Classifiers by Using Genetic Algorithms on Grayscale Images
- URL: http://arxiv.org/abs/2208.13246v1
- Date: Sun, 28 Aug 2022 16:33:48 GMT
- Title: AutoQML: Automatic Generation and Training of Robust Quantum-Inspired
Classifiers by Using Genetic Algorithms on Grayscale Images
- Authors: Sergio Altares-López, Juan José García-Ripoll, Angela Ribeiro
- Abstract summary: We propose a new hybrid system for automatically generating and training quantum-inspired classifiers on grayscale images.
We define a dynamic fitness function to obtain the smallest possible circuit and highest accuracy on unseen data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new hybrid system for automatically generating and training
quantum-inspired classifiers on grayscale images by using multiobjective
genetic algorithms. We define a dynamic fitness function to obtain the smallest
possible circuit and highest accuracy on unseen data, ensuring that the
proposed technique is generalizable and robust. We minimize the complexity of
the generated circuits in terms of the number of entanglement gates by
penalizing their appearance. We reduce the size of the images with two
dimensionality reduction approaches: principal component analysis (PCA), which
is encoded in the individual for optimization purpose, and a small
convolutional autoencoder (CAE). These two methods are compared with one
another and with a classical nonlinear approach to understand their behaviors
and to ensure that the classification ability is due to the quantum circuit and
not the preprocessing technique used for dimensionality reduction.
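The dynamic fitness function described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a candidate circuit is represented as a gate list plus a validation accuracy, and the penalty weights (`w_size`, `w_ent`) and the set of entangling gates are illustrative choices.

```python
# Hedged sketch of a multiobjective-style fitness in the spirit of the paper:
# reward accuracy on unseen data while penalizing circuit size and, more
# heavily, the number of entanglement gates. Fields and weights are
# illustrative assumptions, not the paper's actual encoding.
from dataclasses import dataclass
from typing import List


@dataclass
class Individual:
    gates: List[str]      # e.g. ["RX", "RZ", "CNOT", ...]
    val_accuracy: float   # accuracy on held-out (unseen) data

ENTANGLING = {"CNOT", "CZ", "SWAP"}


def fitness(ind: Individual, w_size: float = 0.01, w_ent: float = 0.05) -> float:
    """Higher is better: accuracy minus penalties for size and entanglement."""
    n_total = len(ind.gates)
    n_ent = sum(g in ENTANGLING for g in ind.gates)
    return ind.val_accuracy - w_size * n_total - w_ent * n_ent


# With equal accuracy, the smaller, less entangled circuit ranks higher:
small = Individual(["RX", "RZ"], val_accuracy=0.90)
large = Individual(["RX", "RZ", "CNOT", "CNOT", "RY"], val_accuracy=0.90)
assert fitness(small) > fitness(large)
```

A genetic algorithm would maximize this score over generations, so selection pressure simultaneously drives accuracy up and circuit complexity down.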
Related papers
- Transformers as Statisticians: Provable In-Context Learning with
In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- Fundamental Limits of Two-layer Autoencoders, and Achieving Them with
Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
arXiv Detail & Related papers (2022-12-27T12:37:34Z)
- Generating quantum feature maps for SVM classifier [0.0]
We present and compare two methods of generating quantum feature maps for a quantum-enhanced support vector machine.
The first method is a genetic algorithm with a multi-objective fitness function using a penalty method, which incorporates maximizing classification accuracy.
The second method uses a variational quantum circuit, focusing on how to construct the ansatz based on unitary matrix decomposition.
arXiv Detail & Related papers (2022-07-23T07:28:23Z)
- Learning Representations for CSI Adaptive Quantization and Feedback [51.14360605938647]
We propose an efficient method for adaptive quantization and feedback in frequency division duplexing systems.
Existing works mainly focus on the implementation of autoencoder (AE) neural networks for CSI compression.
We present two methods: one based on post-training quantization and one in which the codebook is found during training of the AE.
arXiv Detail & Related papers (2022-07-13T08:52:13Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models
for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Generalized Learning Vector Quantization for Classification in
Randomized Neural Networks and Hyperdimensional Computing [4.4886210896619945]
We propose a modified RVFL network that avoids computationally expensive matrix operations during training.
The proposed approach achieved state-of-the-art accuracy on a collection of datasets from the UCI Machine Learning Repository.
arXiv Detail & Related papers (2021-06-17T21:17:17Z)
- Automatic design of quantum feature maps [0.3867363075280543]
We propose a new technique for the automatic generation of optimal ad-hoc ansätze for classification using a quantum support vector machine (QSVM).
This efficient method is based on the NSGA-II multiobjective genetic algorithm, which allows both maximizing the accuracy and minimizing the ansatz size.
arXiv Detail & Related papers (2021-05-26T15:31:10Z)
- Quantized Proximal Averaging Network for Analysis Sparse Coding [23.080395291046408]
We unfold an iterative algorithm into a trainable network that facilitates learning sparsity prior to quantization.
We demonstrate applications to compressed image recovery and magnetic resonance image reconstruction.
arXiv Detail & Related papers (2021-05-13T12:05:35Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for the ansätze used in variational quantum algorithms, called Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- CNN Acceleration by Low-rank Approximation with Quantized Factors [9.654865591431593]
Modern convolutional neural networks, although they achieve great results on complex computer vision tasks, still cannot be used effectively on mobile and embedded devices.
To solve this problem, a novel approach is proposed that combines two known methods: low-rank tensor approximation in Tucker format and quantization of weights and feature maps (activations).
The efficiency of the method is demonstrated for ResNet18 and ResNet34 on the CIFAR-10, CIFAR-100 and ImageNet classification tasks.
arXiv Detail & Related papers (2020-06-16T02:28:05Z)
- Discovering Representations for Black-box Optimization [73.59962178534361]
We show that black-box optimization encodings can be automatically learned, rather than hand designed.
We show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than the standard MAP-Elites.
arXiv Detail & Related papers (2020-03-09T20:06:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.