AutoQML: Automatic Generation and Training of Robust Quantum-Inspired
Classifiers by Using Genetic Algorithms on Grayscale Images
- URL: http://arxiv.org/abs/2208.13246v1
- Date: Sun, 28 Aug 2022 16:33:48 GMT
- Title: AutoQML: Automatic Generation and Training of Robust Quantum-Inspired
Classifiers by Using Genetic Algorithms on Grayscale Images
- Authors: Sergio Altares-López, Juan José García-Ripoll, Angela Ribeiro
- Abstract summary: We propose a new hybrid system for automatically generating and training quantum-inspired classifiers on grayscale images.
We define a dynamic fitness function to obtain the smallest possible circuit and highest accuracy on unseen data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new hybrid system for automatically generating and training
quantum-inspired classifiers on grayscale images by using multiobjective
genetic algorithms. We define a dynamic fitness function to obtain the smallest
possible circuit and highest accuracy on unseen data, ensuring that the
proposed technique is generalizable and robust. We minimize the complexity of
the generated circuits in terms of the number of entanglement gates by
penalizing their appearance. We reduce the size of the images with two
dimensionality reduction approaches: principal component analysis (PCA), which
is encoded in the individual for optimization purposes, and a small
convolutional autoencoder (CAE). These two methods are compared with one
another and with a classical nonlinear approach to understand their behaviors
and to ensure that the classification ability is due to the quantum circuit and
not the preprocessing technique used for dimensionality reduction.
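As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below combines accuracy on held-out data with penalties on circuit size and on entanglement gates into a single fitness value, after a PCA reduction of the flattened grayscale images. The penalty weights, the PCA dimension, the toy gate encoding, and the use of a classical SVC as a stand-in classifier are all assumptions made only to keep the example self-contained.

```python
# Hypothetical sketch of a size- and entanglement-penalized fitness function
# in the spirit of AutoQML. Weights, gate encoding, and the SVC stand-in are
# illustrative assumptions, not the paper's code.
from sklearn.decomposition import PCA
from sklearn.svm import SVC


def reduce_images(X_train, X_test, n_components=8):
    """PCA-based dimensionality reduction of flattened grayscale images."""
    pca = PCA(n_components=n_components).fit(X_train)
    return pca.transform(X_train), pca.transform(X_test)


def fitness(gates, X_train, y_train, X_val, y_val,
            w_size=0.005, w_entangle=0.02):
    """Higher is better: validation accuracy minus complexity penalties.

    `gates` is a toy circuit encoding: a list of gate names such as
    ["RY", "RZ", "CNOT", ...]; two-qubit gates ("CNOT", "CZ") count as
    entanglement gates. A classical SVC stands in for the quantum-inspired
    classifier, purely so the example runs on its own.
    """
    clf = SVC(kernel="rbf").fit(X_train, y_train)   # stand-in classifier
    acc = clf.score(X_val, y_val)                   # accuracy on unseen data
    n_entangle = sum(g in ("CNOT", "CZ") for g in gates)
    return acc - w_size * len(gates) - w_entangle * n_entangle
```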
Related papers
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Scalable quantum dynamics compilation via quantum machine learning [7.31922231703204]
Variational quantum compilation (VQC) methods employ variational optimization to reduce gate costs while maintaining high accuracy.
We show that our approach exceeds state-of-the-art compilation results in both system size and accuracy in one dimension (1D).
For the first time, we extend VQC to systems on two-dimensional (2D) strips with a quasi-1D treatment, demonstrating a significant resource advantage over standard Trotterization methods.
arXiv Detail & Related papers (2024-09-24T18:00:00Z)
- Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders [0.0]
We propose an alternative method which reduces the dimensionality of a given dataset by taking into account the specific quantum embedding that comes after.
This method aspires to make quantum machine learning with VQCs more versatile and effective on datasets of high dimension.
arXiv Detail & Related papers (2024-09-05T08:51:20Z)
- Quantum Circuit Optimization using Differentiable Programming of Tensor Network States [0.0]
The proposed algorithm runs on classical hardware and finds shallow, accurate quantum circuits.
All circuits achieve high state fidelities within reasonable CPU time and modest memory requirements.
arXiv Detail & Related papers (2024-08-22T17:48:53Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
arXiv Detail & Related papers (2022-12-27T12:37:34Z)
- Generating quantum feature maps for SVM classifier [0.0]
We present and compare two methods of generating quantum feature maps for quantum-enhanced support vector machine.
The first method is a genetic algorithm with a multi-objective fitness function using a penalty method, which incorporates maximizing the classification accuracy.
The second method uses a variational quantum circuit, focusing on how to construct the ansatz based on unitary matrix decomposition.
arXiv Detail & Related papers (2022-07-23T07:28:23Z)
- Learning Representations for CSI Adaptive Quantization and Feedback [51.14360605938647]
We propose an efficient method for adaptive quantization and feedback in frequency division duplexing systems.
Existing works mainly focus on the implementation of autoencoder (AE) neural networks for CSI compression.
We recommend two different methods: one based on post-training quantization, and a second one in which the codebook is found during the training of the AE.
arXiv Detail & Related papers (2022-07-13T08:52:13Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Automatic design of quantum feature maps [0.3867363075280543]
We propose a new technique for the automatic generation of optimal ad-hoc ansätze for classification by using a quantum support vector machine (QSVM).
This efficient method is based on NSGA-II multiobjective genetic algorithms, which allow both maximizing the accuracy and minimizing the ansatz size; a minimal sketch of this kind of two-objective selection is given after this list.
arXiv Detail & Related papers (2021-05-26T15:31:10Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
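As referenced in the NSGA-II entry above, the following is a minimal, hypothetical sketch of two-objective selection (maximize accuracy, minimize ansatz size) using the DEAP library. The library choice, the toy gate encoding, the random stand-in evaluation, and the population settings are assumptions made for illustration; none of it is code from the papers listed here.

```python
# Hypothetical NSGA-II wiring with DEAP: maximize accuracy, minimize ansatz
# size. The evaluation is a toy stand-in; replace it with real circuit
# training to use it in practice.
import random
from deap import base, creator, tools

creator.create("FitnessAccSize", base.Fitness, weights=(1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessAccSize)

toolbox = base.Toolbox()
toolbox.register("gene", random.randint, 0, 3)            # toy gate codes
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.gene, n=12)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)


def evaluate(ind):
    """Toy evaluation: (accuracy, ansatz size). Gate code 0 means 'no gate'."""
    accuracy = random.random()                 # stand-in for validation accuracy
    size = sum(1 for g in ind if g != 0)       # number of active gates
    return accuracy, size


toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutUniformInt, low=0, up=3, indpb=0.1)
toolbox.register("select", tools.selNSGA2)

pop = toolbox.population(n=20)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)
pop = toolbox.select(pop, k=len(pop))          # NSGA-II non-dominated sorting
```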
This list is automatically generated from the titles and abstracts of the papers on this site.