Hybrid quantum ResNet for car classification and its hyperparameter
optimization
- URL: http://arxiv.org/abs/2205.04878v2
- Date: Fri, 29 Sep 2023 17:53:34 GMT
- Authors: Asel Sagingalieva, Mo Kordzanganeh, Andrii Kurkin, Artem Melnikov,
Daniil Kuhmistrov, Michael Perelshtein, Alexey Melnikov, Andrea Skolik, David
Von Dollen
- Abstract summary: This paper presents a quantum-inspired hyperparameter optimization technique and a hybrid quantum-classical machine learning model for supervised learning.
We test our approaches in a car image classification task and demonstrate a full-scale implementation of the hybrid quantum ResNet model.
A classification accuracy of 0.97 was obtained by the hybrid model after 18 iterations, whereas the classical model achieved an accuracy of 0.92 after 75 iterations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image recognition is one of the primary applications of machine learning
algorithms. Nevertheless, machine learning models used in modern image
recognition systems consist of millions of parameters that usually require
significant computational time to be adjusted. Moreover, adjustment of model
hyperparameters leads to additional overhead. Because of this, new developments
in machine learning models and hyperparameter optimization techniques are
required. This paper presents a quantum-inspired hyperparameter optimization
technique and a hybrid quantum-classical machine learning model for supervised
learning. We benchmark our hyperparameter optimization method over standard
black-box objective functions and observe reduced expected run times and
improved fitness as the size of the search space grows. We test our approaches
in a car image classification task
and demonstrate a full-scale implementation of the hybrid quantum ResNet model
with the tensor train hyperparameter optimization. Our tests show a qualitative
and quantitative advantage over the corresponding standard classical tabular
grid search approach used with a deep neural network ResNet34. A classification
accuracy of 0.97 was obtained by the hybrid model after 18 iterations, whereas
the classical model achieved an accuracy of 0.92 after 75 iterations.
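The paper's actual circuit and training setup are not reproduced here. As a purely illustrative sketch of the general hybrid idea, a classical layer produces the rotation angle of a small parameterized quantum circuit, and the circuit's expectation value drives the classification head. All names below are our own, and the one-qubit RY circuit (simulated in closed form) is a drastic simplification of the model in the paper:

```python
import math

def ry_expectation_z(theta):
    # RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>,
    # so <Z> = cos^2(theta/2) - sin^2(theta/2) = cos(theta).
    return math.cos(theta)

def hybrid_head(features, weights):
    """Toy hybrid classifier head: a classical linear layer computes the
    rotation angle of a simulated one-qubit variational circuit; the
    circuit's <Z> expectation is mapped to a probability in [0, 1]."""
    angle = sum(f * w for f, w in zip(features, weights))
    return 0.5 * (1.0 + ry_expectation_z(angle))

# Zero weights leave the qubit in |0>, giving probability 1.0.
print(hybrid_head([1.0, 2.0], [0.0, 0.0]))  # 1.0
```

In the full model, the `features` would come from a ResNet backbone and both the classical weights and the circuit parameters would be trained jointly; here the circuit is evaluated analytically rather than on quantum hardware or a simulator.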
Related papers
- Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology to design quantum hybrid diffusion models.
We propose two possible hybridization schemes combining quantum computing's superior generalization with classical networks' modularity.
arXiv Detail & Related papers (2024-02-25T16:57:51Z)
- Model Performance Prediction for Hyperparameter Optimization of Deep Learning Models Using High Performance Computing and Quantum Annealing [0.0]
We show that integrating model performance prediction with early stopping methods holds great potential to speed up the HPO process of deep learning models.
We propose a novel algorithm called Swift-Hyperband that can use either classical or quantum support vector regression for performance prediction.
arXiv Detail & Related papers (2023-11-29T10:32:40Z)
- FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search [50.07268323597872]
We propose the first one-shot mixed-precision quantization search that eliminates the need for retraining in both integer and low-precision floating point models.
With integer models, we increase the accuracy of ResNet-18 on ImageNet by 1.31% and ResNet-50 by 0.90% with equivalent model cost over previous methods.
For the first time, we explore a novel mixed-precision floating-point search and improve MobileNetV2 by up to 0.98% compared to prior state-of-the-art FP8 models.
arXiv Detail & Related papers (2023-08-07T04:17:19Z)
- Systematic Architectural Design of Scale Transformed Attention Condenser DNNs via Multi-Scale Class Representational Response Similarity Analysis [93.0013343535411]
We propose a novel type of analysis called Multi-Scale Class Representational Response Similarity Analysis (ClassRepSim)
We show that adding STAC modules to ResNet style architectures can result in up to a 1.6% increase in top-1 accuracy.
Results from ClassRepSim analysis can be used to select an effective parameterization of the STAC module resulting in competitive performance.
arXiv Detail & Related papers (2023-06-16T18:29:26Z)
- Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
A second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process.
arXiv Detail & Related papers (2023-04-18T18:23:20Z)
- Quantum Machine Learning hyperparameter search [0.0]
A benchmark of models trained on a dataset related to a forecast problem in the airline industry is evaluated.
Our approach outperforms traditional hyperparameter optimization methods in terms of accuracy and convergence speed for the given search space.
Our study provides a new direction for future research in quantum-based machine learning hyperparameter optimization.
arXiv Detail & Related papers (2023-02-20T20:41:31Z)
- AdaGrid: Adaptive Grid Search for Link Prediction Training Objective [58.79804082133998]
Training objective crucially influences the model's performance and generalization capabilities.
We propose Adaptive Grid Search (AdaGrid) which dynamically adjusts the edge message ratio during training.
We show that AdaGrid can boost the performance of the models up to 1.9% while being nine times more time-efficient than a complete search.
arXiv Detail & Related papers (2022-03-30T09:24:17Z)
- Towards Robust and Automatic Hyper-Parameter Tunning [39.04604349338802]
We introduce a new class of HPO method and explore how the low-rank factorization of intermediate layers of a convolutional network can be used to define an analytical response surface.
We quantify how this surface behaves as a surrogate to model performance and can be solved using a trust-region search algorithm, which we call autoHyper.
arXiv Detail & Related papers (2021-11-28T05:27:34Z)
- Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework exploits the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z)
- Hyperboost: Hyperparameter Optimization by Gradient Boosting surrogate models [0.4079265319364249]
Current state-of-the-art methods leverage Random Forests or Gaussian processes to build a surrogate model.
We propose a new surrogate model based on gradient boosting.
We demonstrate empirically that the new method outperforms some state-of-the-art techniques across a reasonably sized set of classification problems.
arXiv Detail & Related papers (2021-01-06T22:07:19Z)
- Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stages multi-scale features.
We build an extremely lightweight model, namely CSNet, which achieves performance comparable to large models on popular object detection benchmarks with only about 0.2% (100k) of their parameters.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)