Transfer Learning Based Hybrid Quantum Neural Network Model for Surface Anomaly Detection
- URL: http://arxiv.org/abs/2409.00228v1
- Date: Fri, 30 Aug 2024 19:40:52 GMT
- Title: Transfer Learning Based Hybrid Quantum Neural Network Model for Surface Anomaly Detection
- Authors: Sounak Bhowmik, Himanshu Thapliyal
- Abstract summary: This paper presents a quantum transfer learning (QTL) based approach to significantly reduce the number of parameters of the classical models.
We show that the total number of trainable parameters can be reduced by up to 90% relative to the initial model without any drop in performance.
- Score: 0.4604003661048266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid growth in data volume has increased the size and complexity of deep learning models, making them more resource-intensive and time-consuming to train than ever. This paper presents a quantum transfer learning (QTL) based approach that significantly reduces the number of parameters of classical models without compromising their performance, sometimes even improving it. Reducing the number of parameters mitigates overfitting, shortens training time, and increases the models' flexibility and speed of response. As an illustration, we select a surface anomaly detection problem and show that a resource-intensive, inflexible anomaly detection system (ADS) can be replaced with a quantum transfer learning based hybrid model that better handles the frequent emergence of new anomalies. We show that the total number of trainable parameters can be reduced by up to 90% relative to the initial model without any drop in performance.
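The abstract does not specify the circuit architecture, so the following is only an illustrative sketch of the hybrid idea: a frozen, pretrained classical feature extractor feeds a small variational quantum circuit whose few rotation angles are the only trainable parameters. The qubit count, gate layout, and encoding scheme below are assumptions for the sake of a runnable NumPy statevector simulation, not the paper's model.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, qubit, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT: flip the target qubit where the control is |1>."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[control] = 1
    t = target if target < control else target - 1
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=t)
    return psi.reshape(-1)

def quantum_head(features, weights, n=2):
    """Variational 'classifier head': angle-encode the frozen classical
    features, apply one entangling layer plus trainable rotations, and
    return the expectation of Z on qubit 0 as a score in [-1, 1]."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for q in range(n):                       # data encoding
        state = apply_single(state, ry(features[q]), q, n)
    state = apply_cnot(state, 0, 1, n)       # entanglement
    for q in range(n):                       # trainable rotations
        state = apply_single(state, ry(weights[q]), q, n)
    probs = np.abs(state) ** 2
    z0 = np.array([1.0 if ((i >> (n - 1)) & 1) == 0 else -1.0
                   for i in range(2 ** n)])
    return float(probs @ z0)

# Only the head's n angles are trainable; the pretrained extractor that
# produces `features` stays frozen, which is where the parameter
# reduction comes from.
print(quantum_head([0.0, 0.0], [0.0, 0.0]))  # |00> untouched -> 1.0
```

A real implementation would build the head in a quantum ML library such as PennyLane or Qiskit and train its angles by gradient descent alongside the frozen backbone; the simulator above only illustrates the forward pass.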
Related papers
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Emulation Learning for Neuromimetic Systems [0.0]
Building on our recent research on neural quantization systems, we report results on learning quantized motions and resilience to channel dropouts.
We propose a general Deep Q Network (DQN) algorithm that not only learns the trajectory but is also resilient to channel dropout.
arXiv Detail & Related papers (2023-05-04T22:47:39Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Quantized Adaptive Subgradient Algorithms and Their Applications [39.103587572626026]
We propose quantized composite mirror descent adaptive subgradient (QCMD adagrad) and quantized regularized dual average adaptive subgradient (QRDA adagrad) for distributed training.
A quantized gradient-based adaptive learning rate matrix is constructed to achieve a balance between communication costs, accuracy, and model sparsity.
arXiv Detail & Related papers (2022-08-11T04:04:03Z)
- Multi-fidelity surrogate modeling using long short-term memory networks [0.0]
We introduce a novel data-driven framework of multi-fidelity surrogate modeling for parametrized, time-dependent problems.
We show that the proposed multi-fidelity LSTM networks not only improve single-fidelity regression significantly, but also outperform the multi-fidelity models based on feed-forward neural networks.
arXiv Detail & Related papers (2022-08-05T12:05:02Z)
- Investigating the Relationship Between Dropout Regularization and Model Complexity in Neural Networks [0.0]
Dropout Regularization serves to reduce variance in Deep Learning models.
We explore the relationship between the dropout rate and model complexity by training 2,000 neural networks.
We build neural networks that predict the optimal dropout rate given the number of hidden units in each dense layer.
arXiv Detail & Related papers (2021-08-14T23:49:33Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
- Neural networks with late-phase weights [66.72777753269658]
We show that the solutions found by SGD can be further improved by ensembling a subset of the weights in late stages of learning.
At the end of learning, we obtain back a single model by taking a spatial average in weight space.
arXiv Detail & Related papers (2020-07-25T13:23:37Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We use such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
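The hybrid modeling entry above (combining physics-based and machine-learned components) follows a widely used residual-modeling pattern, sketched below. The spring-like toy system, its coefficients, and the polynomial feature choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_system(x):
    # "Real" plant: known linear physics plus an unmodeled quadratic effect
    return 2.0 * x + 0.5 * x ** 2

def physics_model(x):
    # First-principles part we trust and keep fixed
    return 2.0 * x

# Noisy observations of the real system
x = np.linspace(-1.0, 1.0, 50)
y = true_system(x) + rng.normal(0.0, 0.01, x.shape)

# Learn only the residual that the physics model cannot explain
residual = y - physics_model(x)
features = np.vander(x, 3)                      # columns: [x^2, x, 1]
coef, *_ = np.linalg.lstsq(features, residual, rcond=None)

def hybrid_model(x_new):
    # Physics prediction corrected by the learned residual term
    x_new = np.atleast_1d(np.asarray(x_new, dtype=float))
    return physics_model(x_new) + np.vander(x_new, 3) @ coef

print(coef[0])  # recovered quadratic coefficient, close to 0.5
```

Because the learned component only has to fit the small residual rather than the whole input-output map, it stays compact and cheap to evaluate, which is what makes this pattern attractive for real-time diagnosis.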
This list is automatically generated from the titles and abstracts of the papers on this site.