Hybrid Quantum Neural Network in High-dimensional Data Classification
- URL: http://arxiv.org/abs/2312.01024v1
- Date: Sat, 2 Dec 2023 04:19:23 GMT
- Title: Hybrid Quantum Neural Network in High-dimensional Data Classification
- Authors: Hao-Yuan Chen, Yen-Jui Chang, Shih-Wei Liao, Ching-Ray Chang
- Abstract summary: We introduce a novel model architecture that combines classical convolutional layers with a quantum neural network.
The experiment classifies high-dimensional audio data from the Bird-CLEF 2021 dataset.
- Score: 1.4801853435122907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The research explores the potential of quantum deep learning models to
address challenging machine learning problems that classical deep learning
models find difficult to tackle. We introduce a novel model architecture that
combines classical convolutional layers with a quantum neural network, aiming
to surpass state-of-the-art accuracy while maintaining a compact model size.
The experiment classifies high-dimensional audio data from the Bird-CLEF
2021 dataset. Our evaluation focuses on key metrics, including training
duration, model accuracy, and total model size. This research demonstrates the
promising potential of quantum machine learning to enhance machine learning
tasks and to solve practical machine learning challenges that exist today.
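The hybrid architecture described in the abstract (a classical convolutional front-end feeding a quantum neural network head) can be illustrated with a minimal NumPy simulation. The layer below is a hypothetical sketch, not the authors' actual circuit: each pooled feature is angle-encoded on its own simulated qubit, a trainable rotation is applied, and a Pauli-Z expectation is read out.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def quantum_layer(features, weights):
    """Toy 'quantum layer': each feature is angle-encoded on its own
    simulated qubit, a trainable rotation follows, and <Z> is measured.
    Returns one expectation value per feature."""
    outputs = []
    for x, w in zip(features, weights):
        state = ry(w) @ ry(x) @ np.array([1.0, 0.0])  # |0> -> Ry(x) -> Ry(w)
        z_exp = abs(state[0]) ** 2 - abs(state[1]) ** 2  # <Z> readout
        outputs.append(z_exp)
    return np.array(outputs)

# Stand-in for pooled outputs of the classical convolutional stage.
feats = np.array([0.3, 1.1, -0.4])
weights = np.array([0.5, -0.2, 0.9])
print(quantum_layer(feats, weights))  # per qubit this equals cos(x + w)
```

Because Ry rotations compose by adding angles, each output reduces to cos(x + w); a real hybrid head would add entangling gates so the layer does not collapse to a classical function this way.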
Related papers
- Quantum-Train: Rethinking Hybrid Quantum-Classical Machine Learning in the Model Compression Perspective [7.7063925534143705]
We introduce the Quantum-Train (QT) framework, a novel approach that integrates quantum computing with machine learning algorithms.
QT achieves remarkable results by employing a quantum neural network alongside a classical mapping model.
arXiv Detail & Related papers (2024-05-18T14:35:57Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Bridging Classical and Quantum Machine Learning: Knowledge Transfer From Classical to Quantum Neural Networks Using Knowledge Distillation [0.0]
This paper introduces a new method to transfer knowledge from classical to quantum neural networks using knowledge distillation.
We adapt classical convolutional neural network (CNN) architectures like LeNet and AlexNet to serve as teacher networks.
Quantum models achieve an average accuracy improvement of 0.80% on the MNIST dataset and 5.40% on the more complex Fashion MNIST dataset.
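The snippet reports the accuracy gains but not the distillation objective itself. Assuming the standard Hinton-style soft-target formulation (an assumption, not a detail taken from the paper), the teacher-to-student knowledge transfer can be sketched as:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target KD loss: KL divergence between the temperature-softened
    teacher and student distributions, scaled by T^2 (Hinton et al. style)."""
    p = softmax(teacher_logits, T)   # teacher's soft targets
    q = softmax(student_logits, T)   # student's soft predictions
    return T ** 2 * np.sum(p * (np.log(p) - np.log(q)))

# A classical CNN teacher guiding a (here hypothetical) quantum student:
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
print(distillation_loss(student, teacher))
```

The T² factor keeps gradient magnitudes comparable across temperatures; the quantum student would minimize this term alongside the usual hard-label cross-entropy.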
arXiv Detail & Related papers (2023-11-23T05:06:43Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
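The classical-shadow machinery this paradigm builds on can be illustrated on a single qubit. The following is a hedged sketch of the standard randomized-Pauli-measurement protocol (the Huang et al. single-qubit case), not ShadowNet's neural pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)

# Basis-change unitaries for measuring X, Y, and Z on one qubit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.diag([1.0, -1j])
BASES = [H, H @ Sdg, np.eye(2)]  # X, Y, Z measurement frames

def shadow_estimate(psi, n_shots):
    """Classical-shadow estimate of the density matrix of |psi> from
    randomized single-qubit Pauli measurements."""
    rho_hat = np.zeros((2, 2), dtype=complex)
    for _ in range(n_shots):
        U = BASES[rng.integers(3)]            # pick a random basis
        probs = np.abs(U @ psi) ** 2          # Born-rule probabilities
        b = rng.choice(2, p=probs / probs.sum())
        eb = np.zeros(2); eb[b] = 1.0
        # Inverse-channel snapshot: 3 U^dag |b><b| U - I has expectation rho.
        rho_hat += 3 * U.conj().T @ np.outer(eb, eb) @ U - np.eye(2)
    return rho_hat / n_shots

psi = np.array([np.cos(0.4), np.sin(0.4)])    # example pure state
rho = shadow_estimate(psi, 10000)
print(rho)
```

Each snapshot has unit trace by construction, so the estimate is trace-one exactly; the off-diagonal accuracy improves as 1/sqrt(n_shots).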
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
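The single-qubit data re-uploading idea can be sketched without a quantum SDK; the following is a plain-NumPy stand-in rather than the paper's Qiskit implementation. The input x is re-encoded in every layer inside a general rotation, so successive layers do not commute and depth adds expressivity:

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def reupload_classifier(x, params):
    """Single-qubit data re-uploading: each layer applies
    Rz(c + wc*x) Ry(b + wb*x) Rz(a + wa*x), re-encoding x every time."""
    state = np.array([1.0, 0.0], dtype=complex)  # |0>
    for (a, b, c, wa, wb, wc) in params:
        state = rz(c + wc * x) @ ry(b + wb * x) @ rz(a + wa * x) @ state
    return abs(state[0]) ** 2  # P(|0>), read as the class-0 score

# Hypothetical two-layer parameter set (illustration only).
params = [(0.1, 0.7, -0.3, 1.0, 2.0, 0.5),
          (0.4, -0.2, 0.6, -1.5, 1.0, 2.0)]
print(reupload_classifier(0.3, params))
```

Training would tune the (a, b, c, w) parameters so that P(|0>) separates the classes; with Rz gates included, the layers compose into a genuinely nonlinear function of x.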
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Fitting a Collider in a Quantum Computer: Tackling the Challenges of Quantum Machine Learning for Big Datasets [0.0]
Feature and data prototype selection techniques were studied to tackle this challenge.
A grid search was performed and quantum machine learning models were trained and benchmarked against classical shallow machine learning methods.
The performance of the quantum algorithms was found to be comparable to the classical ones, even when using large datasets.
arXiv Detail & Related papers (2022-11-06T22:45:37Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Quantum Self-Supervised Learning [22.953284192004034]
We propose a hybrid quantum-classical neural network architecture for contrastive self-supervised learning.
We apply our best quantum model to classify unseen images on the ibmq_paris quantum computer.
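The snippet does not specify the contrastive objective; a SimCLR-style NT-Xent loss is a common choice for this kind of self-supervised setup and can be sketched classically (the embeddings here are hypothetical stand-ins for the hybrid network's outputs):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss for a batch of positive pairs: each row of
    z1 is the positive of the same row of z2; all other rows are negatives."""
    z = np.concatenate([z1, z2], axis=0)              # 2N embeddings
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / tau                               # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

z1 = np.array([[1.0, 0.1], [0.2, 1.0]])   # view-1 embeddings (illustrative)
z2 = np.array([[0.9, 0.2], [0.1, 1.1]])   # view-2 embeddings (illustrative)
print(nt_xent(z1, z2))
```

The loss pulls each pair of augmented views together while pushing them away from every other sample in the batch; in the hybrid model the quantum circuit would sit inside the encoder that produces these embeddings.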
arXiv Detail & Related papers (2021-03-26T18:00:00Z)
- Knowledge Distillation: A Survey [87.51063304509067]
Deep neural networks have been successful in both industry and academia, especially for computer vision tasks.
It is a challenge to deploy these cumbersome deep models on devices with limited resources.
Knowledge distillation effectively learns a small student model from a large teacher model.
arXiv Detail & Related papers (2020-06-09T21:47:17Z)
- Quantum-inspired Machine Learning on high-energy physics data [0.0]
We apply a quantum-inspired machine learning technique to the analysis and classification of data produced by the Large Hadron Collider at CERN.
In particular, we present how to effectively classify so-called b-jets (jets originating from b-quarks) in proton-proton collision data, and how to interpret the classification results.
arXiv Detail & Related papers (2020-04-28T18:00:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.