AQMLator -- An Auto Quantum Machine Learning E-Platform
- URL: http://arxiv.org/abs/2409.18338v3
- Date: Mon, 7 Oct 2024 09:20:59 GMT
- Title: AQMLator -- An Auto Quantum Machine Learning E-Platform
- Authors: Tomasz Rybotycki, Piotr Gawron
- Abstract summary: AQMLator aims to automatically propose and train the quantum layers of an ML model with minimal input from the user.
It uses standard ML libraries, making it easy to introduce into existing ML pipelines.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A successful Machine Learning (ML) model implementation requires three main components: a training dataset, a suitable model architecture, and a training procedure. Given a dataset and a task, finding an appropriate model can be challenging. AutoML, a branch of ML, focuses on automatic architecture search -- a meta-method that aims to remove the human from the ML system design process. The success of ML and the recent development of quantum computing (QC) led to the birth of a fascinating new field called Quantum Machine Learning (QML) that, amongst others, incorporates quantum computers into ML models. In this paper we present AQMLator, an Auto Quantum Machine Learning platform that aims to automatically propose and train the quantum layers of an ML model with minimal input from the user. This way, data scientists can bypass the entry barrier for QC and use QML. AQMLator uses standard ML libraries, making it easy to introduce into existing ML pipelines.
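The hybrid-pipeline idea behind the abstract -- a classical preprocessing step feeding a trainable quantum layer whose output can be consumed by the rest of an ML pipeline -- can be sketched with a numpy-only single-qubit simulation. This is a minimal illustrative sketch, not AQMLator's actual API: the `QuantumLayer` class and its `forward` method are hypothetical names introduced here for illustration.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis, as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

class QuantumLayer:
    """Hypothetical single-qubit variational layer: encodes a scalar
    feature as a rotation angle, applies a trainable rotation, and
    returns the probability of measuring |1>."""

    def __init__(self, theta=0.0):
        self.theta = theta  # trainable parameter

    def forward(self, x):
        state = np.array([1.0, 0.0])         # start in |0>
        state = ry(x) @ state                # data-encoding rotation
        state = ry(self.theta) @ state       # trainable rotation
        return float(np.abs(state[1]) ** 2)  # P(measure |1>)

# Classical preprocessing feeding the quantum layer, as in a hybrid pipeline.
features = np.array([0.1, 1.5, 3.0])
scaled = (features - features.mean()) / features.std()  # classical step
layer = QuantumLayer(theta=np.pi / 4)
probs = [layer.forward(x) for x in scaled]  # outputs usable downstream
```

An AutoQML system in the spirit of the paper would search over such layer structures and train the rotation parameters automatically; here the single rotation is fixed by hand only to keep the sketch short.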
Related papers
- Quantum Machine Learning: An Interplay Between Quantum Computing and Machine Learning [54.80832749095356]
Quantum machine learning (QML) is a rapidly growing field that combines quantum computing principles with traditional machine learning.
This paper introduces quantum computing for the machine learning paradigm, where variational quantum circuits are used to develop QML architectures.
arXiv Detail & Related papers (2024-11-14T12:27:50Z) - Quantum Machine Learning Architecture Search via Deep Reinforcement Learning [8.546707309430593]
We introduce deep reinforcement learning to explore proficient QML model architectures tailored for supervised learning tasks.
Our methodology involves training an RL agent to devise policies that facilitate the discovery of QML models without predetermined ansatz.
Our proposed method successfully identifies VQC architectures capable of achieving high classification accuracy while minimizing gate depth.
arXiv Detail & Related papers (2024-07-29T16:20:51Z) - Verbalized Machine Learning: Revisiting Machine Learning with Language Models [63.10391314749408]
We introduce the framework of verbalized machine learning (VML).
VML constrains the parameter space to be human-interpretable natural language.
We empirically verify the effectiveness of VML, and hope that VML can serve as a stepping stone to stronger interpretability.
arXiv Detail & Related papers (2024-06-06T17:59:56Z) - Position: A Call to Action for a Human-Centered AutoML Paradigm [83.78883610871867]
Automated machine learning (AutoML) was formed around the fundamental objective of automatically and efficiently configuring machine learning (ML) systems.
We argue that a key to unlocking AutoML's full potential lies in addressing the currently underexplored aspect of user interaction with AutoML systems.
arXiv Detail & Related papers (2024-06-05T15:05:24Z) - Feature Importance and Explainability in Quantum Machine Learning [0.0]
Many Machine Learning (ML) models are referred to as black box models, providing no real insights into why a prediction is made.
This article explores feature importance and explainability in Quantum Machine Learning (QML) compared to Classical ML models.
arXiv Detail & Related papers (2024-05-14T19:12:32Z) - TeD-Q: a tensor network enhanced distributed hybrid quantum machine learning framework [59.07246314484875]
TeD-Q is an open-source software framework for quantum machine learning.
It seamlessly integrates classical machine learning libraries with quantum simulators.
It provides a graphical mode in which the quantum circuit and the training progress can be visualized in real-time.
arXiv Detail & Related papers (2023-01-13T09:35:05Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - DeePKS+ABACUS as a Bridge between Expensive Quantum Mechanical Models and Machine Learning Potentials [9.982820888454958]
Deep Kohn-Sham (DeePKS) is a machine learning (ML) potential based on density functional theory (DFT).
DeePKS offers closely matched energies and forces compared with high-level quantum mechanical (QM) methods.
One can generate a decent amount of high-accuracy QM data to train a DeePKS model, and then use the DeePKS model to label a much larger amount of configurations to train a ML potential.
arXiv Detail & Related papers (2022-06-21T03:24:18Z) - Study of Feature Importance for Quantum Machine Learning Models [0.0]
Predictor importance is a crucial part of data preprocessing pipelines in classical and quantum machine learning (QML).
This work presents the first study of its kind in which feature importance for QML models has been explored and contrasted against their classical machine learning (CML) equivalents.
We developed a hybrid quantum-classical architecture where QML models are trained and feature importance values are calculated from classical algorithms on a real-world dataset.
arXiv Detail & Related papers (2022-02-18T15:21:47Z) - Towards AutoQML: A Cloud-Based Automated Circuit Architecture Search Framework [0.0]
We take the first steps towards Automated Quantum Machine Learning (AutoQML).
We propose a concrete description of the problem, and then develop a classical-quantum hybrid cloud architecture.
As an application use-case, we train a quantum Generative Adversarial Network (qGAN) to generate energy prices that follow a known historic data distribution.
arXiv Detail & Related papers (2022-02-16T12:37:10Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.