Quantum computer based Feature Selection in Machine Learning
- URL: http://arxiv.org/abs/2306.10591v1
- Date: Sun, 18 Jun 2023 15:58:34 GMT
- Title: Quantum computer based Feature Selection in Machine Learning
- Authors: Gerhard Hellstern, Vanessa Dehn, Martin Zaefferer
- Abstract summary: We treat the feature selection task as a quadratic unconstrained binary optimization (QUBO) problem.
We compare the different results in small-sized problem setups.
Due to persisting error rates, the classical optimization methods are still superior.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of selecting an appropriate number of features in supervised
learning problems is investigated in this paper. Starting with common methods
in machine learning, we treat the feature selection task as a quadratic
unconstrained binary optimization (QUBO) problem, which can be tackled with classical
numerical methods as well as within a quantum computing framework. We compare
the different results in small-sized problem setups. According to the results
of our study, whether the QUBO method outperforms other feature selection
methods depends on the data set. In an extension to a larger data set with 27
features, we compare the convergence behavior of the QUBO methods via quantum
computing with classical stochastic optimization methods. Due to persisting
error rates, the classical stochastic optimization methods are still superior.
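As a concrete illustration of the QUBO formulation, here is a minimal Python sketch: the diagonal of the QUBO matrix rewards feature-target relevance, the off-diagonal entries penalize pairwise feature redundancy, and the problem is solved by exhaustive search, which is feasible only for small-sized setups like those studied in the paper. The correlation-based construction and the trade-off parameter `alpha` are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np
from itertools import product

def feature_selection_qubo(X, y, alpha=0.5):
    # Diagonal: reward relevance |corr(feature, target)|.
    # Off-diagonal: penalize redundancy |corr(feature_i, feature_j)|.
    n = X.shape[1]
    redundancy = np.abs(np.corrcoef(X, rowvar=False))
    relevance = np.abs(np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n)]))
    Q = alpha * redundancy
    np.fill_diagonal(Q, -(1.0 - alpha) * relevance)
    return Q  # minimizing z^T Q z over z in {0,1}^n selects a feature subset

def brute_force_qubo(Q):
    # Exhaustive search; only feasible for small feature counts.
    best_val, best_z = np.inf, None
    for bits in product([0, 1], repeat=Q.shape[0]):
        z = np.array(bits)
        val = z @ Q @ z
        if val < best_val:
            best_val, best_z = val, z
    return best_z

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)  # features 0 and 1 are informative
print("selected features:", np.nonzero(brute_force_qubo(feature_selection_qubo(X, y)))[0])
```

On quantum hardware the same matrix Q would instead be handed to an annealer or a variational routine such as QAOA; only the solver changes, not the formulation.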
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
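A toy sketch of the "learn solutions directly from features" idea, under stated assumptions: a fractional knapsack stands in for the optimization problem, training labels are precomputed optimal solutions, and a plain least-squares model maps features to solutions. The paper's actual joint models and training scheme may differ substantially.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_fractional_knapsack(values, weights, cap):
    # Greedy on value/weight ratio is optimal for the LP relaxation of knapsack.
    x, remaining = np.zeros(len(values)), cap
    for i in np.argsort(-values / weights):
        take = min(1.0, remaining / weights[i])
        x[i], remaining = take, remaining - take * weights[i]
        if remaining <= 0:
            break
    return x

n_items, n_feat, n_samples = 5, 3, 500
weights = rng.uniform(1.0, 3.0, n_items)          # known problem data
B = rng.normal(size=(n_feat, n_items))            # hidden feature-to-value relationship
X = rng.normal(size=(n_samples, n_feat))
values = np.exp(X @ B)                            # unknown parameters to be "predicted"

# Label each sample with its optimal solution, then regress features -> solution
# directly, bypassing the explicit predict-then-optimize pipeline.
Y = np.stack([solve_fractional_knapsack(v, weights, cap=4.0) for v in values])
theta, *_ = np.linalg.lstsq(np.c_[X, np.ones(n_samples)], Y, rcond=None)

x_new = rng.normal(size=n_feat)
solution = np.clip(np.r_[x_new, 1.0] @ theta, 0.0, 1.0)  # solution predicted in one shot
print("predicted solution:", solution.round(2))
```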
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Contextual Stochastic Bilevel Optimization [50.36775806399861]
We introduce contextual stochastic bilevel optimization (CSBO) -- a bilevel optimization framework in which the lower-level problem minimizes an expectation conditioned on some contextual information and the upper-level decision variable.
It is important for applications such as meta-learning, personalized learning, end-to-end learning, and Wasserstein distributionally robust optimization with side information (WDRO-SI).
arXiv Detail & Related papers (2023-10-27T23:24:37Z)
- Bilevel Optimization for Feature Selection in the Data-Driven Newsvendor Problem [8.281391209717105]
We study the feature-based newsvendor problem, in which a decision-maker has access to historical data.
In this setting, we investigate feature selection, aiming to derive sparse, explainable models with improved out-of-sample performance.
We present a mixed integer linear program reformulation for the bilevel program, which can be solved to optimality with standard optimization solvers.
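For intuition, a hedged sketch of feature selection in a feature-based newsvendor, swapping the paper's bilevel MILP for a simpler stand-in: sparse linear quantile regression at the critical ratio cu/(cu+co), trained by subgradient descent on the pinball loss with an L1 penalty. The costs, data, and the threshold for calling a feature "selected" are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 10
X = rng.normal(size=(n, p))
demand = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2.0, size=n)  # 2 informative features

cu, co = 4.0, 1.0            # underage and overage costs (illustrative)
tau = cu / (cu + co)         # critical ratio: the optimal order is the tau-quantile of demand

# Sparse linear quantile regression: subgradient descent on pinball loss + L1 penalty.
beta, b0, lam, lr = np.zeros(p), demand.mean(), 0.05, 0.05
for _ in range(3000):
    residual = demand - (X @ beta + b0)
    g = np.where(residual > 0, -tau, 1.0 - tau)   # d(pinball)/d(prediction)
    beta -= lr * (X.T @ g / n + lam * np.sign(beta))
    b0 -= lr * g.mean()

print("selected features:", np.nonzero(np.abs(beta) > 0.1)[0])  # expect features 0 and 1
```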
arXiv Detail & Related papers (2022-09-12T08:52:26Z)
- Variational quantum algorithm for unconstrained black box binary optimization: Application to feature selection [1.9182522142368683]
We introduce a variational quantum algorithm to solve unconstrained black-box binary optimization problems.
This is in contrast to the typical setting of quantum algorithms for optimization, where the objective function is given explicitly, e.g. as a QUBO.
We show that the quantum method produces competitive, and in certain aspects even better, performance than traditional feature selection techniques.
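A classical stand-in sketch of the variational idea: a mean-field (product Bernoulli) distribution over bitstrings plays the role of the parameterized quantum state, and its parameters are updated with a score-function (REINFORCE) gradient estimated purely from black-box evaluations. The QUBO inside `black_box` is used only for evaluation and is never exposed to the optimizer; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(z):
    # The objective is only ever *evaluated*; its structure is hidden from the optimizer.
    Q = np.array([[-3.0, 1.0, 2.0],
                  [ 1.0, -3.0, 1.5],
                  [ 2.0, 1.5, -1.0]])
    return z @ Q @ z

n, batch, lr = 3, 64, 0.2
theta = np.zeros(n)  # parameters of a product (mean-field) Bernoulli distribution
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-theta))
    z = (rng.random((batch, n)) < p).astype(float)
    f = np.array([black_box(zi) for zi in z])
    # Score-function (REINFORCE) gradient of E[f], with a mean baseline to reduce variance.
    grad = ((f - f.mean())[:, None] * (z - p)).mean(axis=0)
    theta -= lr * grad

print("most likely bitstring:", (1.0 / (1.0 + np.exp(-theta)) > 0.5).astype(int))
```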
arXiv Detail & Related papers (2022-05-06T07:02:15Z)
- Quantum Feature Selection [2.5934039615414615]
In machine learning, fewer features reduce model complexity.
We propose a novel feature selection algorithm based on a quadratic unconstrained binary optimization problem.
In contrast to iterative or greedy methods, our direct approach yields higher-quality solutions.
arXiv Detail & Related papers (2022-03-24T16:22:25Z)
- Post-selection inference with HSIC-Lasso [19.928884107444908]
We propose a selective inference procedure using the framework of truncated Gaussians combined with the polyhedral lemma.
We then develop an algorithm with low computational cost that also selects the regularisation parameter.
The performance of our method is illustrated by both artificial and real-world data based experiments, which emphasise a tight control of the type-I error, even for small sample sizes.
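The computational core of such polyhedral-lemma procedures is a p-value under a truncated Gaussian. Below is a minimal sketch, assuming the truncation interval [lower, upper] has already been derived from the selection event; deriving that interval is the HSIC-Lasso-specific part that this sketch omits.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def truncated_gaussian_pvalue(stat, sigma, lower, upper):
    # P(T >= stat) for T ~ N(0, sigma^2) truncated to [lower, upper]:
    # the form the polyhedral lemma yields for a linear selection event.
    num = Phi(upper / sigma) - Phi(stat / sigma)
    den = Phi(upper / sigma) - Phi(lower / sigma)
    return num / den

# Example: observed statistic 1.8, truncation to [0.5, inf) induced by the selection.
print(truncated_gaussian_pvalue(stat=1.8, sigma=1.0, lower=0.5, upper=np.inf))
```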
arXiv Detail & Related papers (2020-10-29T15:10:21Z)
- A novel embedded min-max approach for feature selection in nonlinear support vector machine classification [0.0]
We propose an embedded feature selection method based on a min-max optimization problem.
By leveraging duality theory, we equivalently reformulate the min-max problem and solve it directly.
The efficiency and usefulness of our approach are tested on several benchmark data sets.
arXiv Detail & Related papers (2020-04-21T09:40:38Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms, such as the Quantum Approximate Optimization Algorithm (QAOA), are considered one of the most promising approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study, we apply the Cross-Entropy method to shape the optimization landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
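A minimal sketch of the Cross-Entropy method on a stand-in, noisy two-parameter landscape (on real hardware, a QAOA expectation value would take its place): sample parameters from a Gaussian, keep an elite fraction, and refit the Gaussian to the elites. The landscape, sample sizes, and elite fraction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def landscape(params):
    # Stand-in for a noisy QAOA expectation value over (gamma, beta); lower is better.
    g, b = params
    return -np.cos(g) * np.sin(2.0 * b) + 0.1 * rng.normal()

mu, sigma = np.zeros(2), np.full(2, np.pi)
for _ in range(30):
    samples = rng.normal(mu, sigma, size=(100, 2))
    scores = np.array([landscape(s) for s in samples])
    elites = samples[np.argsort(scores)[:10]]       # keep the 10 lowest-cost samples
    mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
print("optimized parameters:", mu)
```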
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
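A Monte Carlo sketch of the core construction: argmax alone is piecewise constant with zero gradient almost everywhere, but averaging argmax over Gaussian perturbations of its input yields a smooth map that is never locally constant. The noise scale and sample count below are illustrative; the paper additionally derives gradient estimators that this sketch does not show.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_argmax(theta, eps=0.5, n_samples=1000):
    # E_Z[argmax(theta + eps * Z)] is smooth in theta, unlike a plain argmax.
    Z = rng.normal(size=(n_samples, len(theta)))
    winners = np.argmax(theta + eps * Z, axis=1)
    return np.bincount(winners, minlength=len(theta)) / n_samples  # a "soft" one-hot vector

theta = np.array([1.0, 1.2, 0.3])
print(perturbed_argmax(theta))  # mass concentrates on index 1, but varies smoothly with theta
```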
arXiv Detail & Related papers (2020-02-20T11:11:32Z) - Stepwise Model Selection for Sequence Prediction via Deep Kernel
Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)