Quantum algorithms for hedging and the learning of Ising models
- URL: http://arxiv.org/abs/2002.06003v2
- Date: Mon, 30 Nov 2020 11:23:44 GMT
- Title: Quantum algorithms for hedging and the learning of Ising models
- Authors: Patrick Rebentrost, Yassine Hamoudi, Maharshi Ray, Xin Wang, Siyi
Yang, Miklos Santha
- Abstract summary: A paradigmatic algorithm for online learning is the Hedge algorithm by Freund and Schapire.
This work presents quantum algorithms for such online learning in an oracular setting.
- Score: 6.346764987071066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A paradigmatic algorithm for online learning is the Hedge algorithm by Freund
and Schapire. An allocation into different strategies is chosen for multiple
rounds and each round incurs corresponding losses for each strategy. The
algorithm obtains a favorable guarantee for the total losses even in an
adversarial situation. This work presents quantum algorithms for such online
learning in an oracular setting. For $T$ time steps and $N$ strategies, we
exhibit run times of about $O \left ({\rm poly} (T) \sqrt{N} \right)$ for
estimating the losses and for betting on individual strategies by sampling. In
addition, we discuss a quantum analogue of the Sparsitron, a machine learning
algorithm based on the Hedge algorithm. The quantum algorithm inherits the
provable learning guarantees from the classical algorithm and exhibits
polynomial speedups. The speedups may find relevance in finance, for example
for hedging risks, and machine learning, for example for learning generalized
linear models or Ising models.
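The classical Hedge algorithm that this work accelerates can be sketched in a few lines: each strategy carries a weight, and after every round the weight is multiplied by an exponential penalty in the observed loss. A minimal illustrative sketch (the learning rate `eta` and loss values below are arbitrary choices, not taken from the paper):

```python
import math

def hedge(losses, eta=0.5):
    """Run the Hedge (multiplicative weights) update.

    losses: T x N list of per-round, per-strategy losses in [0, 1].
    Returns the final normalized allocation over the N strategies.
    """
    n = len(losses[0])
    weights = [1.0] * n  # start with a uniform allocation
    for round_losses in losses:
        # penalize each strategy exponentially in its incurred loss
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    total = sum(weights)
    return [w / total for w in weights]

# Strategy 0 consistently incurs lower loss, so its weight grows.
allocation = hedge([[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]])
```

The quantum algorithms in the paper speed up the loss-estimation and sampling steps of this loop to roughly $O({\rm poly}(T)\sqrt{N})$, rather than changing the update rule itself.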
Related papers
- Meta-Learning from Learning Curves for Budget-Limited Algorithm Selection [11.409496019407067]
In a budget-limited scenario, it is crucial to carefully select an algorithm candidate and allocate a budget for training it.
We propose a novel framework in which an agent must select the most promising algorithm during the learning process, without waiting until it is fully trained.
arXiv Detail & Related papers (2024-10-10T08:09:58Z)
- Constrained Online Two-stage Stochastic Optimization: Algorithm with (and without) Predictions [19.537289123577022]
We consider an online two-stage optimization with long-term constraints over a finite horizon of $T$ periods.
We develop online algorithms for the online two-stage problem from adversarial learning algorithms.
arXiv Detail & Related papers (2024-01-02T07:46:33Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- The Bayesian Learning Rule [14.141964578853262]
We show that many machine-learning algorithms are specific instances of a single algorithm called the Bayesian learning rule.
The rule, derived from Bayesian principles, yields a wide range of algorithms from fields such as optimization, deep learning, and graphical models.
arXiv Detail & Related papers (2021-07-09T17:28:55Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance on classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- An Algorithm for Fast Supervised Learning in Variational Circuits through Simultaneous Processing of Multiple Samples [0.0]
We propose a novel algorithm for fast training of variational classifiers by processing multiple samples in parallel.
The presented algorithm utilizes qRAM and other quantum circuits in the forward pass.
Although we discuss only binary classification in the paper, the algorithm can be easily generalized to multi-class classification.
arXiv Detail & Related papers (2020-11-29T06:14:41Z)
- A Polynomial Time Algorithm for Learning Halfspaces with Tsybakov Noise [55.45544638410858]
We study the problem of PAC learning homogeneous halfspaces in the presence of Tsybakov noise.
Our algorithm learns the true halfspace within any accuracy $\epsilon$.
arXiv Detail & Related papers (2020-10-04T22:19:06Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.