$k\texttt{-experts}$ -- Online Policies and Fundamental Limits
- URL: http://arxiv.org/abs/2110.07881v1
- Date: Fri, 15 Oct 2021 06:30:15 GMT
- Title: $k\texttt{-experts}$ -- Online Policies and Fundamental Limits
- Authors: Samrat Mukhopadhyay, Sourav Sahoo, Abhishek Sinha
- Abstract summary: This paper studies the $k\texttt{-experts}$ problem.
The learner selects a subset of $k$ experts from a pool of $N$ experts at each round.
The reward obtained by the learner at any round depends on the rewards of the selected experts.
- Score: 8.84337023214151
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper introduces and studies the $k\texttt{-experts}$ problem -- a
generalization of the classic Prediction with Expert's Advice (i.e., the
$\texttt{Experts}$) problem. Unlike the $\texttt{Experts}$ problem, where the
learner chooses exactly one expert, in this problem, the learner selects a
subset of $k$ experts from a pool of $N$ experts at each round. The reward
obtained by the learner at any round depends on the rewards of the selected
experts. The $k\texttt{-experts}$ problem arises in many practical settings,
including online ad placements, personalized news recommendations, and paging.
Our primary goal is to design an online learning policy having a small regret.
In this pursuit, we propose $\texttt{SAGE}$ ($\textbf{Sa}$mpled
Hed$\textbf{ge}$) - a framework for designing efficient online learning
policies by leveraging statistical sampling techniques. We show that, for many
related problems, $\texttt{SAGE}$ improves upon the state-of-the-art bounds for
regret and computational complexity. Furthermore, going beyond the notion of
regret, we characterize the mistake bounds achievable by online learning
policies for a class of stable loss functions. We conclude the paper by
establishing a tight regret lower bound for a variant of the
$k\texttt{-experts}$ problem and carrying out experiments with standard
datasets.
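To make the sampling idea behind $\texttt{SAGE}$ concrete, below is a minimal sketch, and explicitly not the paper's actual SAGE procedure: it maintains Hedge-style exponential weights over the $N$ experts and, in each round, samples $k$ distinct experts with probabilities proportional to those weights under full-information feedback. The function name, the sum-reward aggregation, and the specific sampling rule are illustrative assumptions.

```python
import numpy as np

def k_experts_sampled_hedge(reward_fn, N, k, T, eta=0.1, seed=0):
    """Illustrative sketch only (not the paper's SAGE algorithm): Hedge-style
    exponential weights over N experts; each round, k distinct experts are
    sampled with probabilities proportional to the current weights.

    reward_fn(t) is assumed to return a length-N array of per-expert rewards
    in [0, 1] (full-information feedback), as in the k-experts setting.
    """
    rng = np.random.default_rng(seed)
    log_w = np.zeros(N)                          # log-weights for numerical stability
    total_reward = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        chosen = rng.choice(N, size=k, replace=False, p=p)  # pick k distinct experts
        r = np.asarray(reward_fn(t), dtype=float)           # all N rewards revealed
        total_reward += r[chosen].sum()          # sum-reward aggregation (one example)
        log_w += eta * r                         # exponential-weights update
    return total_reward
```

For instance, `k_experts_sampled_hedge(lambda t: np.random.rand(20), N=20, k=3, T=1000)` runs the sketch on i.i.d. uniform rewards; the point is only to make the weights-plus-sampling structure concrete, not to reproduce SAGE's guarantees.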
Related papers
- Why Ask One When You Can Ask $k$? Two-Stage Learning-to-Defer to the Top-$k$ Experts [3.6787328174619254]
We introduce the first framework for Top-$k$ Learning-to-Defer, enabling systems to defer each query to the $k$ most cost-effective experts.
We propose Top-$k(x)$ Learning-to-Defer, an adaptive extension that learns the optimal number of experts per query based on input complexity, expert quality, and consultation cost.
arXiv Detail & Related papers (2025-04-17T14:50:40Z) - Regret Minimization for Piecewise Linear Rewards: Contracts, Auctions, and Beyond [37.92189925462977]
This paper introduces a general online learning framework to tackle regret minimization for piecewise linear rewards.
We show that our algorithm attains a regret of $\widetilde{O}(\sqrt{nT})$, where $n$ is the number of pieces of the reward function and $T$ is the number of rounds.
Second, our algorithm demonstrates that, in the problem of learning to set prices in posted-price auctions, it is possible to attain suitable (and desirable) instance-independent regret bounds.
arXiv Detail & Related papers (2025-03-03T16:13:45Z) - Learning and Computation of $Φ$-Equilibria at the Frontier of Tractability [85.07238533644636]
$\Phi$-equilibria provide a powerful and flexible framework at the heart of online learning and game theory.
We show that an efficient online algorithm incurs average $\Phi$-regret at most $\epsilon$ using $\text{poly}(d, k)/\epsilon^2$ rounds.
We also show nearly matching lower bounds in the online setting, thereby obtaining for the first time a family of deviations that captures the learnability of $Phi$-regret.
arXiv Detail & Related papers (2025-02-25T19:08:26Z) - An Adversarial Analysis of Thompson Sampling for Full-information Online Learning: from Finite to Infinite Action Spaces [54.37047702755926]
We develop an analysis of Thompson sampling for online learning under full feedback.
We show that the regret decomposes into the regret the learner expected a priori, plus a prior-robustness-type term we call excess regret.
arXiv Detail & Related papers (2025-02-20T18:10:12Z) - Learning to Cover: Online Learning and Optimization with Irreversible Decisions [50.5775508521174]
We define an online learning and optimization problem with discrete and irreversible decisions contributing toward a coverage target.
In each period, a decision-maker selects facilities to open, receives information on the success of each one, and updates a classification model to guide future decisions.
The goal is to minimize facility openings under a chance constraint reflecting the coverage target, in a horizon regime characterized by a large target number of facilities.
arXiv Detail & Related papers (2024-06-20T23:00:25Z) - No-Regret Online Prediction with Strategic Experts [16.54912614895861]
We study a generalization of the online binary prediction with expert advice framework where at each round, the learner is allowed to pick $m\geq 1$ experts from a pool of $K$ experts.
We focus on the setting in which experts act strategically and aim to maximize their influence on the algorithm's predictions by potentially misreporting their beliefs.
Our goal is to design algorithms that satisfy the following two requirements: 1) $\textit{Incentive-compatible}$: Incentivize the experts to report their beliefs truthfully, and 2) $\textit{No-regret}$: Achieve
arXiv Detail & Related papers (2023-05-24T16:43:21Z) - Non-stationary Projection-free Online Learning with Dynamic and Adaptive
Regret Guarantees [36.746745619968024]
We investigate non-stationary projection-free online learning, and choose dynamic regret and adaptive regret to measure the performance.
Our results are the first general-case dynamic regret bounds for projection-free online learning, and can recover the existing $\mathcal{O}(T^{3/4})$ static regret.
We propose a projection-free method to attain an $\tilde{\mathcal{O}}(\tau^{3/4})$ adaptive regret bound for any interval with length $\tau$.
arXiv Detail & Related papers (2023-05-19T15:02:10Z) - Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs [72.40181882916089]
We show that our algorithm achieves an $\tilde{O}\big((d+\log(|\mathcal{S}|^2 |\mathcal{A}|))\sqrt{K}\big)$ regret with full-information feedback, where $d$ is the dimension of a known feature mapping linearly parametrizing the unknown transition kernel of the MDP, $K$ is the number of episodes, and $|\mathcal{S}|$ and $|\mathcal{A}|$ are the cardinalities of the state and action spaces.
arXiv Detail & Related papers (2023-05-15T05:37:32Z) - Streaming Algorithms for Learning with Experts: Deterministic Versus
Robust [62.98860182111096]
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or times).
The goal is to make a prediction with the minimum cost, specifically compared to the best expert in the set.
We show a space lower bound of $\widetilde{\Omega}\left(\frac{nM}{RT}\right)$ for any deterministic algorithm that achieves regret $R$ when the best expert makes $M$ mistakes.
arXiv Detail & Related papers (2023-03-03T04:39:53Z) - Private Online Prediction from Experts: Separations and Faster Rates [74.52487417350221]
Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints.
We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries.
arXiv Detail & Related papers (2022-10-24T18:40:19Z) - Efficient and Optimal Fixed-Time Regret with Two Experts [5.650647159993238]
Prediction with expert advice is a foundational problem in online learning.
In instances with $T$ rounds and $n$ experts, the classical Multiplicative Weights Update method suffers at most $\sqrt{(T/2)\ln n}$ regret when $T$ is known beforehand.
When the number of experts $n$ is small/fixed, algorithms with better regret guarantees exist.
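As a concrete reference point for the bound quoted above, here is a minimal sketch of the classical exponential-weights (Multiplicative Weights Update) forecaster with the horizon-tuned learning rate $\eta = \sqrt{8\ln n / T}$, under the standard assumption of losses in $[0,1]$; the function name and interface are illustrative, not from the cited paper.

```python
import numpy as np

def multiplicative_weights_regret(loss_fn, n, T):
    """Sketch of the classical Multiplicative Weights Update (Hedge) forecaster.

    With losses in [0, 1] and the horizon-tuned rate eta = sqrt(8 ln n / T),
    the standard analysis bounds the expected regret by sqrt((T / 2) * ln n).
    loss_fn(t) is assumed to return a length-n array of expert losses for round t.
    """
    eta = np.sqrt(8.0 * np.log(n) / T)
    log_w = np.zeros(n)
    learner_loss = 0.0
    expert_loss = np.zeros(n)
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                             # current distribution over experts
        losses = np.asarray(loss_fn(t), dtype=float)
        learner_loss += p @ losses               # expected loss of the randomized learner
        expert_loss += losses
        log_w -= eta * losses                    # multiplicative (exponential) update
    return learner_loss - expert_loss.min()      # regret against the best expert
```

The horizon-tuned rate is what makes the bound above hold when $T$ is known in advance; anytime variants use a time-varying rate instead.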
arXiv Detail & Related papers (2022-03-15T01:07:09Z) - Online Selective Classification with Limited Feedback [82.68009460301585]
We study selective classification in the online learning model, wherein a predictor may abstain from classifying an instance.
Among the salient aspects of the setting we consider is that the data may be non-realisable, due to which abstention may be a valid long-term action.
We construct simple versioning-based schemes for any $\mu \in (0,1]$ that make at most $T^{\mu}$ mistakes while incurring $\tilde{O}(T^{1-\mu})$ excess abstention against adaptive adversaries.
arXiv Detail & Related papers (2021-10-27T08:00:53Z) - Online Apprenticeship Learning [58.45089581278177]
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function.
The goal is to find a policy that matches the expert's performance on some predefined set of cost functions.
We show that the OAL problem can be effectively solved by combining two mirror descent based no-regret algorithms.
arXiv Detail & Related papers (2021-02-13T12:57:51Z) - Nonstochastic Bandits with Infinitely Many Experts [1.7188280334580197]
We study the problem of nonstochastic bandits with infinitely many experts.
We propose a variant of Exp4.P that, for finitely many experts, enables inference of correct expert rankings.
We then incorporate the variant into a meta-algorithm that works on infinitely many experts.
arXiv Detail & Related papers (2021-02-09T22:42:36Z) - Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB).
arXiv Detail & Related papers (2020-06-17T22:32:19Z) - Toward Optimal Adversarial Policies in the Multiplicative Learning
System with a Malicious Expert [87.12201611818698]
We consider a learning system that combines experts' advice to predict a sequence of true outcomes.
It is assumed that one of the experts is malicious and aims to impose the maximum loss on the system.
We show that a simple greedy policy of always reporting false prediction is optimal with an approximation ratio of $1+O(\sqrt{\frac{\ln N}{N}})$.
For the online setting where the malicious expert can adaptively make its decisions, we show that the optimal online policy can be efficiently computed by solving a dynamic program in $O(N^3)$.
arXiv Detail & Related papers (2020-01-02T18:04:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.