Semi-Bandit Learning for Monotone Stochastic Optimization
- URL: http://arxiv.org/abs/2312.15427v2
- Date: Wed, 13 Aug 2025 07:23:14 GMT
- Title: Semi-Bandit Learning for Monotone Stochastic Optimization
- Authors: Arpit Agarwal, Rohan Ghuge, Viswanath Nagarajan, Zhengjia Zhuo
- Abstract summary: A generic online learning algorithm is developed for a class of "monotone" problems. Our framework applies to several fundamental problems such as prophet inequality, Pandora's box, single-resource revenue management and posted pricing.
- Score: 16.921694787482213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic optimization is a widely used approach for optimization under uncertainty, where uncertain input parameters are modeled by random variables. Exact or approximation algorithms have been obtained for several fundamental problems in this area. However, a significant limitation of this approach is that it requires full knowledge of the underlying probability distributions. Can we still get good (approximation) algorithms if these distributions are unknown, and the algorithm needs to learn them through repeated interactions? In this paper, we resolve this question for a large class of "monotone" stochastic problems, by providing a generic online learning algorithm with $\sqrt{T\log(T)}$ regret relative to the best approximation algorithm (under known distributions). Importantly, our online algorithm works in a semi-bandit setting, where in each period, the algorithm only observes samples from the random variables that were actually probed. Moreover, our result extends to settings with censored and binary feedback, where the policy only observes truncated or thresholded versions of the probed variables. Our framework applies to several fundamental problems such as prophet inequality, Pandora's box, stochastic knapsack, single-resource revenue management and sequential posted pricing.
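To make the semi-bandit feedback model concrete, here is a minimal, hypothetical sketch for one of the listed applications, Pandora's box: the learner keeps empirical samples per box, computes Weitzman-style indices from the empirical distributions, and in each period observes values only from the boxes it actually probed. The names (`weitzman_index`, `pandora_round`) and the plug-in estimation strategy are illustrative assumptions, not the paper's algorithm.

```python
import random

def weitzman_index(samples, cost, lo=0.0, hi=100.0, iters=60):
    # Empirical Weitzman index: the sigma solving E[max(v - sigma, 0)] = cost,
    # found by bisection over the empirical distribution `samples`.
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        gain = sum(max(v - mid, 0.0) for v in samples) / len(samples)
        if gain > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def pandora_round(boxes, samples, rng):
    # One period: probe boxes in decreasing empirical-index order, stopping
    # once the best value seen exceeds the next index (Weitzman's rule).
    # Semi-bandit feedback: only probed boxes contribute new samples.
    order = sorted(boxes, key=lambda b: -weitzman_index(samples[b["name"]], b["cost"]))
    best, payoff = 0.0, 0.0
    for b in order:
        if best >= weitzman_index(samples[b["name"]], b["cost"]):
            break
        v = b["draw"](rng)              # probe: pay the cost, observe the value
        payoff -= b["cost"]
        best = max(best, v)
        samples[b["name"]].append(v)    # learn only from what was probed
    return payoff + best
```

Seeding `samples` with one optimistic value per box plays the role of forced exploration in this sketch; the paper's actual algorithm and its $\sqrt{T\log(T)}$ regret analysis are more delicate.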
Related papers
- Learning Mixture Models via Efficient High-dimensional Sparse Fourier Transforms [13.490333761774542]
We give a $\mathrm{poly}(d,k)$ time and sample algorithm for efficiently learning the parameters of a mixture of $k$ spherical distributions in $d$ dimensions. Our method succeeds whenever the cluster distributions have a characteristic function with sufficiently heavy tails.
arXiv Detail & Related papers (2026-01-08T17:47:58Z) - Stochastic Smoothed Primal-Dual Algorithms for Nonconvex Optimization with Linear Inequality Constraints [12.624604051853657]
We propose smoothed primal-dual algorithms for solving nonconvex optimization problems with linear inequality constraints.
Our algorithms are single-loop and use only one stochastic gradient sample at each iteration.
Unlike existing methods, our algorithms are free of subproblems, large batch sizes, or increasing penalty parameters, and use dual variable updates to ensure feasibility.
arXiv Detail & Related papers (2025-04-10T09:59:43Z) - Pathwise optimization for bridge-type estimators and its applications [49.1574468325115]
Pathwise methods allow one to efficiently compute the full path for penalized estimators. We apply these algorithms to the penalized estimation of processes observed at discrete times.
arXiv Detail & Related papers (2024-12-05T10:38:29Z) - Fully Zeroth-Order Bilevel Programming via Gaussian Smoothing [7.143879014059895]
We study and analyze zeroth-order approximation algorithms for solving bilevel problems.
To the best of our knowledge, this is the first time that sample complexity bounds are established for a fully zeroth-order bilevel optimization algorithm.
arXiv Detail & Related papers (2024-03-29T21:12:25Z) - Adversarially Robust Distributed Count Tracking via Partial Differential
Privacy [17.43748116766233]
We study the distributed tracking model, also known as distributed functional monitoring.
This model involves $k$ sites each receiving a stream of items and communicating with the central server.
For count tracking, it is known that there is a $\sqrt{k}$ gap in communication between deterministic and randomized algorithms.
arXiv Detail & Related papers (2023-11-01T07:42:13Z) - Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement
Learning: Adaptivity and Computational Efficiency [90.40062452292091]
We present the first computationally efficient algorithm for linear bandits with heteroscedastic noise.
Our algorithm is adaptive to the unknown variance of noise and achieves an $\tilde{O}(d\sqrt{\sum_{k=1}^{K}\sigma_k^2} + d)$ regret.
We also propose a variance-adaptive algorithm for linear mixture Markov decision processes (MDPs) in reinforcement learning.
arXiv Detail & Related papers (2023-02-21T00:17:24Z) - Bandit Algorithms for Prophet Inequality and Pandora's Box [13.709418181148148]
We study the Prophet Inequality and Pandora's Box problems in the Multi-Armed Bandits model.
Our results give near-optimal $\tilde{O}(\mathsf{poly}(n)\sqrt{T})$ total regret algorithms for both Prophet Inequality and Pandora's Box.
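As intuition for the Prophet Inequality side (a sketch under assumptions, not the algorithms of this paper): the classic single-threshold rule sets a threshold $\tau$ to half the expected maximum, here estimated from past samples of the maximum. The function names are hypothetical.

```python
def prophet_threshold(sample_maxes):
    # Classic single-threshold rule: tau = E[max_i X_i] / 2,
    # with the expectation replaced by an empirical average.
    return sum(sample_maxes) / (2.0 * len(sample_maxes))

def run_prophet(values, tau):
    # Accept the first value that meets the threshold; 0 if none does.
    for v in values:
        if v >= tau:
            return v
    return 0.0
```

With $\tau$ computed from the true distributions this rule recovers half the prophet's expected value; in the bandit setting, the estimation error in $\tau$ is what feeds into the regret bound.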
arXiv Detail & Related papers (2022-11-16T00:10:35Z) - Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization
under Infinite Noise Variance [17.199063087458907]
We quantify the convergence rate of the Mirror Descent algorithm with a class of uniformly convex mirror maps.
This algorithm does not require any explicit gradient clipping or normalization.
We complement our results with information-theoretic lower bounds showing that no other algorithm using only first-order oracles can achieve improved rates.
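For intuition only, here is the simplest instance of mirror descent, the entropic (multiplicative-weights) update on the probability simplex; the uniformly convex mirror maps this paper uses for infinite-variance noise are a different and more delicate choice. The function name and setup are illustrative.

```python
import math

def mirror_descent_simplex(grad, x0, eta, steps):
    # Entropic mirror descent on the simplex: the mirror map is negative
    # entropy, so the update is multiplicative, x <- x * exp(-eta * g),
    # followed by renormalization. No gradient clipping is applied.
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        z = sum(w)
        x = [wi / z for wi in w]
    return x
```

The choice of mirror map determines which geometry (and which noise conditions) the convergence analysis can exploit.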
arXiv Detail & Related papers (2022-02-23T17:08:40Z) - Regret Bound Balancing and Elimination for Model Selection in Bandits
and RL [34.15978290083909]
We propose a simple model selection approach for algorithms in bandit and reinforcement learning problems.
We prove that the total regret of this approach is bounded by the best valid candidate regret bound times a multiplicative factor.
Unlike recent efforts in model selection for linear bandits, our approach is versatile enough to also cover cases where the context information is generated by an adversarial environment.
arXiv Detail & Related papers (2020-12-24T00:53:42Z) - A Polynomial Time Algorithm for Learning Halfspaces with Tsybakov Noise [55.45544638410858]
We study the problem of PAC learning homogeneous halfspaces in the presence of Tsybakov noise.
Our algorithm learns the true halfspace within any accuracy $\epsilon$.
arXiv Detail & Related papers (2020-10-04T22:19:06Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z) - Learning Sparse Classifiers: Continuous and Mixed Integer Optimization
Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50,000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
arXiv Detail & Related papers (2020-01-17T18:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.