Online Submodular Maximization via Online Convex Optimization
- URL: http://arxiv.org/abs/2309.04339v4
- Date: Mon, 8 Jan 2024 00:16:49 GMT
- Title: Online Submodular Maximization via Online Convex Optimization
- Authors: Tareq Si Salem, Gözde Özcan, Iasonas Nikolaou, Evimaria Terzi, Stratis Ioannidis
- Abstract summary: We prove that online optimization of a large class of submodular functions, namely, weighted threshold potential functions, reduces to online convex optimization (OCO)
We show that our reduction extends to many different versions of the online learning problem, including dynamic regret, bandit, and optimistic-learning settings.
- Score: 11.989282746688064
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study monotone submodular maximization under general matroid constraints
in the online setting. We prove that online optimization of a large class of
submodular functions, namely, weighted threshold potential functions, reduces
to online convex optimization (OCO). This is precisely because functions in
this class admit a concave relaxation; as a result, OCO policies, coupled with
an appropriate rounding scheme, can be used to achieve sublinear regret in the
combinatorial setting. We show that our reduction extends to many different
versions of the online learning problem, including the dynamic regret, bandit,
and optimistic-learning settings.
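As a quick illustration of why the reduction works, here is a sketch of the weighted threshold potential (WTP) class and its concave relaxation. The symbols ($w_\ell$, $b_\ell$, $c_{\ell,i}$) are illustrative and not necessarily the paper's exact notation:

```latex
% Assumed illustrative notation. A weighted threshold potential over ground set V:
f(S) \;=\; \sum_{\ell} w_\ell \,\min\Bigl\{ b_\ell,\ \sum_{i \in S} c_{\ell,i} \Bigr\},
\qquad w_\ell \ge 0,\; c_{\ell,i} \ge 0 .
% Its relaxation on the matroid polytope: each summand is a minimum of affine
% functions, hence concave, and nonnegative sums preserve concavity:
\hat{f}(x) \;=\; \sum_{\ell} w_\ell \,\min\Bigl\{ b_\ell,\ \sum_{i \in V} c_{\ell,i}\, x_i \Bigr\} .
```

An OCO policy can then play fractional points $x_t$ against the concave $\hat{f}$, and a rounding scheme converts each $x_t$ into a feasible set, which is how the sublinear regret guarantee carries over to the combinatorial setting.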
Related papers
- Upper-Linearizability of Online Non-Monotone DR-Submodular Maximization over Down-Closed Convex Sets [40.61353645255313]
We study online maximization of non-monotone Diminishing-Returns (DR) submodular functions over down-closed convex sets.
Our main contribution is a new structural result showing that this class is $1/e$-linearizable under a carefully designed exponential reparametrization.
As a result, we obtain $O(T^{1/2})$ static regret with a single query per round and unlock adaptive and dynamic regret guarantees (the standard $α$-regret notion is recalled below).
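For reference, the static $\alpha$-regret notion typically used for online (DR-)submodular maximization, here with approximation ratio $\alpha = 1/e$, is the standard definition below (not quoted from the paper):

```latex
\mathrm{Regret}_{\alpha}(T) \;=\; \alpha \,\max_{u \in \mathcal{K}} \sum_{t=1}^{T} f_t(u) \;-\; \sum_{t=1}^{T} f_t(x_t),
\qquad \mathrm{Regret}_{1/e}(T) \;=\; O\!\left(T^{1/2}\right).
```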
arXiv Detail & Related papers (2026-02-24T05:59:42Z)
- $γ$-weakly $θ$-up-concavity: Linearizable Non-Convex Optimization with Applications to DR-Submodular and OSS Functions [52.031993908548294]
We introduce $γ$-weakly $θ$-up-concavity, a novel first-order condition that characterizes a broad class of approximately linearizable functions.
Our framework recovers the optimal coefficient for the DR-submodular class and significantly improves existing approximation coefficients.
arXiv Detail & Related papers (2026-02-13T22:34:44Z)
- Decentralized Projection-free Online Upper-Linearizable Optimization with Applications to DR-Submodular Optimization [29.705337940879705]
We introduce a novel framework for decentralized projection-free optimization, combining decentralized optimization techniques with the flexibility of the upper-linearizable function framework.
arXiv Detail & Related papers (2025-01-30T07:28:34Z)
- Gradient-Variation Online Learning under Generalized Smoothness [56.38427425920781]
Gradient-variation online learning aims to achieve regret guarantees that scale with variations in the gradients of the online functions.
Recent efforts in neural network optimization suggest a generalized smoothness condition that allows smoothness to correlate with gradient norms.
We provide applications to fast-rate convergence in games and to extended adversarial optimization (a common formalization of the smoothness condition is sketched below).
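One common formalization of generalized smoothness from the neural-network optimization literature is the $(L_0, L_1)$-smoothness condition; whether this exact variant is the one used in the paper is an assumption here:

```latex
% (L_0, L_1)-smoothness: the effective smoothness constant may grow with the
% gradient norm (local form, stated for \|x - y\| \le 1/L_1):
\|\nabla f(x) - \nabla f(y)\| \;\le\; \bigl(L_0 + L_1 \|\nabla f(x)\|\bigr)\,\|x - y\| .
```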
arXiv Detail & Related papers (2024-08-17T02:22:08Z)
- From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization [33.38582292895673]
This paper introduces a notion that extends concavity and DR-submodularity in various settings, including monotone and non-monotone cases.
A general meta-algorithm converts algorithms for linear/quadratic maximization into ones that optimize upper-linearizable/quadratizable functions.
arXiv Detail & Related papers (2024-04-27T06:19:30Z)
- Bridging the Gap Between General and Down-Closed Convex Sets in Submodular Maximization [8.225819874406238]
Mualem [mualem23re] shows that this approach cannot smoothly interpolate between down-closed and non-down-closed constraints.
In this work, we suggest novel offline and online algorithms based on a natural decomposition of the body into two distinct convex bodies.
We also empirically demonstrate the superiority of our proposed algorithms across three offline and two online applications.
arXiv Detail & Related papers (2024-01-17T14:56:42Z)
- Meta-Learning Adversarial Bandit Algorithms [55.72892209124227]
We study online meta-learning with bandit feedback.
We learn to tune a generalization of online mirror descent (OMD) with self-concordant barrier regularizers (the basic OMD update is recalled below).
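For context, the basic online mirror descent update being tuned has the standard form below, with step size $\eta$ and regularizer $R$ (here a self-concordant barrier); the meta-learning layer that tunes these is the paper's contribution and is not shown:

```latex
x_{t+1} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{K}}\; \eta\,\langle \nabla f_t(x_t),\, x\rangle \;+\; D_R(x, x_t),
\qquad D_R(x,y) \;=\; R(x) - R(y) - \langle \nabla R(y),\, x - y\rangle .
```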
arXiv Detail & Related papers (2023-07-05T13:52:10Z)
- Online Dynamic Submodular Optimization [0.0]
We propose new algorithms with provable performance for online binary optimization.
We numerically test our algorithms in two power system applications: fast-timescale demand response and real-time distribution network reconfiguration.
arXiv Detail & Related papers (2023-06-19T10:37:15Z)
- Universal Online Optimization in Dynamic Environments via Uniclass Prediction [0.0]
We propose a novel and intuitive framework for universal online optimization in dynamic environments.
Our strategy does not rely on the construction of a set of experts and an accompanying meta-algorithm.
This is the first paper proposing a universal approach with state-of-the-art dynamic regret guarantees even for general convex cost functions.
arXiv Detail & Related papers (2023-02-13T03:00:45Z)
- Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback [98.7678704343537]
We focus on a class of nonsubmodular functions with special structure, and prove regret guarantees for several variants of the online and approximate online bandit gradient descent algorithms.
We derive bounds on the agent's regret in both the full-information and bandit-feedback settings, even when the delay between choosing a decision and receiving the incurred cost is unbounded (a minimal sketch of delayed-feedback gradient descent follows).
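A minimal sketch of gradient descent with delayed feedback, under assumed quadratic losses and a fixed delay; this illustrates the delayed-feedback mechanic only and is not the paper's algorithm:

```python
import numpy as np

# Sketch: online gradient descent where the gradient computed at round t only
# arrives `delay` rounds later. Losses f_t(x) = ||x - c_t||^2 are illustrative.
rng = np.random.default_rng(0)
T, dim, eta, delay = 20, 3, 0.1, 2
targets = rng.normal(size=(T, dim))   # adversary's sequence c_t

x = np.zeros(dim)
pending = {}                          # arrival round -> delayed gradients
total_loss = 0.0
for t in range(T):
    total_loss += float(np.sum((x - targets[t]) ** 2))  # suffer f_t(x_t)
    pending.setdefault(t + delay, []).append(2.0 * (x - targets[t]))
    for g in pending.pop(t, []):      # apply whatever feedback arrives now
        x = x - eta * g
    x = np.clip(x, -2.0, 2.0)         # projection onto a bounded box
print(f"average loss: {total_loss / T:.3f}")
```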
arXiv Detail & Related papers (2022-05-15T08:27:12Z)
- Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results adapt to the intrinsic difficulty of the problem: the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case (the dynamic regret notion is recalled below).
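For reference, dynamic regret is measured against an arbitrary comparator sequence, and problem-dependent bounds of this kind are typically stated in terms of the path length $P_T$ (standard definitions, not quotes from the paper):

```latex
\mathrm{D\text{-}Regret}(T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\qquad P_T \;=\; \sum_{t=2}^{T} \|u_t - u_{t-1}\| .
```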
arXiv Detail & Related papers (2021-12-29T02:42:59Z)
- Universal Online Convex Optimization Meets Second-order Bounds [74.0120666722487]
We propose a simple strategy for universal online convex optimization.
The key idea is to construct a set of experts to process the original online functions, and deploy a meta-algorithm over the linearized losses.
In this way, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds (a minimal sketch of this pattern follows).
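A minimal sketch of the expert/meta-algorithm pattern described above, assuming a Hedge-style meta-learner over two black-box OGD experts and linearized losses $\langle \nabla f_t(x_t), \cdot\rangle$; names and constants are illustrative, not the paper's exact method:

```python
import numpy as np

# Sketch: Hedge meta-algorithm over two expert online solvers (OGD with
# different step sizes), weighted via linearized losses. Illustrative only.
rng = np.random.default_rng(1)
T, dim = 50, 2
targets = rng.normal(size=(T, dim))       # losses f_t(x) = ||x - c_t||^2

step_sizes = [0.05, 0.5]                  # each expert is OGD with its own eta
experts = [np.zeros(dim) for _ in step_sizes]
w = np.ones(len(step_sizes))              # meta-algorithm weights
lr_meta = 0.1

for t in range(T):
    p = w / w.sum()
    x = sum(pi * xi for pi, xi in zip(p, experts))  # play the weighted mix
    grad = 2.0 * (x - targets[t])                   # gradient of f_t at x_t
    lin = np.array([float(grad @ xi) for xi in experts])  # linearized losses
    w = w * np.exp(-lr_meta * lin)                  # Hedge update
    w /= w.sum()                                    # renormalize for stability
    for i, eta in enumerate(step_sizes):            # experts process f_t
        experts[i] = np.clip(
            experts[i] - eta * 2.0 * (experts[i] - targets[t]), -2.0, 2.0
        )
```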
arXiv Detail & Related papers (2021-05-08T11:43:49Z)
- Boosting for Online Convex Optimization [64.15578413206715]
We consider the decision-making framework of online convex optimization with a large number of experts.
We define a weak learning algorithm as a mechanism that guarantees approximate regret against a base class of experts.
We give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class.
arXiv Detail & Related papers (2021-02-18T12:30:49Z)