Communication-Efficient Decentralized Online Continuous DR-Submodular
Maximization
- URL: http://arxiv.org/abs/2208.08681v1
- Date: Thu, 18 Aug 2022 07:32:28 GMT
- Title: Communication-Efficient Decentralized Online Continuous DR-Submodular
Maximization
- Authors: Qixin Zhang, Zengde Deng, Xiangru Jian, Zaiyi Chen, Haoyuan Hu, Yu
Yang
- Abstract summary: We present two decentralized online algorithms for the monotone continuous DR-submodular maximization problem.
The first one, One-shot Decentralized Meta-Frank-Wolfe (Mono-DMFW), achieves a $(1-1/e)$-regret bound of $O(T^{4/5})$.
Next, inspired by the non-oblivious boosting function \citep{zhang2022boosting}, we propose the Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm.
- Score: 11.889570525184801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Maximizing a monotone submodular function is a fundamental task in machine
learning, economics, and statistics. In this paper, we present two
communication-efficient decentralized online algorithms for the monotone
continuous DR-submodular maximization problem, both of which reduce the number
of per-function gradient evaluations and per-round communication complexity
from $T^{3/2}$ to $1$. The first one, One-shot Decentralized Meta-Frank-Wolfe
(Mono-DMFW), achieves a $(1-1/e)$-regret bound of $O(T^{4/5})$. As far as we
know, this is the first one-shot and projection-free decentralized online
algorithm for monotone continuous DR-submodular maximization. Next, inspired by
the non-oblivious boosting function \citep{zhang2022boosting}, we propose the
Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm, which attains
a $(1-1/e)$-regret of $O(\sqrt{T})$. To the best of our knowledge, this is the
first result to obtain the optimal $O(\sqrt{T})$ against a
$(1-1/e)$-approximation with only one gradient query for each local objective
function per step. Finally, various experimental results confirm the
effectiveness of the proposed methods.
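Since the abstract reports only the regret and communication bounds, the Python sketch below shows what a single DOBGA-style round at one node might look like, assuming a one-sample boosted gradient estimate $e^{z-1}\nabla f(z\boldsymbol{x})$ with $z \sim \mathrm{Unif}(0,1)$, a unit-box constraint set $[0,1]^d$, and hypothetical gossip weights; it is a sketch under these assumptions, not the paper's pseudocode.

```python
import numpy as np

# Illustrative sketch (not the authors' pseudocode) of one DOBGA-style round
# at a single node: a one-sample boosted gradient estimate, gossip averaging
# with neighbors, and projection onto a box constraint. The gossip weights,
# constraint set, and step size below are assumptions for illustration only.

def boosted_gradient(grad_f, x, rng):
    """One-sample estimate of the boosted (non-oblivious) gradient:
    draw z ~ Unif(0, 1) and query grad_f once at z * x, weighted by e^(z-1)."""
    z = rng.uniform(0.0, 1.0)
    return np.exp(z - 1.0) * grad_f(z * x)

def dobga_round(x_self, x_neighbors, weights, grad_f, step, rng):
    """One communication round: average the local and neighbor iterates
    (one message per neighbor), take a single boosted gradient ascent step,
    then project onto the assumed constraint set [0, 1]^d."""
    x_avg = weights[0] * x_self
    for w, x_n in zip(weights[1:], x_neighbors):
        x_avg = x_avg + w * x_n
    g = boosted_gradient(grad_f, x_self, rng)      # single gradient query per step
    return np.clip(x_avg + step * g, 0.0, 1.0)     # ascent step + projection

# Toy usage: f(x) = sum(log(1 + x_i)) is monotone and concave, hence
# DR-submodular on the non-negative orthant; its gradient is 1 / (1 + x).
rng = np.random.default_rng(0)
grad_f = lambda x: 1.0 / (1.0 + x)
x = np.full(5, 0.5)
neighbors = [np.full(5, 0.2), np.full(5, 0.8)]
x = dobga_round(x, neighbors, weights=[0.5, 0.25, 0.25],
                grad_f=grad_f, step=0.1, rng=rng)
print(x)
```

The single call to grad_f per round mirrors the one-gradient-query-per-step property claimed in the abstract; the consensus weights and step size would have to be chosen as in the paper's analysis.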
Related papers
- Federated Combinatorial Multi-Agent Multi-Armed Bandits [79.1700188160944]
This paper introduces a federated learning framework tailored for online combinatorial optimization with bandit feedback.
In this setting, agents select subsets of arms, observe noisy rewards for these subsets without accessing individual arm information, and can cooperate and share information at specific intervals.
arXiv Detail & Related papers (2024-05-09T17:40:09Z) - Improved Projection-free Online Continuous Submodular Maximization [35.324719857218014]
We investigate the problem of online learning with monotone and continuous DR-submodular reward functions.
Previous studies have proposed an efficient projection-free algorithm called Mono-Frank-Wolfe (Mono-FW) using $O(T)$ gradient evaluations.
We propose an improved projection-free algorithm, namely POBGA, which reduces the regret bound to $O(T^{3/4})$ while keeping the same computational complexity.
arXiv Detail & Related papers (2023-05-29T02:54:31Z) - Online Learning for Non-monotone Submodular Maximization: From Full
Information to Bandit Feedback [12.914842850902456]
This paper revisits the online non-monotone continuous DR-submodular problem over a down-closed convex set.
We present the Meta-MFW algorithm achieving a $1/e$-regret bound of $O(\sqrt{T})$.
Next, we extend Mono-MFW to the bandit setting and propose the Bandit-MFW algorithm, which attains a $1/e$-regret bound of $O(T^{8/9})$.
arXiv Detail & Related papers (2022-08-16T09:32:37Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than that of its closest $t$-piece degree-$d$ polynomial approximation.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Continuous Submodular Maximization: Boosting via Non-oblivious Function [12.755674710719616]
In this paper, we revisit constrained and continuous submodular maximization in both offline and online settings.
We use the factor-revealing optimization equation to derive an optimal auxiliary function $F$ for the problem $\max_{\boldsymbol{x}\in\mathcal{C}} f(\boldsymbol{x})$.
In the online setting, we propose a boosting online gradient algorithm that achieves a regret of $O(\sqrt{D})$ under adversarial delays of gradient feedback, where $D$ is the sum of the delays.
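For context, one commonly cited form of such a non-oblivious auxiliary function in the DR-submodular case ($\gamma = 1$) is the following; this is an illustrative assumption drawn from the boosting literature, not a quotation of this paper's abstract: $F(\boldsymbol{x}) = \int_{0}^{1} \frac{e^{z-1}}{z}\big(f(z\boldsymbol{x}) - f(\boldsymbol{0})\big)\,dz$, with $\nabla F(\boldsymbol{x}) = \int_{0}^{1} e^{z-1}\nabla f(z\boldsymbol{x})\,dz$, so a single draw $z \sim \mathrm{Unif}(0,1)$ yields an unbiased one-query gradient estimate $e^{z-1}\nabla f(z\boldsymbol{x})$ of the kind the DOBGA algorithm above builds on.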
arXiv Detail & Related papers (2022-01-03T15:10:17Z) - Faster First-Order Algorithms for Monotone Strongly DR-Submodular
Maximization [11.919431962501479]
Continuous DR-submodular functions satisfy the Diminishing Returns (DR) property, which implies that they are concave along non-negative directions.
We propose a new algorithm that matches the provably optimal $1-\frac{c}{e}$ approximation ratio after only $\lceil \frac{L}{\mu} \rceil$ iterations.
arXiv Detail & Related papers (2021-11-15T18:53:14Z) - Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(\boldsymbol{x}) = G(\boldsymbol{x}) + C(\boldsymbol{x})$ over a convex body, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - The Power of Subsampling in Submodular Maximization [51.629656762796564]
We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods.
We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
arXiv Detail & Related papers (2021-04-06T20:25:57Z) - Nearly Minimax Optimal Reinforcement Learning for Linear Mixture Markov
Decision Processes [91.38793800392108]
We study reinforcement learning with linear function approximation where the underlying transition probability kernel of the Markov decision process (MDP) is a linear mixture model.
We propose a new, computationally efficient algorithm with linear function approximation named $\text{UCRL-VTR}^{+}$ for the aforementioned linear mixture MDPs.
To the best of our knowledge, these are the first computationally efficient, nearly minimax optimal algorithms for RL with linear function approximation.
arXiv Detail & Related papers (2020-12-15T18:56:46Z) - Regularized Submodular Maximization at Scale [45.914693923126826]
Submodularity is inherently related to the notions of diversity, coverage, and representativeness.
We propose methods for maximizing a regularized submodular function $f = g - \ell$, expressed as the difference between a monotone submodular function $g$ and a modular function $\ell$.
arXiv Detail & Related papers (2020-02-10T02:37:18Z)