Differentially Private Online Submodular Maximization
- URL: http://arxiv.org/abs/2010.12816v1
- Date: Sat, 24 Oct 2020 07:23:30 GMT
- Title: Differentially Private Online Submodular Maximization
- Authors: Sebastian Perez-Salazar, Rachel Cummings
- Abstract summary: We consider the problem of online submodular maximization under a cardinality constraint with differential privacy (DP).
A stream of $T$ submodular functions over a common finite ground set $U$ arrives online, and at each time-step the decision maker must choose at most $k$ elements of $U$ before observing the function.
The decision maker obtains a payoff equal to the function evaluated on the chosen set, and aims to learn a sequence of sets that achieves low expected regret.
- Score: 16.422215672356167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we consider the problem of online submodular maximization under
a cardinality constraint with differential privacy (DP). A stream of $T$
submodular functions over a common finite ground set $U$ arrives online, and at
each time-step the decision maker must choose at most $k$ elements of $U$
before observing the function. The decision maker obtains a payoff equal to the
function evaluated on the chosen set, and aims to learn a sequence of sets that
achieves low expected regret. In the full-information setting, we develop an
$(\varepsilon,\delta)$-DP algorithm with expected $(1-1/e)$-regret bound of
$\mathcal{O}\left( \frac{k^2\log |U|\sqrt{T \log k/\delta}}{\varepsilon}
\right)$. This algorithm contains $k$ ordered experts that learn the best
marginal increments for each item over the whole time horizon while maintaining
privacy of the functions. In the bandit setting, we provide an
$(\varepsilon,\delta+ O(e^{-T^{1/3}}))$-DP algorithm with expected
$(1-1/e)$-regret bound of $\mathcal{O}\left( \frac{\sqrt{\log
k/\delta}}{\varepsilon} (k (|U| \log |U|)^{1/3})^2 T^{2/3} \right)$. Our
algorithm contains $k$ ordered experts that learn the best marginal item to
select given the items chosen by her predecessors, while maintaining privacy of
the functions. One challenge for privacy in this setting is that the payoff and
feedback of expert $i$ depends on the actions taken by her $i-1$ predecessors.
This particular type of information leakage is not covered by post-processing,
and new analysis is required. Our techniques for maintaining privacy with
feedforward may be of independent interest.
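As a rough illustration of the architecture the abstract describes, here is a minimal Python sketch; it is an assumption-laden toy, not the authors' algorithm or its privacy accounting. The function name `dp_online_submodular` and the parameters `eta` and `eps` are illustrative: $k$ ordered experts each run multiplicative weights over $U$, expert $i$ updates on marginal gains given its predecessors' prefix (the "feedforward"), and per-round Laplace noise stands in for the paper's carefully calibrated DP mechanism.

```python
import math
import random

def dp_online_submodular(stream, U, k, eta=0.1, eps=1.0):
    """Illustrative k-ordered-experts loop (a sketch, NOT the paper's
    exact mechanism or noise calibration): expert i learns which item to
    add given the items picked by experts 1..i-1. Assumes |U| >= k."""
    weights = [{u: 1.0 for u in U} for _ in range(k)]   # one expert per slot
    total = 0.0
    for f in stream:                       # f : set -> [0, 1] arrives online
        S = []
        for i in range(k):                 # experts pick in order
            avail = {u: w for u, w in weights[i].items() if u not in S}
            r = random.random() * sum(avail.values())
            acc = 0.0
            for u, w in avail.items():     # sample item proportional to weight
                acc += w
                if acc >= r:
                    S.append(u)
                    break
        total += f(set(S))                 # payoff on the chosen set
        for i in range(k):                 # full information: update experts
            prefix = set(S[:i])            # feedforward from predecessors
            for u in U:
                gain = f(prefix | {u}) - f(prefix)     # marginal increment
                lap = random.expovariate(eps) - random.expovariate(eps)
                weights[i][u] *= math.exp(eta * (gain + lap))  # exp. weights
    return total
```

The difference of two Exp(`eps`) draws is a Laplace(0, 1/`eps`) sample; in the paper the noise and its composition across rounds are calibrated to yield the stated $(\varepsilon,\delta)$-DP guarantee, which this toy does not claim.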
Related papers
- Minimax Optimal Submodular Optimization with Bandit Feedback [13.805872311596739]
We consider maximizing a monotonic, submodular set function $f: 2^{[n]} \rightarrow [0,1]$ under bandit feedback.
Specifically, $f$ is unknown to the learner, but at each time $t=1,\dots,T$ the learner chooses a set $S_t \subset [n]$ with $|S_t| \leq k$ and receives reward $f(S_t) + \eta_t$, where $\eta_t$ is mean-zero sub-Gaussian noise. A minimal sketch of this feedback protocol appears after this entry.
arXiv Detail & Related papers (2023-10-27T20:19:03Z)
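The sketch below makes the interaction protocol from that summary concrete; the name `bandit_submodular_protocol`, the uniform set-selection strategy, and the noise scale `sigma` are placeholders, not the paper's minimax algorithm.

```python
import random

def bandit_submodular_protocol(f, n, k, T, sigma=0.1):
    """Feedback model from the summary above: each round the learner
    commits to a set S_t with |S_t| <= k and observes only the noisy
    scalar f(S_t) + eta_t, never f itself. The uniform selection
    strategy is a placeholder, not the paper's algorithm."""
    history = []
    for t in range(T):
        S_t = set(random.sample(range(n), k))   # placeholder strategy
        eta_t = random.gauss(0.0, sigma)        # mean-zero (sub-Gaussian) noise
        history.append((S_t, f(S_t) + eta_t))   # only feedback revealed
    return history
```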
- Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem. A naive baseline sketch of the objective appears after this entry.
arXiv Detail & Related papers (2023-06-02T18:55:27Z)
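To make the BMF objective $\min \|\mathbf{A} - \mathbf{U}\mathbf{V}\|_F^2$ over binary factors concrete, here is a naive alternating-minimization baseline in numpy. It is emphatically not the paper's $(1+\varepsilon)$-approximation algorithm, and the helper `bmf_alternating` enumerates all $2^k$ binary rows, so it only works for small rank $k$.

```python
import itertools
import numpy as np

def bmf_alternating(A, k, iters=20, seed=0):
    """Naive alternating-minimization baseline for binary matrix
    factorization (NOT the paper's (1+eps)-approximation): fit binary
    factors U (m x k), V (k x n) for min ||A - U @ V||_F^2 by
    enumerating all 2^k candidate binary rows. Small k only."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    cands = np.array(list(itertools.product([0, 1], repeat=k)))  # 2^k x k
    V = rng.integers(0, 2, size=(k, n))
    U = cands[rng.integers(0, len(cands), size=m)]
    for _ in range(iters):
        # Re-fit each row of U: best candidate row against A's rows given V.
        row_vals = cands @ V                                      # 2^k x n
        row_errs = ((row_vals[None, :, :] - A[:, None, :]) ** 2).sum(axis=2)
        U = cands[row_errs.argmin(axis=1)]                        # m x k
        # Re-fit each column of V: best candidate against A's columns given U.
        col_vals = U @ cands.T                                    # m x 2^k
        col_errs = ((col_vals[:, :, None] - A[:, None, :]) ** 2).sum(axis=0)
        V = cands[col_errs.argmin(axis=0)].T                      # k x n
    return U, V
```

For instance, `U, V = bmf_alternating((np.random.rand(20, 30) > 0.5).astype(int), k=3)` produces a rank-3 binary factorization; each alternating step is optimal given the other factor, but the method can stall in local minima, which is exactly the gap the paper's provable algorithms address.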
- Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime [74.52487417350221]
We consider online learning problems in the realizable setting, where there is a zero-loss solution.
We propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds.
arXiv Detail & Related papers (2023-02-27T21:19:24Z)
- Private Online Prediction from Experts: Separations and Faster Rates [74.52487417350221]
Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints.
We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries.
arXiv Detail & Related papers (2022-10-24T18:40:19Z)
- Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z)
- Faster Rates of Differentially Private Stochastic Convex Optimization [7.93728520583825]
We study the case where the population risk function satisfies the Tsybakov Noise Condition (TNC) with some parameter $\theta>1$.
In the second part, we focus on a special case where the population risk function is strongly convex.
arXiv Detail & Related papers (2021-07-31T22:23:39Z)
- Improved Regret Bounds for Online Submodular Maximization [10.089520556398575]
We consider an online optimization problem where at each step $t \in [T]$, the algorithm chooses an action $x_t$ from a fixed convex and compact domain $\mathcal{K}$.
A utility function $f_t(\cdot)$ is then revealed and the algorithm receives the payoff $f_t(x_t)$.
arXiv Detail & Related papers (2021-06-15T02:05:35Z)
- Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z)
- Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where the cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z)
- Universal guarantees for decision tree induction via a higher-order splitting criterion [16.832966312395126]
Our algorithm achieves provable guarantees for all target functions $f: \{-1,1\}^n \to \{-1,1\}$ with respect to the uniform distribution.
The crux of our extension is a new splitting criterion that takes into account the correlations between $f$ and small subsets of its attributes.
Our algorithm satisfies the following guarantee: for all target functions $f : \{-1,1\}^n \to \{-1,1\}$, sizes $s \in \mathbb{N}$, and error parameters $\epsilon$, it constructs a decision tree. An illustrative splitting-score sketch appears after this entry.
arXiv Detail & Related papers (2020-10-16T21:20:45Z)
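Since the summary describes the higher-order criterion only at a high level, the following scorer is an assumption-laden illustration rather than the paper's exact criterion: the hypothetical helpers `split_score` and `best_split` rank attribute $i$ by summed squared empirical correlations between $f$ and parities over small attribute subsets containing $i$, under the uniform distribution on $\{-1,1\}^n$; the subset-size cap `order` is likewise an assumed parameter.

```python
import itertools
import numpy as np

def split_score(X, y, i, order=2):
    """Illustrative higher-order splitting score (an assumption, not the
    paper's exact criterion): sum of squared empirical correlations
    between f and parities over subsets {i} | T with |T| < order.
    X: samples in {-1,+1}^n (one row each), y: labels in {-1,+1}."""
    n = X.shape[1]
    others = [j for j in range(n) if j != i]
    score = 0.0
    for d in range(order):                     # |T| = 0, 1, ..., order-1
        for T in itertools.combinations(others, d):
            parity = X[:, i] * np.prod(X[:, list(T)], axis=1)  # chi_{T | {i}}
            score += np.mean(parity * y) ** 2  # squared empirical correlation
    return score

def best_split(X, y, order=2):
    """Pick the attribute with the largest higher-order score."""
    return max(range(X.shape[1]), key=lambda i: split_score(X, y, i, order))
```

With `order=1` this reduces to the classical correlation (first-order) splitting rule; larger `order` lets the scorer notice attributes whose relevance only shows up jointly with a few others, which is the intuition behind the paper's criterion.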
This list is automatically generated from the titles and abstracts of the papers in this site.