Shuffle Private Stochastic Convex Optimization
- URL: http://arxiv.org/abs/2106.09805v1
- Date: Thu, 17 Jun 2021 20:44:00 GMT
- Title: Shuffle Private Stochastic Convex Optimization
- Authors: Albert Cheu and Matthew Joseph and Jieming Mao and Binghui Peng
- Abstract summary: In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler.
The shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy.
We present interactive shuffle protocols for convex optimization.
- Score: 20.379311972125034
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In shuffle privacy, each user sends a collection of randomized messages to a
trusted shuffler, the shuffler randomly permutes these messages, and the
resulting shuffled collection of messages must satisfy differential privacy.
Prior work in this model has largely focused on protocols that use a single
round of communication to compute algorithmic primitives like means,
histograms, and counts. In this work, we present interactive shuffle protocols
for stochastic convex optimization. Our optimization protocols rely on a new
noninteractive protocol for summing vectors of bounded $\ell_2$ norm. By
combining this sum subroutine with techniques including mini-batch stochastic
gradient descent, accelerated gradient descent, and Nesterov's smoothing
method, we obtain loss guarantees for a variety of convex loss functions that
significantly improve on those of the local model and sometimes match those of
the central model.
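As a rough illustration of how the pieces fit together, the sketch below simulates the bounded-$\ell_2$ vector-sum subroutine in an idealized central-model form (clip, sum, add Gaussian noise) and uses it as the gradient oracle inside mini-batch SGD. The paper's actual contribution, encoding clipped vectors as randomized messages for a trusted shuffler, is abstracted away here; all names and parameters (`private_vector_sum`, `noise_scale`, etc.) are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def private_vector_sum(vectors, l2_bound, noise_scale, rng):
    """Idealized stand-in for the paper's summation subroutine: clip each
    user's vector to l2_bound, sum, and add Gaussian noise scaled to the
    sensitivity. (The real protocol instead encodes clipped vectors as
    randomized messages that a trusted shuffler permutes.)"""
    clipped = [v * min(1.0, l2_bound / max(np.linalg.norm(v), 1e-12))
               for v in vectors]
    total = np.sum(clipped, axis=0)
    return total + rng.normal(0.0, noise_scale * l2_bound, size=total.shape)

def minibatch_private_sgd(grad_fn, users, theta0, rounds, batch_size,
                          l2_bound, noise_scale, lr, seed=0):
    """Mini-batch SGD whose per-round gradient is the noisy private sum
    over a fresh batch of users, divided by the batch size."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(rounds):
        batch = rng.choice(len(users), size=batch_size, replace=False)
        grads = [grad_fn(theta, users[i]) for i in batch]
        noisy = private_vector_sum(grads, l2_bound, noise_scale, rng)
        theta -= lr * noisy / batch_size
    return theta
```

Per the abstract, accelerated gradient descent and Nesterov's smoothing slot into the same template by replacing the plain update with the corresponding momentum or smoothed-gradient step, reusing the same sum subroutine.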
Related papers
- Efficient Fault-Tolerant Quantum Protocol for Differential Privacy in the Shuffle Model [2.0794380287086214]
We present a quantum protocol which implicitly implements a random shuffle to realize differential privacy in the shuffle model.
The shuffle model amplifies the privacy guarantees achievable for data contributors.
Examples include implementing the shuffle via mix-networks, or shuffling via a trusted third-party.
We propose a quantum version of the protocol using entanglement of quantum states and show that the shuffle can be implemented without these extra requirements.
arXiv Detail & Related papers (2024-09-06T04:53:19Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm is only accessible to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
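The noisy-evaluation setting above is typically attacked with two-point finite-difference gradient estimators; a minimal sketch of that standard technique (not necessarily the paper's exact estimator):

```python
import numpy as np

def two_point_grad_estimate(f_noisy, x, h, rng):
    """Two-point zeroth-order estimator: query the noisy oracle at
    x + h*u and x - h*u for a random unit direction u, and scale the
    finite difference by the dimension."""
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)
    return (f_noisy(x + h * u) - f_noisy(x - h * u)) / (2.0 * h) * x.size * u
```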
- Fast Semisupervised Unmixing Using Nonconvex Optimization [80.11512905623417]
We introduce a novel nonconvex model for semisupervised/library-based unmixing.
We demonstrate the efficacy of alternating optimization methods for sparse unmixing.
arXiv Detail & Related papers (2024-01-23T10:07:41Z)
- Distributed stochastic proximal algorithm with random reshuffling for non-smooth finite-sum optimization [28.862321453597918]
Non-smooth finite-sum minimization is a fundamental problem in machine learning.
This paper develops a distributed proximal-gradient algorithm with random reshuffling to solve the problem.
arXiv Detail & Related papers (2021-11-06T07:29:55Z)
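A single-node sketch of the proximal-gradient-with-random-reshuffling template (the distributed aspect is omitted; `soft_threshold`, the usual $\ell_1$ prox, is included only as a hypothetical non-smooth example):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, a classic non-smooth regularizer."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_random_reshuffling(grad_fns, prox, x0, epochs, lr, seed=0):
    """Each epoch draws a fresh permutation of the n smooth components and
    takes one proximal-gradient step per component in that order."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(epochs):
        for i in rng.permutation(len(grad_fns)):
            x = prox(x - lr * grad_fns[i](x), lr)
    return x
```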
- A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a stochastic Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate stochastic gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace of a matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
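Mirror descent over a probability simplex (the natural geometry for per-state policies) reduces to a multiplicative-weights update under the entropy mirror map; a minimal sketch of that standard instantiation, not necessarily the paper's exact variant:

```python
import numpy as np

def entropic_md_step(p, grad, lr):
    """Mirror descent step on the probability simplex with the entropy
    mirror map: multiplicative-weights update, then renormalization.
    Assumes p has strictly positive entries."""
    logits = np.log(p) - lr * grad
    w = np.exp(logits - logits.max())  # shift by max for numerical stability
    return w / w.sum()
```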
- On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization [29.42301299741866]
We show that having $O(\log\frac{1}{\epsilon})$ gradient iterations with constant step size, with $O(\log\frac{1}{\epsilon})$ gossip steps between every pair of these iterations, enables convergence to within $\epsilon$ of the optimal value for smooth non-convex objectives.
To our knowledge, this is the first work that derives convergence results for non-convex optimization under compressed communication.
arXiv Detail & Related papers (2020-11-20T21:17:32Z)
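A sketch of the schedule the result above describes: several gossip (mixing) steps with a doubly stochastic matrix between consecutive gradient iterations. The function names and the plain gradient scheme are illustrative assumptions, and compression is omitted:

```python
import numpy as np

def decentralized_gd(grad_fns, X0, W, rounds, gossip_steps, lr):
    """Decentralized gradient descent in which every gradient iteration is
    followed by several gossip steps: each node mixes its iterate with its
    neighbors' via the doubly stochastic matrix W."""
    X = np.array(X0, dtype=float)  # row i holds node i's iterate
    for _ in range(rounds):
        X -= lr * np.stack([g(x) for g, x in zip(grad_fns, X)])
        for _ in range(gossip_steps):
            X = W @ X              # one gossip (mixing) step
    return X
```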
- SGD with shuffling: optimal rates without component convexity and large epoch requirements [60.65928290219793]
We consider the RandomShuffle (shuffle at the beginning of each epoch) and SingleShuffle (shuffle only once) variants of SGD.
We establish minimax optimal convergence rates of these algorithms up to poly-log factor gaps.
We further sharpen the tight convergence results for RandomShuffle by removing the drawbacks common to all prior art.
arXiv Detail & Related papers (2020-06-12T05:00:44Z)
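A minimal sketch contrasting the two shuffling schemes above, assuming plain constant-step SGD over `n` component gradients (`mode` and the function names are illustrative):

```python
import numpy as np

def shuffled_sgd(grad_fns, x0, epochs, lr, mode="random", seed=0):
    """SGD over n components with the two schemes the paper compares:
    'random' reshuffles every epoch, 'single' shuffles exactly once."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    order = rng.permutation(len(grad_fns))  # SingleShuffle: fixed order
    for _ in range(epochs):
        if mode == "random":                # RandomShuffle: fresh order
            order = rng.permutation(len(grad_fns))
        for i in order:
            x = x - lr * grad_fns[i](x)
    return x
```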
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.