Private Online Prefix Sums via Optimal Matrix Factorizations
- URL: http://arxiv.org/abs/2202.08312v1
- Date: Wed, 16 Feb 2022 19:39:58 GMT
- Title: Private Online Prefix Sums via Optimal Matrix Factorizations
- Authors: Brendan McMahan, Keith Rush and Abhradeep Guha Thakurta
- Abstract summary: In particular, we characterize optimal factorizations of linear queries under online constraints, deriving existence, uniqueness, and explicit expressions.
These solutions improve over the existing state-of-the-art by a significant constant factor, and avoid some of the artifacts introduced by the use of the tree data structure.
- Score: 8.164433158925592
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Motivated by differentially-private (DP) training of machine learning models
and other applications, we investigate the problem of computing prefix sums in
the online (streaming) setting with DP. This problem has previously been
addressed by special-purpose tree aggregation schemes with hand-crafted
estimators. We show that these previous schemes can all be viewed as specific
instances of a broad class of matrix-factorization-based DP mechanisms, and
that in fact much better mechanisms exist in this class.
In particular, we characterize optimal factorizations of linear queries under
online constraints, deriving existence, uniqueness, and explicit expressions
that allow us to efficiently compute optimal mechanisms, including for online
prefix sums. These solutions improve over the existing state-of-the-art by a
significant constant factor, and avoid some of the artifacts introduced by the
use of the tree data structure.
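To make the mechanism class concrete, the following is a minimal sketch (Python/NumPy, not code from the paper) of a matrix-factorization DP mechanism for online prefix sums. The prefix-sum query is the lower-triangular all-ones matrix A; any factorization A = B C with lower-triangular B and C yields an online mechanism that releases B(Cx + z) for Gaussian noise z. The square-root factorization used below is just one member of this class, chosen because it has a simple closed form; the paper's optimal factors come from its explicit characterization, which this sketch does not reproduce.

```python
import numpy as np

def prefix_matrix(n):
    """Lower-triangular all-ones matrix A, so (A @ x)[t] = x[0] + ... + x[t]."""
    return np.tril(np.ones((n, n)))

def factorization_mechanism(x, B, C, sigma, rng):
    """Release B @ (C @ x + z) with z ~ N(0, sigma^2 I).

    With lower-triangular B and C, output t depends only on x[0..t], so the
    mechanism runs in the online (streaming) setting. sigma is calibrated to
    the L2 sensitivity of C @ x, i.e. the largest column norm of C for
    neighboring streams differing in one entry by at most 1.
    """
    z = rng.normal(scale=sigma, size=len(x))
    return B @ (C @ x + z)

n = 8
A = prefix_matrix(n)

# Illustrative factorization A = M @ M, with M the lower-triangular Toeplitz
# square root of A; its entries are the Taylor coefficients of (1 - t)^(-1/2).
# Tree aggregation corresponds to a different (rectangular) choice of B and C
# within the same mechanism class.
r = np.ones(n)
for k in range(1, n):
    r[k] = r[k - 1] * (2 * k - 1) / (2 * k)
M = np.zeros((n, n))
for i in range(n):
    M[i, : i + 1] = r[: i + 1][::-1]
assert np.allclose(M @ M, A)

sensitivity = np.linalg.norm(M, axis=0).max()  # max column L2 norm of C = M
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=n).astype(float)   # toy {0,1} stream
estimate = factorization_mechanism(x, B=M, C=M, sigma=sensitivity, rng=rng)
print(np.round(estimate, 2), A @ x)            # noisy vs. exact prefix sums
```

Swapping in different lower-triangular factors (the binary-tree factors, or the optimal ones the paper derives) changes only B and C; the privacy/utility trade-off is then governed by the column norms of C and the row norms of B.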
Related papers
- Structured Regularization for Constrained Optimization on the SPD Manifold [1.1126342180866644]
We introduce a class of structured regularizers, based on symmetric gauge functions, which allow for solving constrained optimization on the SPD manifold with faster unconstrained methods.
We show that our structured regularizers can be chosen to preserve or induce desirable structure, in particular convexity and "difference of convex" structure.
arXiv Detail & Related papers (2024-10-12T22:11:22Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Refined Mechanism Design for Approximately Structured Priors via Active Regression [50.71772232237571]
We consider the problem of a revenue-maximizing seller with a large number of items for sale to $n$ strategic bidders.
It is well-known that optimal and even approximately-optimal mechanisms for this setting are notoriously difficult to characterize or compute.
arXiv Detail & Related papers (2023-10-11T20:34:17Z)
- Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning [18.55306294638515]
We introduce new differentially private (DP) mechanisms for gradient-based machine learning (ML) with multiple passes (epochs) over a dataset.
We formalize the problem of DP mechanisms for adaptive streams with multiple participations and introduce a non-trivial extension of online matrix factorization DP mechanisms.
arXiv Detail & Related papers (2022-11-12T00:41:11Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z)
- Joint Continuous and Discrete Model Selection via Submodularity [1.332560004325655]
In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem.
In many scenarios, however, the meaningful structure is specified in some discrete space, leading to difficult non-convex optimization problems.
We show how simple continuous or discrete constraints can also be handled for certain problem classes, motivated by robust optimization.
arXiv Detail & Related papers (2021-02-17T21:14:47Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Linear Matrix Factorization Embeddings for Single-objective Optimization Landscapes [0.755972004983746]
In black-box optimization, features have to be derived from a set of $(x,f(x))$ samples.
We show that a linear dimensionality reduction via matrix factorization significantly contributes towards a better detection of correlation between different problem instances.
arXiv Detail & Related papers (2020-09-30T08:46:54Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.