Symmetric Boolean Factor Analysis with Applications to InstaHide
- URL: http://arxiv.org/abs/2102.01570v1
- Date: Tue, 2 Feb 2021 15:52:52 GMT
- Title: Symmetric Boolean Factor Analysis with Applications to InstaHide
- Authors: Sitan Chen, Zhao Song, Runzhou Tao, Ruizhe Zhang
- Abstract summary: We show that InstaHide does not possess "fine-grained security" against reconstruction attacks for moderately large k.
Our algorithm, based on tensor decomposition, only requires m to be at least quasi-linear in r.
We complement this result with a quasipolynomial-time algorithm for a worst-case setting of the problem where the collection of k-sparse vectors is chosen arbitrarily.
- Score: 18.143740528953142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we examine the security of InstaHide, a recently proposed scheme
for distributed learning (Huang et al.). A number of recent works have given
reconstruction attacks for InstaHide in various regimes by leveraging an
intriguing connection to the following matrix factorization problem: given the
Gram matrix of a collection of m random k-sparse Boolean vectors in {0,1}^r,
recover the vectors (up to the trivial symmetries). Equivalently, this can be
thought of as a sparse, symmetric variant of the well-studied problem of
Boolean factor analysis, or as an average-case version of the classic problem
of recovering a k-uniform hypergraph from its line graph.
As previous algorithms either required m to be exponentially large in k or
only applied to k = 2, they left open the question of whether InstaHide
possesses some form of "fine-grained security" against reconstruction attacks
for moderately large k. In this work, we answer this in the negative by giving
a simple O(m^{\omega + 1}) time algorithm for the above matrix factorization
problem. Our algorithm, based on tensor decomposition, only requires m to be at
least quasi-linear in r. We complement this result with a quasipolynomial-time
algorithm for a worst-case setting of the problem where the collection of
k-sparse vectors is chosen arbitrarily.
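The factorization problem in the abstract can be made concrete with a small simulation: sample m random k-sparse Boolean vectors in {0,1}^r, reveal only their Gram matrix, and observe that any permutation of the r coordinates leaves the Gram matrix unchanged, which is the "trivial symmetry" up to which recovery is possible. This is an illustrative sketch of the problem instance only; `sample_instance` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def sample_instance(m, k, r, seed=0):
    """Sample m random k-sparse Boolean vectors in {0,1}^r (rows of M)
    and return them with their m x m Gram matrix G = M M^T.
    In the factorization problem, only G is revealed."""
    rng = np.random.default_rng(seed)
    M = np.zeros((m, r), dtype=int)
    for i in range(m):
        # Choose a uniformly random size-k support for each vector.
        support = rng.choice(r, size=k, replace=False)
        M[i, support] = 1
    G = M @ M.T  # G[i, j] = |support_i ∩ support_j|; diagonal entries equal k
    return M, G

M, G = sample_instance(m=6, k=3, r=10)

# Permuting the r coordinates of every vector yields the same Gram matrix,
# so the vectors can only be recovered up to this trivial symmetry.
perm = np.random.default_rng(1).permutation(10)
P = np.eye(10, dtype=int)[perm]
assert np.array_equal((M @ P) @ (M @ P).T, G)
```

For k = 2 this is exactly the problem of recovering a graph from its line graph mentioned in the abstract: each vector is an edge, and G records how many endpoints each pair of edges shares.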
Related papers
- Riemannian Optimization for Non-convex Euclidean Distance Geometry with Global Recovery Guarantees [6.422262171968397]
Two algorithms are proposed to solve the Euclidean Distance Geometry problem.
First algorithm converges linearly to the true solution.
Second algorithm demonstrates strong numerical performance on both synthetic and real data.
arXiv Detail & Related papers (2024-10-08T21:19:22Z)
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed approach avoids computationally expensive spectral methods, relying instead on a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Alternating Mahalanobis Distance Minimization for Stable and Accurate CP Decomposition [4.847980206213335]
We introduce a new formulation for deriving singular values and vectors of a tensor by considering the critical points of a function different from the one used in prior work.
We show that a subsweep of this algorithm can achieve a superlinear convergence rate for exact CPD with known rank.
We then view the algorithm as optimizing a Mahalanobis distance with respect to each factor with ground metric dependent on the other factors.
arXiv Detail & Related papers (2022-04-14T19:56:36Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
- Nonlinear matrix recovery using optimization on the Grassmann manifold [18.655422834567577]
We investigate the problem of recovering a partially observed high-rank clustering matrix whose columns obey a nonlinear structure such as a union of subspaces.
We show that the alternating iterates converge to a unique limit point using the Kurdyka-Łojasiewicz property.
arXiv Detail & Related papers (2021-09-13T16:13:13Z)
- Near-Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time [46.31710224483631]
We show how to bypass the main open question of Nelson and Nguyen (FOCS, 2013) regarding the logarithmic factors in the sketching dimension for existing constant factor approximation oblivious subspace embeddings.
A key technique we use is an explicit mapping of Indyk based on uncertainty principles and extractors.
For the fundamental problems of rank computation and finding a linearly independent subset of columns, our algorithms improve Cheung, Kwok, and Lau (JACM, 2013) and are optimal to within a constant factor and a $\log\log(n)$-factor, respectively.
arXiv Detail & Related papers (2021-07-16T19:34:10Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.