Secure Multi-User Linearly-Separable Distributed Computing
- URL: http://arxiv.org/abs/2602.02489v1
- Date: Mon, 02 Feb 2026 18:59:10 GMT
- Title: Secure Multi-User Linearly-Separable Distributed Computing
- Authors: Amir Masoud Jafarpisheh, Ali Khalesi, Petros Elia
- Abstract summary: We show how a parallel treatment of users can yield large parallelization gains with relatively low computation and communication costs. While this approach provides near-optimal performance, its linear nature has raised data secrecy concerns. We here adopt an information-theoretic secrecy framework, seeking guarantees that each user can learn nothing more than its own requested function.
- Score: 18.72935137242268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The introduction of the new multi-user linearly-separable distributed computing framework has recently revealed how a parallel treatment of users can yield large parallelization gains with relatively low computation and communication costs. These gains stem from a new approach that converts the computing problem into a sparse matrix factorization problem: a matrix $F$ that describes the users' requests is decomposed as $F = DE$, where a $\gamma$-sparse $E$ defines the task allocation across the $N$ servers, and a $\delta$-sparse $D$ defines the connectivity between the $N$ servers and the $K$ users as well as the decoding process. While this approach provides near-optimal performance, its linear nature has raised data secrecy concerns. We here adopt an information-theoretic secrecy framework, seeking guarantees that each user can learn nothing more than its own requested function. In this context, our main result provides two necessary and sufficient secrecy criteria: (i) for each user $k$ who observes $\alpha_k$ server responses, the common randomness visible to that user must span a subspace of dimension exactly $\alpha_k - 1$; and (ii) for each user, removing from $D$ the columns corresponding to the servers it observes must leave a matrix of rank at least $K-1$. With these conditions in place, we design a general scheme, applicable to finite and non-finite fields alike, that is based on appending to $E$ a basis of $\mathrm{Null}(D)$ and on carefully injecting shared randomness. In many cases, this entails no additional costs. The scheme maintains performance and guarantees perfect information-theoretic secrecy over finite fields; in the real case, the conditions yield an explicit mutual-information bound that can be made arbitrarily small by increasing the variance of the Gaussian common randomness.
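To make the decomposition and criterion (ii) concrete, here is a minimal numerical sketch on a toy random instance. The sizes, the crude sparsity masks, and all variable names are illustrative assumptions: the paper designs $D$ and $E$ rather than sampling them, and criterion (i) on common randomness is not modeled here.

```python
# Minimal numerical sketch of the F = D E decomposition, the rank-based
# secrecy check, and the null-space trick described in the abstract.
# Toy sizes and random matrices are illustrative assumptions only.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
K, N, L = 4, 6, 5                     # users, servers, basis subfunctions

# delta-sparse decoding/connectivity matrix D (K x N) and gamma-sparse
# task-allocation matrix E (N x L); sparsity is imposed crudely by masking.
D = rng.standard_normal((K, N)) * (rng.random((K, N)) < 0.6)
E = rng.standard_normal((N, L)) * (rng.random((N, L)) < 0.6)
F = D @ E                             # the users' requested linear combinations

# Criterion (ii): for every user k, deleting from D the columns of the
# servers that user k observes must leave a matrix of rank at least K - 1.
for k in range(K):
    S_k = np.flatnonzero(D[k])        # servers whose responses user k sees
    keep = [j for j in range(N) if j not in S_k]
    r = np.linalg.matrix_rank(D[:, keep]) if keep else 0
    print(f"user {k}: residual rank {r} (need >= {K - 1}): {r >= K - 1}")

# Scheme ingredient: append a basis Z of Null(D) to E. The appended columns
# contribute nothing to what users decode, so randomness injected along
# these directions perturbs server outputs without touching F.
Z = null_space(D)                     # N x dim(Null(D))
E_aug = np.hstack([E, Z])
assert np.allclose(D @ E_aug, np.hstack([F, np.zeros((K, Z.shape[1]))]))
W = rng.standard_normal((Z.shape[1], L))
assert np.allclose(D @ (E + Z @ W), F)   # null-space noise leaves F intact
```

Note that a randomly drawn $D$ will typically fail the rank criterion; the sketch only illustrates the mechanics of the check and the identity $D(E + ZW) = DE$ that the scheme exploits.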
Related papers
- Private Model Personalization Revisited [13.4143747448136]
We study model personalization under user-level differential privacy (DP) in the shared representation framework. Our goal is to privately recover the shared embedding and the local low-dimensional representations with small excess risk. We present an information-theoretic construction to privately learn the shared embedding and derive a margin-based accuracy guarantee.
arXiv Detail & Related papers (2025-06-24T00:57:17Z)
- A Linearized Alternating Direction Multiplier Method for Federated Matrix Completion Problems [2.2217927229805032]
Matrix completion is fundamental for predicting missing data, with a wide range of applications in personalized healthcare, e-commerce, recommendation systems, and social network analysis. Traditional matrix completion approaches typically assume centralized data storage. We propose \texttt{FedMC-ADMM} for solving federated matrix completion problems.
arXiv Detail & Related papers (2025-03-17T01:57:06Z)
- Beyond Covariance Matrix: The Statistical Complexity of Private Linear Regression [66.93988594607842]
Under privacy constraints, the complexity of private linear regression is \emph{not} captured by the usual covariance matrix. We introduce an Information-Weighted Regression method that attains the optimal rates. Notably, our results demonstrate that joint privacy comes at almost no additional cost.
arXiv Detail & Related papers (2025-02-18T18:35:24Z)
- Optimal Oblivious Subspace Embeddings with Near-optimal Sparsity [3.9657575162895196]
An oblivious subspace embedding for a given $d$-dimensional subspace is a random $m \times n$ matrix $\Pi$ that preserves the norms of all vectors in that subspace within a $1 \pm \epsilon$ factor. We give an oblivious subspace embedding with the optimal dimension $m = \Theta(d/\epsilon^2)$ that has a near-optimal sparsity of $\tilde{O}(1/\epsilon)$ non-zero entries per column of $\Pi$.
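To ground the definition, the toy sketch below draws a dense Gaussian $\Pi$ and empirically checks the norm-preservation property on a random $d$-dimensional subspace; the paper's contribution is achieving this with a sparse $\Pi$, which is not reproduced here, and all sizes are illustrative.

```python
# Empirical check of the (1 +/- eps) norm preservation an oblivious subspace
# embedding must provide, using a dense Gaussian Pi for simplicity (the
# paper constructs a *sparse* Pi; this sketch only illustrates the goal).
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 1000, 10, 0.25
m = int(d / eps**2)                  # embedding dimension m = Theta(d/eps^2)

U, _ = np.linalg.qr(rng.standard_normal((n, d)))  # basis of a d-dim subspace
Pi = rng.standard_normal((m, n)) / np.sqrt(m)     # random embedding matrix

# Sample vectors x = U c in the subspace and compare ||Pi x|| with ||x||.
ratios = []
for _ in range(200):
    x = U @ rng.standard_normal(d)
    ratios.append(np.linalg.norm(Pi @ x) / np.linalg.norm(x))
print(f"norm ratios in [{min(ratios):.3f}, {max(ratios):.3f}],"
      f" target within 1 +/- {eps}")
```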
arXiv Detail & Related papers (2024-11-13T16:58:51Z)
- On Computing Pairwise Statistics with Local Differential Privacy [55.81991984375959]
We study the problem of computing pairwise statistics, i.e., ones of the form $\binom{n}{2}^{-1} \sum_{i \ne j} f(x_i, x_j)$, where $x_i$ denotes the input to the $i$-th user, with differential privacy (DP) in the local model.
This formulation captures important metrics such as Kendall's $\tau$ coefficient, Area Under Curve, Gini's mean difference, Gini's entropy, etc.
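As a concrete, non-private instance of this pairwise form, the sketch below evaluates the statistic for $f(x_i, x_j) = |x_i - x_j|$, the kernel behind Gini's mean difference; the local-DP estimation mechanisms are the paper's subject and are not reproduced here.

```python
# The pairwise statistic above with f(x_i, x_j) = |x_i - x_j|, computed
# exactly as displayed: the sum runs over ordered pairs i != j and is
# normalized by binom(n, 2). No privacy mechanism is applied.
import itertools
import math

x = [3.0, 1.0, 4.0, 1.5, 5.0]   # x_i: the scalar input held by user i
n = len(x)

pair_sum = sum(abs(a - b) for a, b in itertools.permutations(x, 2))
stat = pair_sum / math.comb(n, 2)
print(f"pairwise statistic over {n} users: {stat:.3f}")
```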
arXiv Detail & Related papers (2024-06-24T04:06:09Z)
- Federated Combinatorial Multi-Agent Multi-Armed Bandits [79.1700188160944]
This paper introduces a federated learning framework tailored for online combinatorial optimization with bandit feedback.
In this setting, agents select subsets of arms, observe noisy rewards for these subsets without accessing individual arm information, and can cooperate and share information at specific intervals.
arXiv Detail & Related papers (2024-05-09T17:40:09Z)
- Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
arXiv Detail & Related papers (2023-09-27T17:09:36Z)
- Optimal Algorithms for Latent Bandits with Cluster Structure [50.44722775727619]
We consider the problem of latent bandits with cluster structure where there are multiple users, each with an associated multi-armed bandit problem.
We propose LATTICE, which allows exploitation of the latent cluster structure to provide the minimax optimal regret of $\widetilde{O}(\sqrt{(\mathsf{M}+\mathsf{N})\mathsf{T}})$.
arXiv Detail & Related papers (2023-01-17T17:49:04Z)
- Scalable Differentially Private Clustering via Hierarchically Separated Trees [82.69664595378869]
We show that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee.
Although the worst-case guarantee is worse than that of state of the art private clustering methods, the algorithm we propose is practical.
arXiv Detail & Related papers (2022-06-17T09:24:41Z)
- Single-Server Private Linear Transformation: The Joint Privacy Case [10.072633952908456]
This paper introduces the problem of Private Linear Transformation (PLT) which generalizes the problems of private information retrieval and private linear computation.
The problem includes one or more remote server(s) storing (identical copies of) $K$ messages and a user who wants to compute $L$ independent linear combinations of a $D$-subset of messages.
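The following non-private toy instance illustrates the PLT setup; all names and sizes are illustrative, and the paper's actual subject, jointly hiding the identities of the requested messages from the server, is omitted.

```python
# A non-private toy instance of PLT: the server stores K messages, and the
# user wants L independent linear combinations of a D-subset of them.
# Hiding which subset and combinations are requested is the paper's topic.
import numpy as np

rng = np.random.default_rng(2)
K, D, L = 8, 3, 2
messages = rng.integers(0, 10, size=K)  # the server's K stored messages
subset = [1, 4, 6]                      # the user's D-subset (to be kept private)
V = np.array([[1, 2, 0],                # L x D coefficient matrix; its rows
              [0, 1, 3]])               # are linearly independent
wanted = V @ messages[subset]           # the L values the user is after
print(wanted)
```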
arXiv Detail & Related papers (2021-06-09T17:09:22Z)
- Fuzzy Clustering with Similarity Queries [56.96625809888241]
The fuzzy or soft objective is a popular generalization of the well-known $k$-means problem.
We show that by making a few queries, the problem becomes easier to solve.
arXiv Detail & Related papers (2021-06-04T02:32:26Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- Towards Tight Communication Lower Bounds for Distributed Optimisation [30.134447658488057]
We consider a standard distributed optimisation setting where $N$ machines aim to jointly minimise the sum of the functions $\sum_{i=1}^{N} f_i(x)$.
Our main result provides the first fully unconditional bounds on the total number of bits which need to be sent and received by the $N$ machines.
We show that $\Omega(Nd \log d / N\varepsilon)$ total bits need to be communicated between the machines to find an additive $\varepsilon$-approximation to the minimum of $\sum_{i=1}^{N} f_i(x)$.
arXiv Detail & Related papers (2020-10-16T08:10:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.