Distributed Batch Matrix Multiplication: Trade-Offs in Download Rate, Randomness, and Privacy
- URL: http://arxiv.org/abs/2509.15047v1
- Date: Thu, 18 Sep 2025 15:10:43 GMT
- Title: Distributed Batch Matrix Multiplication: Trade-Offs in Download Rate, Randomness, and Privacy
- Authors: Amirhosein Morteza, Remi A. Chou
- Abstract summary: We study the trade-off between communication rate and privacy for distributed batch matrix multiplication. In our setting, $\bold{B}$ is publicly accessible by all the servers while $\bold{A}$ must remain private.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the trade-off between communication rate and privacy for distributed batch matrix multiplication of two independent sequences of matrices $\bold{A}$ and $\bold{B}$ with uniformly distributed entries. In our setting, $\bold{B}$ is publicly accessible by all the servers while $\bold{A}$ must remain private. A user is interested in evaluating the product $\bold{AB}$ with the responses from the $k$ fastest servers. For a given parameter $\alpha \in [0, 1]$, our privacy constraint must ensure that any set of $\ell$ colluding servers cannot learn more than a fraction $\alpha$ of $\bold{A}$. Additionally, we study the trade-off between the amount of local randomness needed at the encoder and privacy. Finally, we establish the optimal trade-offs when the matrices are square and identify a linear relationship between information leakage and communication rate.
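As a concrete (and deliberately simplified) illustration of the setting, the sketch below masks a $(1-\alpha)$ fraction of $\bold{A}$'s rows with a one-time pad over a prime field before sending shares to two servers that hold the public $\bold{B}$. The field size, the row-level granularity, and the two-server layout are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

P = 8191  # small prime so int64 products never overflow; arithmetic is over GF(P)

def encode(A, alpha, rng):
    """Hypothetical encoder: one-time-pad all but an alpha fraction of A's rows.
    The unpadded rows are exactly the leakage the relaxed privacy constraint
    tolerates; protecting more rows costs more local randomness."""
    t = int(np.ceil((1 - alpha) * A.shape[0]))  # rows that must stay hidden
    R = np.zeros_like(A)
    R[:t] = rng.integers(0, P, size=(t, A.shape[1]))
    return (A + R) % P, R

def server(share, B):
    # Each server multiplies its received share by the public matrix B mod P.
    return (share @ B) % P

def decode(Y_masked, Y_pad):
    # The user recovers AB = (A + R)B - RB (mod P).
    return (Y_masked - Y_pad) % P

rng = np.random.default_rng(0)
A = rng.integers(0, P, size=(4, 4))
B = rng.integers(0, P, size=(4, 4))
masked, R = encode(A, alpha=0.5, rng=rng)
assert np.array_equal(decode(server(masked, B), server(R, B)), (A @ B) % P)
```

Sliding $\alpha$ from $0$ to $1$ trades local randomness (the padded rows) against leakage (the clear rows), which is the flavor of trade-off the paper quantifies exactly.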
Related papers
- Secure Multi-User Linearly-Separable Distributed Computing [18.72935137242268]
We show how a parallel treatment of users can yield large parallelization gains with relatively low computation and communication costs. While this approach provides near-optimal performance, its linear nature has raised data secrecy concerns. We here adopt an information-theoretic secrecy framework, seeking guarantees that each user can learn nothing more than its own requested function.
arXiv Detail & Related papers (2026-02-02T18:59:10Z)
- Beyond Covariance Matrix: The Statistical Complexity of Private Linear Regression [66.93988594607842]
Under privacy constraints, the complexity of private linear regression is \emph{not} captured by the usual covariance matrix. We introduce an Information-Weighted Regression method that attains the optimal rates. Notably, our results demonstrate that joint privacy comes at almost no additional cost.
arXiv Detail & Related papers (2025-02-18T18:35:24Z)
- Optimized Tradeoffs for Private Prediction with Majority Ensembling [59.99331405291337]
We introduce the Data-dependent Randomized Response Majority (DaRRM) algorithm. DaRRM is parameterized by a data-dependent noise function $\gamma$, and enables efficient utility optimization over the class of all private algorithms. We show that DaRRM provably enjoys a privacy gain of a factor of 2 over common baselines, with fixed utility.
arXiv Detail & Related papers (2024-11-27T00:48:48Z)
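To ground the mechanism summarized in the entry above, here is a minimal sketch of a $\gamma$-parameterized randomized-response majority; the binary-vote setting and the constant $\gamma$ are assumptions for illustration, not the paper's exact DaRRM algorithm.

```python
import numpy as np

def darrm_style_majority(votes, gamma, rng):
    """Sketch of a gamma-parameterized private majority (an illustration,
    not the paper's DaRRM): with probability gamma(votes) release the true
    majority of the subroutine outputs, otherwise a uniform coin flip."""
    majority = int(2 * np.sum(votes) > len(votes))  # majority of binary votes
    if rng.random() < gamma(votes):                 # data-dependent noise level
        return majority
    return int(rng.random() < 0.5)                  # pure-noise branch

rng = np.random.default_rng(1)
votes = rng.integers(0, 2, size=11)   # outputs of 11 private mechanisms
constant_gamma = lambda v: 0.6        # simplest possible noise function
print(darrm_style_majority(votes, constant_gamma, rng))
```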
- Federated Combinatorial Multi-Agent Multi-Armed Bandits [79.1700188160944]
This paper introduces a federated learning framework tailored for online combinatorial optimization with bandit feedback.
In this setting, agents select subsets of arms, observe noisy rewards for these subsets without accessing individual arm information, and can cooperate and share information at specific intervals.
arXiv Detail & Related papers (2024-05-09T17:40:09Z)
- Private Distribution Learning with Public Data: The View from Sample Compression [15.626115475759713]
We study the problem of private distribution learning with access to public data.
We show that the public-private learnability of a class $\mathcal{Q}$ is connected to the existence of a sample compression scheme.
arXiv Detail & Related papers (2023-08-11T17:15:12Z)
- A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits [77.09836892653176]
We study federated contextual linear bandits, where $M$ agents cooperate with each other to solve a global contextual linear bandit problem with the help of a central server.
We consider the asynchronous setting, where all agents work independently and the communication between one agent and the server will not trigger other agents' communication.
We prove that the regret of $\texttt{FedLinUCB}$ is bounded by $\tilde{O}(d\sqrt{\sum_{m=1}^{M} T_m})$ and the communication complexity is $\tilde{O}(dM^2)$.
arXiv Detail & Related papers (2022-07-07T06:16:19Z)
- Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost [48.288452411283444]
We study distributed contextual linear bandits with stochastic contexts, where $N$ agents act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features.
We propose a distributed batch elimination version of the LinUCB algorithm, DisBE-LUCB, where the agents share information among each other through a central server.
We prove that over $T$ rounds ($NT$ actions in total) the communication cost of DisBE-LUCB is only $\tilde{\mathcal{O}}(dN)$ and its regret is at most $\tilde{\mathcal{O}}(\sqrt{dNT})$.
arXiv Detail & Related papers (2022-05-26T05:56:23Z)
- Robust Estimation of Discrete Distributions under Local Differential Privacy [1.52292571922932]
We consider the problem of estimating a discrete distribution in total variation from $n$ contaminated data batches under a local differential privacy constraint.
We show that combining the two constraints leads to a minimax estimation rate of $\epsilon\sqrt{d/(\alpha^2 k)}+\sqrt{d^2/(\alpha^2 kn)}$ up to a $\sqrt{\log(1/\epsilon)}$ factor.
arXiv Detail & Related papers (2022-02-14T15:59:02Z)
- Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes [78.27542864367821]
Reinforcement learning (RL) algorithms can be used to provide personalized services, which rely on users' private and sensitive data.
To protect the users' privacy, privacy-preserving RL algorithms are in demand.
We propose a novel $(\varepsilon, \delta)$-LDP algorithm for learning a class of Markov decision processes (MDPs) dubbed linear mixture MDPs.
arXiv Detail & Related papers (2021-10-19T17:44:09Z)
- Single-Server Private Linear Transformation: The Joint Privacy Case [10.072633952908456]
This paper introduces the problem of Private Linear Transformation (PLT) which generalizes the problems of private information retrieval and private linear computation.
The problem includes one or more remote server(s) storing (identical copies of) $K$ messages and a user who wants to compute $L$ independent linear combinations of a $D$-subset of messages.
arXiv Detail & Related papers (2021-06-09T17:09:22Z)
- Federated Bandit: A Gossiping Approach [22.595542913855624]
We study \emph{Federated Bandit}, a decentralized Multi-Armed Bandit problem with a set of $N$ agents, who can only communicate their local data with neighbors described by a connected graph $G$.
We show that Gossip_UCB successfully adapts local bandit learning into a global gossiping process for sharing information among connected agents, and achieves guaranteed regret at the order of $O(\max\{\texttt{poly}(N,M)\log T, \texttt{poly}(N,M)\log_{\lambda^{-1}} N\})$.
arXiv Detail & Related papers (2020-10-24T03:44:25Z)
- On Distributed Differential Privacy and Counting Distinct Elements [52.701425652208734]
We study the setup where each of $n$ users holds an element from a discrete set.
The goal is to count the number of distinct elements across all users.
arXiv Detail & Related papers (2020-09-21T04:13:34Z)
- (Locally) Differentially Private Combinatorial Semi-Bandits [26.462686161971803]
We study Combinatorial Semi-Bandits (CSB), an extension of classic Multi-Armed Bandits (MAB), under the Differential Privacy (DP) and the stronger Local Differential Privacy (LDP) settings.
arXiv Detail & Related papers (2020-06-01T04:23:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.