A Characterization of Optimal-Rate Linear Homomorphic Secret Sharing Schemes, and Applications
- URL: http://arxiv.org/abs/2311.14842v1
- Date: Fri, 24 Nov 2023 20:40:50 GMT
- Title: A Characterization of Optimal-Rate Linear Homomorphic Secret Sharing Schemes, and Applications
- Authors: Keller Blackwell, Mary Wootters
- Abstract summary: A Homomorphic Secret Sharing (HSS) scheme is a secret-sharing scheme that shares a secret $x$ among $s$ servers.
A key parameter in HSS schemes is download rate, which quantifies how much information the output client needs to download from each server.
We show that -- perhaps surprisingly -- the set of $\ell$'s for which their construction works is in fact nearly optimal.
- Score: 13.566464121222225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A Homomorphic Secret Sharing (HSS) scheme is a secret-sharing scheme that shares a secret $x$ among $s$ servers, and additionally allows an output client to reconstruct some function $f(x)$, using information that can be locally computed by each server. A key parameter in HSS schemes is download rate, which quantifies how much information the output client needs to download from each server. Recent work (Fosli, Ishai, Kolobov, and Wootters, ITCS 2022) established a fundamental limitation on the download rate of linear HSS schemes for computing low-degree polynomials, and gave an example of HSS schemes that meet this limit. In this paper, we further explore optimal-rate linear HSS schemes for polynomials. Our main result is a complete characterization of such schemes, in terms of a coding-theoretic notion that we introduce, termed optimal labelweight codes. We use this characterization to answer open questions about the amortization required by HSS schemes that achieve optimal download rate. In more detail, the construction of Fosli et al. required amortization over $\ell$ instances of the problem, and only worked for particular values of $\ell$. We show that -- perhaps surprisingly -- the set of $\ell$'s for which their construction works is in fact nearly optimal, possibly leaving out only one additional value of $\ell$. We show this by using our coding-theoretic characterization to prove a necessary condition on the $\ell$'s admitting optimal-rate linear HSS schemes. We then provide a slightly improved construction of optimal-rate linear HSS schemes, where the set of allowable $\ell$'s is optimal in even more parameter settings. Moreover, based on a connection to the MDS conjecture, we conjecture that our construction is optimal for all parameter regimes.
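To make the setting concrete, here is a minimal toy sketch (not the paper's optimal-rate construction) of the linear HSS functionality: additive secret sharing over a prime field, where each server locally applies a public linear map $a \cdot x$ and the output client reconstructs $f(x)$ by summing one downloaded element per server. The field choice and all names are illustrative assumptions.

```python
# Toy additive secret sharing over a prime field (illustrative only;
# the paper's construction and rate analysis are more involved).
import random

P = 2**61 - 1  # a Mersenne prime; field choice is an assumption

def share(x, s):
    """Split secret x into s additive shares mod P."""
    shares = [random.randrange(P) for _ in range(s - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def server_eval(share_i, a):
    """Each server locally multiplies its share by a public coefficient a."""
    return (a * share_i) % P

def reconstruct(outputs):
    """The output client downloads one field element per server and sums."""
    return sum(outputs) % P

x, a, s = 1234, 7, 3
outputs = [server_eval(sh, a) for sh in share(x, s)]
assert reconstruct(outputs) == (a * x) % P
# Download cost: s field elements for one output, i.e. rate 1/s for a
# single instance; the paper studies how amortizing over ell instances
# can improve this rate, and for which values of ell.
```

In this toy scheme the client must download something from every server, which is what makes the download rate a meaningful parameter to optimize.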
Related papers
- Transfer Q Star: Principled Decoding for LLM Alignment [105.89114186982972]
Transfer $Q^*$ estimates the optimal value function for a target reward $r$ through a baseline model.
Our approach significantly reduces the sub-optimality gap observed in prior SoTA methods.
arXiv Detail & Related papers (2024-05-30T21:36:12Z) - Distributed Optimization with Feasible Set Privacy [35.16231062731263]
Two agents learn the optimal solution set while keeping their feasible sets $\mathcal{P}_1$ and $\mathcal{P}_2$ private from each other.
We adopt a sequential private information retrieval (SPIR) framework where one of the agents privately checks membership in $\mathcal{P}$.
We show that, compared to privately acquiring the feasible set $\mathcal{P}_1$ using an SPIR-based private set intersection (PSI) protocol and then finding the optimum, our scheme is better as it incurs less information leakage and a smaller download cost.
arXiv Detail & Related papers (2023-12-04T18:45:04Z) - Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances [42.16343861129583]
We show that a bandit online learning algorithm can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases.
Our work provides the first learning-theoretic treatment of high-precision linear system solvers and the first end-to-end guarantees for data-driven scientific computing.
arXiv Detail & Related papers (2023-10-03T17:51:42Z) - Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes [80.89852729380425]
We propose the first computationally efficient algorithm that achieves the nearly minimax optimal regret $\tilde{O}(d\sqrt{H^3K})$.
Our work provides a complete answer to optimal RL with linear MDPs, and the developed algorithm and theoretical tools may be of independent interest.
arXiv Detail & Related papers (2022-12-12T18:58:59Z) - Shapley-NAS: Discovering Operation Contribution for Neural Architecture Search [96.20505710087392]
We propose a Shapley value based method to evaluate operation contribution (Shapley-NAS) for neural architecture search.
We show that our method outperforms the state-of-the-art methods by a considerable margin with light search cost.
arXiv Detail & Related papers (2022-06-20T14:41:49Z) - Matching Pursuit Based Scheduling for Over-the-Air Federated Learning [67.59503935237676]
This paper develops a class of low-complexity device scheduling algorithms for over-the-air federated learning.
Compared to the state-of-the-art scheme, the proposed scheme achieves drastically lower complexity.
The efficiency of the proposed scheme is confirmed via experiments on the CIFAR dataset.
arXiv Detail & Related papers (2022-06-14T08:14:14Z) - Provably Efficient Generative Adversarial Imitation Learning for Online and Offline Setting with Linear Function Approximation [81.0955457177017]
In generative adversarial imitation learning (GAIL), the agent aims to learn a policy from an expert demonstration so that its performance cannot be discriminated from the expert policy on a certain reward set.
We study GAIL in both online and offline settings with linear function approximation, where both the transition and reward function are linear in the feature maps.
arXiv Detail & Related papers (2021-08-19T16:16:00Z) - Single-Server Private Linear Transformation: The Joint Privacy Case [10.072633952908456]
This paper introduces the problem of Private Linear Transformation (PLT) which generalizes the problems of private information retrieval and private linear computation.
The problem includes one or more remote server(s) storing (identical copies of) $K$ messages and a user who wants to compute $L$ independent linear combinations of a $D$-subset of messages.
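As a toy illustration of the PLT functionality only (the privacy mechanism itself is the substance of that paper and is not sketched here), the user's goal is to obtain $L$ linear combinations of a $D$-subset of the $K$ stored messages. All concrete values below are illustrative assumptions.

```python
# Toy PLT functionality (no privacy): L linear combinations of a
# D-subset of the server's K messages.
K, D, L = 6, 3, 2
messages = [10, 20, 30, 40, 50, 60]   # server database of K messages
subset = [1, 3, 4]                    # indices of the D demanded messages
coeffs = [[1, 2, 0],                  # L x D coefficient matrix
          [0, 1, 3]]

demanded = [messages[i] for i in subset]                      # [20, 40, 50]
combos = [sum(c * m for c, m in zip(row, demanded)) for row in coeffs]
# combos = [1*20 + 2*40 + 0*50, 0*20 + 1*40 + 3*50] = [100, 190]
```

A PLT protocol must let the user learn `combos` while hiding the identities of the demanded messages (and, in the joint-privacy case, the subset as a whole) from the server(s).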
arXiv Detail & Related papers (2021-06-09T17:09:22Z) - On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function [14.205660708980988]
We consider the problem of local planning in fixed-horizon Markov Decision Processes (MDPs) with a generative model.
A recent lower bound established that the related problem when the action-value function of the optimal policy is linearly realizable requires an exponential number of queries.
In this work, we establish that $\mathrm{poly}(H, d)$ learning is possible (with state value function realizability) whenever the action set is small.
arXiv Detail & Related papers (2021-02-03T13:23:15Z) - Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDP with safe exploration in the function approximation setting.
arXiv Detail & Related papers (2020-03-01T17:47:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.