Succinct Oblivious Tensor Evaluation and Applications: Adaptively-Secure Laconic Function Evaluation and Trapdoor Hashing for All Circuits
- URL: http://arxiv.org/abs/2508.09673v1
- Date: Wed, 13 Aug 2025 10:03:03 GMT
- Title: Succinct Oblivious Tensor Evaluation and Applications: Adaptively-Secure Laconic Function Evaluation and Trapdoor Hashing for All Circuits
- Authors: Damiano Abram, Giulio Malavolta, Lawrence Roy
- Abstract summary: Two parties compute an additive secret sharing of the tensor product $\mathbf{x} \otimes \mathbf{y}$ of two vectors. We present a construction of OTE with optimal complexity from the standard learning with errors (LWE) problem. We then show how this new technical tool enables a host of cryptographic primitives, all with security reducible to LWE.
- Score: 17.82458125377762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose the notion of succinct oblivious tensor evaluation (OTE), where two parties compute an additive secret sharing of a tensor product of two vectors $\mathbf{x} \otimes \mathbf{y}$, exchanging two simultaneous messages. Crucially, the size of both messages and of the CRS is independent of the dimension of $\mathbf{x}$. We present a construction of OTE with optimal complexity from the standard learning with errors (LWE) problem. Then we show how this new technical tool enables a host of cryptographic primitives, all with security reducible to LWE, such as: * Adaptively secure laconic function evaluation for depth-$D$ functions $f:\{0, 1\}^m\rightarrow\{0, 1\}^\ell$ with communication $m+\ell+D\cdot \mathrm{poly}(\lambda)$. * A trapdoor hash function for all functions. * An (optimally) succinct homomorphic secret sharing for all functions. * A rate-$1/2$ laconic oblivious transfer for batch messages, which is best possible. In particular, we obtain the first laconic function evaluation scheme that is adaptively secure from the standard LWE assumption, improving upon Quach, Wee, and Wichs (FOCS 2018). As a key technical ingredient, we introduce a new notion of \emph{adaptive lattice encodings}, which may be of independent interest.
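To make the correctness target of OTE concrete, the following toy sketch (with an illustrative modulus and variable names of our choosing) shows what the two parties end up holding: additive shares over $\mathbb{Z}_q$ that are individually uniform and sum to the tensor (outer) product $\mathbf{x} \otimes \mathbf{y}$. It deliberately omits the actual protocol — in particular the LWE machinery and the succinctness property that messages be independent of $\dim(\mathbf{x})$ — and only illustrates the functionality being shared.

```python
import numpy as np

q = 2**16  # toy modulus, chosen for illustration only
rng = np.random.default_rng(0)

def share_tensor(x, y, q, rng):
    """Additively secret-share the tensor (outer) product x (x) y over Z_q.

    Correctness target of OTE only: each share alone is uniformly random,
    and the two shares sum to x (x) y mod q. The real construction obtains
    such shares via a two-message LWE-based protocol whose message sizes
    are independent of dim(x); that machinery is omitted here.
    """
    t = np.outer(x, y) % q
    s0 = rng.integers(0, q, size=t.shape)  # uniformly random share
    s1 = (t - s0) % q                      # complementary share
    return s0, s1

x = rng.integers(0, q, size=8)
y = rng.integers(0, q, size=5)
s0, s1 = share_tensor(x, y, q, rng)
# Reconstruction: the shares sum to the tensor product modulo q.
assert np.array_equal((s0 + s1) % q, np.outer(x, y) % q)
```

Note that this trusted-dealer sketch is insecure and non-succinct by construction; it only pins down the relation $\mathbf{s}_0 + \mathbf{s}_1 = \mathbf{x} \otimes \mathbf{y} \pmod q$ that the OTE protocol realizes obliviously.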
Related papers
- An Efficient Computational Framework for Discrete Fuzzy Numbers Based on Total Orders [41.99844472131922]
We introduce algorithms that exploit the structure of total (admissible) orders to compute the $\textit{pos}$ function. The proposed approach achieves a complexity of $\mathcal{O}(n^2 m \log n)$, which is quadratic in the size of the underlying chain. The results demonstrate that this formulation substantially reduces computational cost.
arXiv Detail & Related papers (2025-11-21T09:35:07Z) - Enjoying Non-linearity in Multinomial Logistic Bandits [56.28491566735463]
We consider the multinomial logistic bandit problem, a variant of the logistic bandit in which a learner interacts with an environment by selecting actions to maximize expected rewards. We extend recent work on understanding the impact of the non-linearity of the logistic model to the multinomial setting and propose an efficient algorithm. Our method yields a problem-dependent regret bound of order $\widetilde{\mathcal{O}}(R d \sqrt{KT}/\kappa_*)$, where $R$ is the norm of the vector of rewards and $K$
arXiv Detail & Related papers (2025-07-07T08:18:25Z) - Optimal Convergence Rates of Deep Neural Network Classifiers [25.56187933090708]
We study the binary classification problem on $[0,1]^d$ under the Tsybakov noise condition. We prove that the optimal convergence rate for the excess 0-1 risk of classifiers is $$\left( \frac{1}{n} \right)^{\frac{\beta\cdot (1\wedge\beta)^q}{\frac{d_*}{s+1}+ \left(1+\frac{1}{s+1}\right)\cdot\beta\cdot (1\wedge\beta)^q}}\;,$$ which is independent of the input dimension $d$.
arXiv Detail & Related papers (2025-06-17T18:13:09Z) - Sign Operator for Coping with Heavy-Tailed Noise in Non-Convex Optimization: High Probability Bounds Under $(L_0, L_1)$-Smoothness [74.18546828528298]
We show that SignSGD with Majority Voting can robustly work over the whole complexity range, with high-probability bounds depending on the parameter $\kappa$.
arXiv Detail & Related papers (2025-02-11T19:54:11Z) - Variance-Reduced Fast Krasnoselkii-Mann Methods for Finite-Sum Root-Finding Problems [8.0153031008486]
We propose a new class of fast Krasnoselkii--Mann methods with variance reduction to solve a finite-sum co-coercive equation $Gx = 0$. Our algorithm is single-loop and leverages a new family of unbiased variance-reduced estimators specifically designed for a wider class of root-finding algorithms. Numerical experiments validate our algorithms and demonstrate their promising performance compared to state-of-the-art methods.
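For context, the classical (non-variance-reduced, non-accelerated) Krasnoselkii--Mann iteration that this line of work builds on can be sketched in a few lines. The sketch below solves a linear co-coercive equation $Gx = Ax - b = 0$ with $A$ symmetric positive definite; the step size, averaging parameter, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
A = M.T @ M / n + np.eye(n)      # symmetric positive definite
b = rng.standard_normal(n)

def G(x):
    # Linear co-coercive operator; its unique root is A^{-1} b.
    return A @ x - b

L_const = np.linalg.eigvalsh(A).max()  # Lipschitz constant of G
eta = 1.0 / L_const                    # step so that T = I - eta*G is nonexpansive
lam = 0.5                              # KM averaging parameter in (0, 1)

x = np.zeros(n)
for _ in range(2000):
    # Krasnoselkii-Mann step: x <- (1 - lam)*x + lam*T(x), with T = I - eta*G
    x = x - lam * eta * G(x)

assert np.linalg.norm(G(x)) < 1e-6  # the iterate approaches the root of G
```

The paper's contribution lies in replacing the exact evaluation of the finite-sum operator $G$ with unbiased variance-reduced estimators inside an accelerated single-loop variant; this plain deterministic loop only shows the fixed-point structure being accelerated.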
arXiv Detail & Related papers (2024-06-04T15:23:29Z) - Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation [67.8414514524356]
We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space. Despite its significant benefits, incorporating the non-linear function raises substantial challenges in both statistical and computational efficiency. We propose an algorithm that achieves a regret of $\widetilde{\mathcal{O}}(dH^2\sqrt{K} + \kappa^{-1}d^2H^2)$, eliminating the dependence on $\kappa^{-1}$ in the dominant term for the first time.
arXiv Detail & Related papers (2024-05-27T11:31:54Z) - Corruption-Robust Offline Reinforcement Learning with General Function
Approximation [60.91257031278004]
We investigate the problem of corruption in offline reinforcement learning (RL) with general function approximation.
Our goal is to find a policy that is robust to such corruption and minimizes the suboptimality gap with respect to the optimal policy for the uncorrupted Markov decision processes (MDPs).
arXiv Detail & Related papers (2023-10-23T04:07:26Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Scalable Primal-Dual Actor-Critic Method for Safe Multi-Agent RL with
General Utilities [12.104551746465932]
We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints.
Our algorithm converges to a first-order stationary point (FOSP) at the rate of $\mathcal{O}\left(T^{-2/3}\right)$.
In the sample-based setting, we demonstrate that, with high probability, our algorithm requires $\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)$ samples to achieve an $\epsilon$-FOSP.
arXiv Detail & Related papers (2023-05-27T20:08:35Z) - Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems [11.15373699918747]
We develop two new algorithms to approximate a solution of nonlinear equations in large-scale settings. We apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning, statistical learning, and network optimization.
arXiv Detail & Related papers (2023-01-08T21:46:27Z) - An Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization [25.00475462213752]
We propose the Decentralized Recursive descent Method (DREAM).
Concretely, it requires $\mathcal{O}\left(\min(\kappa^3\epsilon^{-3},\, \kappa^2 N)\right)$ stochastic first-order oracle (SFO) calls and $\tilde{\mathcal{O}}(\kappa^2 \epsilon^{-2})$ communication rounds.
Our numerical experiments validate the superiority of our method over previous ones.
arXiv Detail & Related papers (2022-12-05T16:09:39Z) - On the Complexity of Decentralized Smooth Nonconvex Finite-Sum Optimization [21.334985032433778]
We study the decentralized optimization problem $\min_{\mathbf{x}\in\mathbb{R}^d} f(\mathbf{x}) \triangleq \frac{1}{m}\sum_{i=1}^{m} f_i(\mathbf{x})$, where each local function is itself a finite sum, $f_i(\mathbf{x}) \triangleq \frac{1}{n}\sum_{j=1}^{n} f_{i,j}(\mathbf{x})$.
arXiv Detail & Related papers (2022-10-25T11:37:11Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ for the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Nearly Optimal Regret for Learning Adversarial MDPs with Linear Function
Approximation [92.3161051419884]
We study the reinforcement learning for finite-horizon episodic Markov decision processes with adversarial reward and full information feedback.
We show that it can achieve $\tilde{O}(dH\sqrt{T})$ regret, where $H$ is the length of the episode.
We also prove a matching lower bound of $\tilde{\Omega}(dH\sqrt{T})$ up to logarithmic factors.
arXiv Detail & Related papers (2021-02-17T18:54:08Z) - Safe Learning under Uncertain Objectives and Constraints [66.05180398174286]
We consider non-convex optimization problems under unknown yet safety-critical constraints.
Such problems naturally arise in a variety of domains including robotics, manufacturing, and medical procedures.
A crucial component of our analysis is to introduce and apply a technique called shrinkage in the context of safe optimization.
arXiv Detail & Related papers (2020-06-23T20:51:00Z) - Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDP with safe exploration in the function approximation setting.
arXiv Detail & Related papers (2020-03-01T17:47:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.