Improved device-independent randomness expansion rates using two-sided
randomness
- URL: http://arxiv.org/abs/2103.07504v3
- Date: Thu, 2 Mar 2023 20:04:13 GMT
- Title: Improved device-independent randomness expansion rates using two-sided
randomness
- Authors: Rutvij Bhavsar and Sammy Ragy and Roger Colbeck
- Abstract summary: A device-independent randomness expansion protocol aims to take an initial random string and generate a longer one.
We investigate the possible improvement that could be gained using the two-sided randomness.
We also consider a modified protocol in which the input randomness is recycled.
- Score: 3.4376560669160394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A device-independent randomness expansion protocol aims to take an initial
random string and generate a longer one, where the security of the protocol
does not rely on knowing the inner workings of the devices used to run it. In
order to do so, the protocol tests that the devices violate a Bell inequality
and one then needs to bound the amount of extractable randomness in terms of
the observed violation. The entropy accumulation theorem lower bounds the
extractable randomness of a protocol with many rounds in terms of the
single-round von Neumann entropy of any strategy achieving the observed score.
Tight bounds on the von Neumann entropy are known for the one-sided randomness
(i.e., where the randomness from only one party is used) when using the
Clauser-Horne-Shimony-Holt (CHSH) game. Here we investigate the possible
improvement that could be gained using the two-sided randomness. We generate
upper bounds on this randomness by attempting to find the optimal eavesdropping
strategy, providing analytic formulae in two cases. We additionally compute
lower bounds that outperform previous ones and can be made arbitrarily tight
(at the expense of more computation time). These bounds get close to our upper
bounds, and hence we conjecture that our upper bounds are tight. We also
consider a modified protocol in which the input randomness is recycled. This
modified protocol shows the possibility of rate gains of several orders of
magnitude based on recent experimental parameters, making device-independent
randomness expansion significantly more practical. It also enables the locality
loophole to be closed while expanding randomness in a way that typical
spot-checking protocols do not.
Related papers
- Local contextuality-based self-tests are sufficient for randomness expansion secure against quantum adversaries [0.0]
We show that local contextuality-based self-tests are sufficient to construct a randomness expansion protocol that is secure against unbounded quantum adversaries.
Our protocol is based on self-testing from non-contextuality inequalities and we prove that our scheme produces secure random numbers which are $\mathcal{O}(m\sqrt{\varepsilon})$-close to uniformly distributed and private.
arXiv Detail & Related papers (2024-09-30T08:31:46Z) - Incorporating Zero-Probability Constraints to Device-Independent
Randomness Expansion [11.765274200974774]
We explore various forms of randomness that are certifiable in the so-called device-independent (DI) paradigm.
In this work, we determine the certifiable randomness when zero-probability constraints are incorporated into the task of DI randomness expansion.
arXiv Detail & Related papers (2024-01-16T15:57:17Z) - Improvements on Device Independent and Semi-Device Independent Protocols
of Randomness Expansion [0.0]
Device Independent (DI) and Semi-Device Independent (semi-DI) protocols of randomness expansion are discussed.
We introduce enhanced DI and semi-DI protocols that surpass existing ones in terms of output randomness rate, security, or in some instances, both.
A notable contribution is the introduction of randomness expansion protocols that recycle input randomness, significantly enhancing finite round randomness rates for DI protocols based on the CHSH inequality violation.
arXiv Detail & Related papers (2023-11-22T17:03:04Z) - A Game-theoretic Approach for Provably-Uniform Random Number Generation in Decentralized Networks [0.6216023343793144]
We provide a protocol for distributed generation of randomness.
It is trustless and generates unbiased random numbers.
It is also tamper-proof and no party can change the output or affect its distribution.
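The classic building block for distributed randomness of this kind is a commit-reveal round: every party commits to a secret share before any share is opened, and the output is the XOR of all shares. The sketch below is a minimal, hypothetical illustration of that primitive only; plain commit-reveal is still biasable by a last revealer who aborts, which is exactly the gap the paper's game-theoretic incentives are designed to close.

```python
import hashlib
import secrets

def commit(share: bytes, nonce: bytes) -> bytes:
    """Hiding and binding commitment: SHA-256 of share || nonce."""
    return hashlib.sha256(share + nonce).digest()

n_parties = 3
shares = [secrets.token_bytes(32) for _ in range(n_parties)]
nonces = [secrets.token_bytes(16) for _ in range(n_parties)]

# Commit phase: every party publishes its commitment before any reveal.
commitments = [commit(s, r) for s, r in zip(shares, nonces)]

# Reveal phase: opened shares are checked against the commitments.
ok = all(commit(s, r) == c
         for s, r, c in zip(shares, nonces, commitments))

# Output: XOR of all shares -- uniform if at least one party's share is
# uniform and every committed share is actually revealed.
output = bytes(x ^ y ^ z for x, y, z in zip(*shares))
```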
arXiv Detail & Related papers (2023-09-20T12:21:39Z) - Testing randomness of series generated in Bell's experiment [62.997667081978825]
We use a toy fiber-optic setup to generate binary series, and evaluate their level of randomness according to Ville's principle.
Series are tested with a battery of standard statistical indicators: Hurst exponent, Kolmogorov complexity, minimum entropy, Takens' embedding dimension, and the Augmented Dickey-Fuller and Kwiatkowski-Phillips-Schmidt-Shin tests to check stationarity.
The level of randomness of series obtained by applying Toeplitz extractor to rejected series is found to be indistinguishable from the level of non-rejected raw ones.
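The Toeplitz extractor mentioned above is two-universal hashing: the raw n-bit string is multiplied over GF(2) by an m x n binary Toeplitz matrix, whose diagonals are filled from an (m + n - 1)-bit uniform seed. A minimal sketch (naive O(mn) multiplication; real implementations use FFT-based convolution):

```python
def toeplitz_extract(bits, seed, m):
    """Multiply an m x n binary Toeplitz matrix by the raw bit string
    over GF(2). The matrix is constant along diagonals and defined by
    an (m + n - 1)-bit seed via T[i][j] = seed[i - j + n - 1]."""
    n = len(bits)
    assert len(seed) == m + n - 1
    return [
        # XOR (sum mod 2) of matrix-entry AND input-bit products
        sum(seed[i - j + n - 1] & bits[j] for j in range(n)) % 2
        for i in range(m)
    ]
```

The output length m is chosen below the min-entropy of the raw string, as dictated by the leftover hash lemma.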
arXiv Detail & Related papers (2022-08-31T17:39:29Z) - Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z) - Algorithms for Adaptive Experiments that Trade-off Statistical Analysis
with Reward: Combining Uniform Random Assignment and Reward Maximization [50.725191156128645]
Multi-armed bandit algorithms like Thompson Sampling can be used to conduct adaptive experiments.
We present simulations for 2-arm experiments that explore two algorithms combining the benefits of uniform randomization for statistical analysis with reward maximization.
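One simple way to combine the two objectives, sketched here as an illustration rather than as the paper's specific algorithms, is an epsilon-mixture: with some probability assign uniformly at random (preserving a cleanly randomized stratum for inference), otherwise take the Beta-Bernoulli Thompson-sampling action. The arm means below are hypothetical.

```python
import random

def choose_arm(successes, failures, eps, rng):
    """With probability eps assign uniformly at random; otherwise draw a
    Beta(s+1, f+1) Thompson sample per arm and pick the argmax."""
    if rng.random() < eps:
        return rng.randrange(2)
    draws = [rng.betavariate(successes[a] + 1, failures[a] + 1)
             for a in range(2)]
    return 0 if draws[0] >= draws[1] else 1

rng = random.Random(0)
p = [0.3, 0.7]                       # hypothetical true arm means
successes, failures = [0, 0], [0, 0]
pulls = [0, 0]
for _ in range(2000):
    arm = choose_arm(successes, failures, eps=0.2, rng=rng)
    pulls[arm] += 1
    if rng.random() < p[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
```

With eps = 0.2, roughly 10% of rounds land on each arm by pure randomization regardless of the posterior, while the remaining rounds concentrate on the better arm.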
arXiv Detail & Related papers (2021-12-15T22:11:58Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy
Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
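The shuffle model the abstract refers to can be sketched in a few lines: each user applies a local randomizer (here, eps0-LDP randomized response on a bit; eps0 = 1.0 is an illustrative choice), and a trusted shuffler then discards the user-to-report mapping, so the analyzer sees only the multiset of reports and the central privacy guarantee is amplified well beyond eps0.

```python
import math
import random

def randomized_response(bit, eps0, rng):
    """eps0-locally-private randomized response: keep the true bit with
    probability e^eps0 / (e^eps0 + 1), otherwise flip it."""
    keep = math.exp(eps0) / (math.exp(eps0) + 1.0)
    return bit if rng.random() < keep else 1 - bit

rng = random.Random(1)
data = [rng.randrange(2) for _ in range(1000)]
reports = [randomized_response(b, eps0=1.0, rng=rng) for b in data]

# The shuffler breaks the link between users and reports; only the
# shuffled multiset is released to the analyzer.
rng.shuffle(reports)
```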
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - On the Optimality of Randomization in Experimental Design: How to
Randomize for Minimax Variance and Design-Based Inference [58.442274475425144]
I study the minimax-optimal design for a two-arm controlled experiment where conditional mean outcomes may vary in a given set.
The optimal design is shown to be the mixed-strategy optimal design (MSOD) of Kallus.
I therefore propose the inference-constrained MSOD, which is minimax-optimal among all designs subject to such constraints.
arXiv Detail & Related papers (2020-05-06T21:43:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.