Generalised entropy accumulation
- URL: http://arxiv.org/abs/2203.04989v2
- Date: Fri, 28 Oct 2022 14:04:52 GMT
- Title: Generalised entropy accumulation
- Authors: Tony Metger, Omar Fawzi, David Sutter, Renato Renner
- Abstract summary: This paper proves a generalisation of the entropy accumulation theorem (EAT).
Due to its more general model of side-information, our generalised EAT can be applied more easily and to a broader range of cryptographic protocols.
As examples, we give the first multi-round security proof for blind randomness expansion and a simplified analysis of the E91 QKD protocol.
- Score: 14.595665255353206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consider a sequential process in which each step outputs a system $A_i$ and
updates a side information register $E$. We prove that if this process
satisfies a natural "non-signalling" condition between past outputs and future
side information, the min-entropy of the outputs $A_1, \dots, A_n$ conditioned
on the side information $E$ at the end of the process can be bounded from below
by a sum of von Neumann entropies associated with the individual steps. This is
a generalisation of the entropy accumulation theorem (EAT), which deals with a
more restrictive model of side information: there, past side information cannot
be updated in subsequent rounds, and newly generated side information has to
satisfy a Markov condition. Due to its more general model of side-information,
our generalised EAT can be applied more easily and to a broader range of
cryptographic protocols. As examples, we give the first multi-round security
proof for blind randomness expansion and a simplified analysis of the E91 QKD
protocol. The proof of our generalised EAT relies on a new variant of Uhlmann's
theorem and new chain rules for the Rényi divergence and entropy, which might
be of independent interest.
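In schematic form, and suppressing the smoothing parameter and the second-order correction term that the paper makes precise, the accumulation bound reads
$$ H_{\min}^{\varepsilon}(A_1 \dots A_n \mid E) \;\gtrsim\; \sum_{i=1}^{n} \inf_{\omega} H(A_i \mid E)_{\mathcal{M}_i(\omega)}, $$
where $\mathcal{M}_i$ is the channel implementing step $i$ and the infimum runs over admissible input states; the correction term is of order $\sqrt{n}$, as in the original EAT. This rendering is an informal sketch of the abstract's claim, not the theorem as stated in the paper.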
Related papers
- More on the Operator Space Entanglement (OSE): Rényi OSE, revivals, and integrability breaking [0.0]
We investigate the dynamics of the Rényi Operator Space Entanglement (OSE) entropies $S_n$ across several one-dimensional integrable and chaotic models.
Our numerical results reveal that the Rényi OSE entropies of diagonal operators with nonzero trace saturate at long times.
In finite-size integrable systems, $S_n$ exhibit strong revivals, which are washed out when integrability is broken.
arXiv Detail & Related papers (2024-10-24T17:17:29Z) - Non-asymptotic bounds for forward processes in denoising diffusions: Ornstein-Uhlenbeck is hard to beat [49.1574468325115]
This paper presents explicit non-asymptotic bounds on the forward diffusion error in total variation (TV).
We parametrise multi-modal data distributions in terms of the distance $R$ to their furthest modes and consider forward diffusions with additive and multiplicative noise.
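For context, the Ornstein-Uhlenbeck forward process has a closed-form Gaussian marginal, so no step-by-step simulation is needed. Below is a minimal, illustrative sketch (not the paper's code) of one-shot sampling under the standard normalisation $\mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t$:

```python
import numpy as np

def ou_forward_sample(x0: np.ndarray, t: float, rng: np.random.Generator) -> np.ndarray:
    """Sample X_t | X_0 = x0 for the OU process dX = -X dt + sqrt(2) dW.

    The marginal is Gaussian: X_t ~ N(exp(-t) * x0, (1 - exp(-2t)) * I).
    """
    mean = np.exp(-t) * x0
    std = np.sqrt(1.0 - np.exp(-2.0 * t))
    return mean + std * rng.standard_normal(x0.shape)
```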
arXiv Detail & Related papers (2024-08-25T10:28:31Z) - Mutual information chain rules for security proofs robust against device imperfections [0.0]
We analyze quantum cryptography with imperfect devices that leak additional information to an adversary.
We show that the resulting mutual information chain rules can be used to handle some device imperfections in a variety of device-dependent and device-independent protocols.
arXiv Detail & Related papers (2024-07-29T19:47:47Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space in which to model the functions realised by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation [53.17668583030862]
We study infinite-horizon average-reward Markov decision processes (AMDPs) in the context of general function approximation.
We propose a novel algorithmic framework named Local-fitted Optimization with OPtimism (LOOP)
We show that LOOP achieves a sublinear $\tilde{\mathcal{O}}(\mathrm{poly}(d, \mathrm{sp}(V^*))\sqrt{T\beta})$ regret, where $d$ and $\beta$ correspond to AGEC and the log-covering number of the hypothesis class, respectively.
arXiv Detail & Related papers (2024-04-19T06:24:22Z) - On Entropy Growth in Perturbative Scattering [0.0]
We study the change in subsystem entropy generated by dynamical unitary evolution of a product state in a bipartite system.
Remarkably, for the case of particle scattering, the circuit diagrams corresponding to $n$-Tsallis entropy are the same as the on-shell diagrams.
arXiv Detail & Related papers (2023-04-25T18:00:01Z) - DASHA: Distributed Nonconvex Optimization with Communication
Compression, Optimal Oracle Complexity, and No Client Synchronization [77.34726150561087]
We develop and analyze DASHA: a new family of methods for nonconvex distributed optimization problems.
Unlike MARINA, the new methods DASHA, DASHA-MVR send compressed vectors only and never synchronize the nodes, which makes them more practical for learning.
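To make "send compressed vectors only" concrete, here is a minimal sketch of an unbiased Rand-$k$ sparsifier, one standard compressor used with methods of this family (the choice of Rand-$k$ here is illustrative, not a claim about DASHA's implementation):

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased Rand-k compression: keep k random coordinates, zero the rest.

    Scaling the survivors by d/k makes the compressor unbiased:
    E[rand_k(x)] = x. Assumes a float-valued input vector.
    """
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out
```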
arXiv Detail & Related papers (2022-02-02T20:10:40Z) - Bounds on semi-device-independent quantum random number expansion
capabilities [0.0]
It's explicitly proved that the maximum certifiable entropy that can be obtained through this set of protocols is $-\log\left[\frac{1}{2}\left(1+\frac{1}{\sqrt{3}}\right)\right]$.
It's also established that certifiable entropy can be generated as soon as the dimension witness crosses the classical bound, making the protocol noise-robust and useful in practical applications.
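For a sense of scale, the bound above can be evaluated numerically; the snippet below assumes the logarithm is base 2 (entropy in bits), which the paper should be consulted to confirm:

```python
import math

# -log2[(1/2) * (1 + 1/sqrt(3))] -- the certifiable-entropy bound quoted above.
h = -math.log2(0.5 * (1.0 + 1.0 / math.sqrt(3.0)))
print(f"{h:.4f} bits")  # prints: 0.3425 bits
```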
arXiv Detail & Related papers (2021-11-28T08:54:49Z) - Generalized Entropy Regularization or: There's Nothing Special about
Label Smoothing [83.78668073898001]
We introduce a family of entropy regularizers, which includes label smoothing as a special case.
We find that variance in model performance can be explained largely by the resulting entropy of the model.
We advise the use of other entropy regularization methods in its place.
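To see why label smoothing sits inside an entropy-regularizer family, here is a minimal, self-contained sketch (illustrative only, not the paper's formulation): smoothing the one-hot target toward the uniform distribution penalises low-entropy predictive distributions.

```python
import numpy as np

def log_softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def label_smoothing_loss(logits: np.ndarray, label: int, alpha: float = 0.1) -> float:
    """Cross-entropy against a smoothed target distribution.

    The target mixes the one-hot label with the uniform distribution:
    q = (1 - alpha) * one_hot(label) + alpha * uniform.
    """
    k = logits.shape[0]
    q = np.full(k, alpha / k)
    q[label] += 1.0 - alpha
    return float(-(q * log_softmax(logits)).sum())
```

With $\alpha = 0$ this is ordinary cross-entropy; $\alpha > 0$ adds the uniform-mixing term, which is the label-smoothing special case of the broader family the paper studies.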
arXiv Detail & Related papers (2020-05-02T12:46:28Z) - From Information Theory Puzzles in Deletion Channels to Deniability in
Quantum Cryptography [0.0]
We first conjecture on the basis of experimental data that the entropy of the posterior is minimized by the constant strings.
We then establish a connection between covert communication and deniability to propose DC-QKE.
We present an efficient coercion-resistant and quantum-secure voting scheme, based on fully homomorphic encryption.
arXiv Detail & Related papers (2020-03-25T22:20:47Z)