Cryptographic tests of the python's lunch conjecture
- URL: http://arxiv.org/abs/2411.10527v1
- Date: Fri, 15 Nov 2024 19:01:25 GMT
- Title: Cryptographic tests of the python's lunch conjecture
- Authors: Alex May, Sabrina Pasterski, Chris Waddell, Michelle Xu
- Abstract summary: We argue that the python's lunch (PL) conjecture has implications for correlation in the boundary CFT.
We study further examples in AdS$_{2+1}$ including the defect geometry, BTZ black hole, and a geometry with a static end-of-the-world (ETW) brane.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the AdS/CFT correspondence, subregions of the CFT allow for the recovery of subregions of the bulk. The python's lunch (PL) conjecture asserts that in some settings this reconstruction has a large complexity of $\text{poly}(1/G_N)e^{\Delta A_{\text{PL}}/8G_N}$, where $\Delta A_{\text{PL}}$ is an area difference of bulk extremal surfaces. Using tools from cryptography, we argue that the PL conjecture has implications for correlation in the boundary CFT. We find that the mutual information between appropriate CFT subregions is lower bounded linearly by $\Delta A_{\text{PL}}$. Recalling that the mutual information is also computed by bulk extremal surfaces, this gives an easily checked geometrical consequence of the PL conjecture. We consider explicit python's lunch geometries and study when this prediction holds. In an example constructed in vacuum AdS$_{2+1}$, we find there is a tension between the expected behaviour of correlation and a naive statement of the PL conjecture; we discuss how this tension may be resolved through refinements of the conjecture. We study further examples in AdS$_{2+1}$ including the defect geometry, BTZ black hole, and a geometry with a static end-of-the-world (ETW) brane, and find agreement with expected behaviour assuming the PL conjecture.
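Schematically, the two quantitative statements of the abstract can be written as follows; the constant $\alpha$ in the second line is a placeholder for "lower bounded linearly", not a value stated in the abstract:

```latex
% Conjectured reconstruction complexity (python's lunch):
\mathcal{C} \sim \mathrm{poly}(1/G_N)\, e^{\Delta A_{\mathrm{PL}}/8 G_N}

% Claimed consequence: the mutual information between appropriate
% CFT subregions A and B is lower bounded linearly in the area difference:
I(A : B) \;\geq\; \alpha\, \frac{\Delta A_{\mathrm{PL}}}{G_N}, \qquad \alpha > 0
```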
Related papers
- Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture [22.225213114532533]
We show that the input space curvature of a neural network remains invariant.
We also present experimental results to observe the consequences of GIH.
arXiv Detail & Related papers (2024-10-15T19:46:09Z) - A Max-Flow approach to Random Tensor Networks [0.40964539027092906]
We study the entanglement entropy of a random tensor network (RTN) using tools from free probability theory.
One can think of random tensor networks as specific probabilistic models for tensors with a particular geometry dictated by a graph (or network) structure.
arXiv Detail & Related papers (2024-07-02T18:00:01Z) - Grounding Continuous Representations in Geometry: Equivariant Neural Fields [26.567143650213225]
We propose a novel CNF architecture which uses a geometry-informed cross-attention to condition the NeF on a geometric variable.
We show that this approach induces a steerability property by which both field and latent are grounded in geometry.
We validate these main properties in a range of tasks including classification, segmentation, forecasting, reconstruction and generative modelling.
arXiv Detail & Related papers (2024-06-09T12:16:30Z) - Efficient Sampling on Riemannian Manifolds via Langevin MCMC [51.825900634131486]
We study the task of efficiently sampling from a Gibbs distribution $d\pi^* = e^{-h}\, d\mathrm{vol}_g$ over a Riemannian manifold $M$ via (geometric) Langevin MCMC.
Our results apply to general settings where $\pi^*$ can be non-exponential and $M$ can have negative Ricci curvature.
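For context, the Euclidean unadjusted Langevin step that the geometric version generalizes can be sketched as below; the step size `eta` and potential `h` are illustrative choices, not taken from the paper, and the Riemannian algorithm would additionally incorporate the metric $g$:

```python
import numpy as np

def langevin_step(x, grad_h, eta, rng):
    """One unadjusted Langevin step targeting pi*(x) ∝ exp(-h(x))."""
    noise = rng.standard_normal(x.shape)
    return x - eta * grad_h(x) + np.sqrt(2.0 * eta) * noise

# Illustration: sample from a standard Gaussian, where h(x) = ||x||^2 / 2
# and hence grad_h(x) = x.
rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(1000):
    x = langevin_step(x, grad_h=lambda x: x, eta=0.01, rng=rng)
```

Without a Metropolis correction this chain samples the target only up to a discretization bias of order `eta`.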
arXiv Detail & Related papers (2024-02-15T22:59:14Z) - Sharp Rates in Dependent Learning Theory: Avoiding Sample Size Deflation for the Square Loss [33.18537822803389]
We show that whenever the topologies of $L^2$ and $\Psi_p$ are comparable on our hypothesis class $\mathscr{F}$, $\mathscr{F}$ is a weakly sub-Gaussian class.
Our result holds whether the problem is realizable or not; we refer to this as a near mixing-free rate, since direct dependence on mixing is relegated to an additive higher-order term.
arXiv Detail & Related papers (2024-02-08T18:57:42Z) - Information-Theoretic Thresholds for Planted Dense Cycles [52.076657911275525]
We study a random graph model for small-world networks which are ubiquitous in social and biological sciences.
For both detection and recovery of the planted dense cycle, we characterize the information-theoretic thresholds in terms of $n$, $\tau$, and an edge-wise signal-to-noise ratio $\lambda$.
arXiv Detail & Related papers (2024-02-01T03:39:01Z) - From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity [54.01594785269913]
We show that optimal weights of deep ReLU neural networks are given by the wedge product of training samples when trained with standard regularized loss.
The training problem reduces to convex optimization over wedge product features, which encode the geometric structure of the training dataset.
arXiv Detail & Related papers (2023-09-28T15:19:30Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimension modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Effective Minkowski Dimension of Deep Nonparametric Regression: Function
Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion -- effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - Planted Bipartite Graph Detection [13.95780443241133]
We consider the task of detecting a hidden bipartite subgraph in a given random graph.
Under the null hypothesis, the graph is a realization of an Erdős–Rényi random graph over $n$ vertices with edge density $q$.
Under the alternative, there exists a planted $k_{\mathsf{R}} \times k_{\mathsf{L}}$ bipartite subgraph with edge density $p>q$.
arXiv Detail & Related papers (2023-02-07T18:18:17Z) - Toward random tensor networks and holographic codes in CFT [0.0]
In spherically symmetric states in any dimension and more general states in 2d CFT, this leads to a holographic error-correcting code.
The code is shown to be isometric for light operators outside the horizon, and non-isometric inside.
The transition at the horizon occurs due to a subtle breakdown of the Virasoro identity block approximation in states with a complex interior.
arXiv Detail & Related papers (2023-02-05T18:16:02Z) - The connected wedge theorem and its consequences [0.0]
We prove the $n$-to-$n$ connected wedge theorem, which considers $n$ input and $n$ output locations at the boundary of an asymptotically AdS$_{2+1}$ spacetime.
The proof holds in three bulk dimensions satisfying the null curvature condition and for semiclassical spacetimes satisfying standard conjectures.
It also has consequences for quantum information theory: it reveals one pattern of entanglement which is sufficient for information processing in a particular class of causal networks.
arXiv Detail & Related papers (2022-09-30T18:00:04Z) - Entanglement Renormalization of a $T\bar{T}$-deformed CFT [0.0]
We find a Gaussian approximation to the ground state of a $T\bar{T}$-deformed scalar CFT on the line.
We discuss the non-localities induced by the $T\bar{T}$-deformation at short length scales.
arXiv Detail & Related papers (2022-03-01T09:50:31Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048]
We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
Compared with prior Lipschitz regularized networks, ours is computationally fast and can be implemented in four lines of code.
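As a hedged illustration (not the paper's exact implementation), one common form of Lipschitz regularization penalizes the product of per-layer weight-matrix spectral norms, which upper-bounds the Lipschitz constant of an MLP with 1-Lipschitz activations; the layer shapes and coefficient `alpha` below are placeholders:

```python
import numpy as np

def lipschitz_bound(weights):
    """Upper bound on the Lipschitz constant of an MLP with 1-Lipschitz
    activations: the product of per-layer spectral norms."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

def lipschitz_penalty(weights, alpha=1e-4):
    """Regularization term added to the task loss to encourage smoothness."""
    return alpha * lipschitz_bound(weights)

# Toy two-layer network.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((1, 8))]
penalty = lipschitz_penalty(weights)
```

In training, `penalty` would simply be added to the task loss, trading fit against smoothness via `alpha`.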
arXiv Detail & Related papers (2022-02-16T21:24:54Z) - A singular Riemannian geometry approach to Deep Neural Networks II.
Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural networks maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z) - Quantum double aspects of surface code models [77.34726150561087]
We revisit the Kitaev model for fault tolerant quantum computing on a square lattice with underlying quantum double $D(G)$ symmetry.
We show how our constructions generalise to $D(H)$ models based on a finite-dimensional Hopf algebra $H$.
arXiv Detail & Related papers (2021-06-25T17:03:38Z) - Tensor-Train Networks for Learning Predictive Modeling of
Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks with the aim of performing a powerful compact representation.
An algorithm based on alternating least squares has been proposed for approximating the weights in TT-format with a reduction of computational power.
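As an illustrative sketch (with hypothetical shapes, not the paper's ALS algorithm), a tensor in TT-format is stored as a chain of 3-way cores, and a single entry is recovered by multiplying the corresponding core slices:

```python
import numpy as np

def tt_entry(cores, index):
    """Entry T[i1,...,id] of a TT-format tensor: product of core slices.
    Each core has shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1."""
    v = np.ones((1, 1))
    for core, i in zip(cores, index):
        v = v @ core[:, i, :]  # contract along the TT bond dimension
    return v[0, 0]

# Toy example: a rank-1 TT representing T[i, j] = a[i] * b[j].
a, b = np.array([1.0, 2.0]), np.array([3.0, 4.0])
cores = [a.reshape(1, 2, 1), b.reshape(1, 2, 1)]
value = tt_entry(cores, (1, 0))  # a[1] * b[0] = 6.0
```

Storage scales with the bond ranks $r_k$ rather than exponentially in the number of modes, which is what makes the compact regression representation possible.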
arXiv Detail & Related papers (2021-01-22T16:14:38Z) - Average-case Complexity of Teaching Convex Polytopes via Halfspace
Queries [55.28642461328172]
We show that the average-case teaching complexity is $\Theta(d)$, which is in sharp contrast to the worst-case teaching complexity of $\Theta(n)$.
Our insights allow us to establish a tight bound on the average-case complexity for $\phi$-separable dichotomies.
arXiv Detail & Related papers (2020-06-25T19:59:24Z) - Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB).
arXiv Detail & Related papers (2020-06-17T22:32:19Z) - Controlled Mather-Thurston theorems [0.0]
The motivation is to lay mathematical foundations for a physical program.
The goal is to find a duality under which curvature terms, such as Maxwell's $F \wedge F^{\ast}$ and Hilbert's $\int R\, d\mathrm{vol}$, are replaced by an action which measures such "distortions".
arXiv Detail & Related papers (2020-05-30T21:53:57Z) - Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.