A learning problem whose consistency is equivalent to the non-existence
of real-valued measurable cardinals
- URL: http://arxiv.org/abs/2005.01886v1
- Date: Mon, 4 May 2020 23:40:28 GMT
- Title: A learning problem whose consistency is equivalent to the non-existence
of real-valued measurable cardinals
- Authors: Vladimir G. Pestov
- Abstract summary: We show that the $k$-NN classifier is universally consistent in every metric space whose separable subspaces are sigma-finite dimensional if and only if there are no real-valued measurable cardinals.
Results were inspired by an example sketched by Cérou and Guyader in 2006 at an intuitive level of rigour.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that the $k$-nearest neighbour learning rule is universally
consistent in a metric space $X$ if and only if it is universally consistent in
every separable subspace of $X$ and the density of $X$ is less than every
real-measurable cardinal. In particular, the $k$-NN classifier is universally
consistent in every metric space whose separable subspaces are sigma-finite
dimensional in the sense of Nagata and Preiss if and only if there are no
real-valued measurable cardinals. The latter assumption is relatively
consistent with ZFC; however, the consistency of the existence of such cardinals
cannot be proved within ZFC. Our results were inspired by an example sketched
by Cérou and Guyader in 2006 at an intuitive level of rigour.
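For readers who want the object of study pinned down concretely, the following is a minimal sketch of the $k$-nearest neighbour classification rule in an arbitrary metric space. The function and variable names are illustrative and not taken from the paper; ties are broken by index order, which is only one of several possible conventions (the treatment of distance ties is precisely what makes the general metric case delicate).

```python
# Minimal sketch of the k-nearest neighbour classification rule in a
# general metric space.  The metric is supplied by the caller; names
# (knn_predict, metric) are illustrative and not taken from the paper.
from collections import Counter

def knn_predict(x, sample, labels, k, metric):
    """Predict a label for x from a labelled sample ((x_1, y_1), ..., (x_n, y_n)).

    sample : list of points in the metric space
    labels : list of labels y_i
    k      : number of neighbours
    metric : callable d(a, b) -> float
    """
    # Sort the sample by distance to the query point; ties are broken
    # by index order, one of several possible conventions.
    order = sorted(range(len(sample)), key=lambda i: (metric(x, sample[i]), i))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Example: points on the real line with the usual metric.
if __name__ == "__main__":
    pts = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
    ys = ["a", "a", "a", "b", "b", "b"]
    print(knn_predict(0.15, pts, ys, k=3, metric=lambda a, b: abs(a - b)))  # -> "a"
```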
Related papers
- Entanglement in Typical States of Chern-Simons Theory [0.0]
We compute various averages over bulk geometries of quantum states prepared by the Chern-Simons path integral.
We find that the typical state is unentangled across any bipartition of the tori defining the boundary Hilbert space.
We show that this averaged state is a separable state, which implies that different boundary tori only share classical correlations for complex enough bulk geometries.
arXiv Detail & Related papers (2025-03-27T18:11:55Z) - A Theory of Interpretable Approximations [61.90216959710842]
We study the idea of approximating a target concept $c$ by a small aggregation of concepts from some base class $\mathcal{H}$.
For any given pair of $\mathcal{H}$ and $c$, exactly one of a small number of cases holds; in particular, one possibility is that $c$ cannot be approximated by $\mathcal{H}$ with arbitrary accuracy.
We show that, in the case of interpretable approximations, even a slightly nontrivial a-priori guarantee on the complexity of approximations implies approximations with constant (distribution-free and accuracy-independent) complexity.
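As a toy illustration of approximating a target concept by a small aggregation of base-class concepts, the sketch below approximates an interval on the line by a conjunction of two threshold functions. The base class and the aggregation rule are our illustrative choices, not the paper's setting.

```python
# Toy illustration: the target concept is the interval [0.3, 0.7) on [0, 1],
# the base class consists of threshold functions x -> 1{x >= t}, and the
# aggregation is a conjunction of two thresholds.  Purely illustrative; the
# paper's notion of aggregation and its case analysis are more general.
import numpy as np

def threshold(t):
    return lambda x: x >= t

def between(h_low, h_high):
    # AND of h_low and the negation of h_high: represents [t_low, t_high).
    return lambda x: h_low(x) & ~h_high(x)

if __name__ == "__main__":
    target = lambda x: (x >= 0.3) & (x < 0.7)
    approx = between(threshold(0.3), threshold(0.7))
    xs = np.linspace(0.0, 1.0, 10_001)
    agreement = np.mean(target(xs) == approx(xs))
    print(f"agreement with target: {agreement:.4f}")  # 1.0 on this grid
```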
arXiv Detail & Related papers (2024-06-15T06:43:45Z) - Metric Dimension and Resolvability of Jaccard Spaces [49.1574468325115]
A subset of points in a metric space is said to resolve it if each point in the space is uniquely characterized by its distance to each point in the subset.
We show that a much smaller subset of $2^X$ suffices to resolve, with high probability, all different pairs of subsets of cardinality at most $\sqrt{|X|}/\ln|X|$, up to a factor.
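A minimal sketch of the resolvability notion, checked by brute force on a small Jaccard space: a subset resolves the space when every point has a distinct vector of distances to it. The helper names and the brute-force check are illustrative, not taken from the paper.

```python
# Brute-force check of resolvability in a finite metric space: a subset R
# resolves X if the distance profile (d(x, r) for r in R) is distinct for
# every x in X.  Illustrative sketch only; not code from the paper.
from itertools import combinations

def resolves(points, subset, metric):
    profiles = [tuple(metric(x, r) for r in subset) for x in points]
    return len(set(profiles)) == len(points)

def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

if __name__ == "__main__":
    universe = range(5)
    # The space: all subsets of {0, ..., 4} with the Jaccard distance.
    space = [frozenset(c) for r in range(6) for c in combinations(universe, r)]
    # Candidate resolving set: the singletons plus the empty set.
    landmarks = [frozenset({i}) for i in universe] + [frozenset()]
    print(resolves(space, landmarks, jaccard))  # True
```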
arXiv Detail & Related papers (2024-05-19T02:09:50Z) - Completely entangled subspaces of entanglement depth $k$ [0.0]
We introduce a class of entangled subspaces: completely entangled subspaces of entanglement depth $k$ ($k$-CESs).
We discuss the relation between these subspaces and unextendible product bases (UPBs).
arXiv Detail & Related papers (2023-12-13T19:25:01Z) - Dimension-free Remez Inequalities and norm designs [48.5897526636987]
A class of domains $X$ and test sets $Y$ -- termed norm designs -- enjoys dimension-free Remez-type estimates.
We show that the supremum of $f$ does not increase by more than $\mathcal{O}(\log K)^{2d}$ when $f$ is extended to the polytorus.
arXiv Detail & Related papers (2023-10-11T22:46:09Z) - Universal consistency of the $k$-NN rule in metric spaces and Nagata dimension. II [0.0]
We show that the rule is strongly universally consistent in such spaces in the absence of ties.
One deduces that the $k$-NN rule is universally consistent in metric spaces having finite dimension in the sense of de Groot.
arXiv Detail & Related papers (2023-05-26T22:01:47Z) - Constructions of $k$-uniform states in heterogeneous systems [65.63939256159891]
We present two general methods to construct $k$-uniform states in heterogeneous systems for general $k$.
We can produce many new $k$-uniform states such that the local dimension of each subsystem can be a prime power.
arXiv Detail & Related papers (2023-05-22T06:58:16Z) - Unextendibility, uncompletability, and many-copy indistinguishable
ensembles [77.34726150561087]
We study unextendibility, uncompletability and analyze their connections to many-copy indistinguishable ensembles.
We report a class of multipartite many-copy indistinguishable ensembles for which the local indistinguishability property increases with decreasing mixedness.
arXiv Detail & Related papers (2023-03-30T16:16:41Z) - A Nearly Tight Bound for Fitting an Ellipsoid to Gaussian Random Points [50.90125395570797]
This nearly establishes a conjecture of Saunderson et al. (2012), within logarithmic factors.
The latter conjecture has attracted significant attention over the past decade, due to its connections to machine learning and sum-of-squares lower bounds for certain statistical problems.
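As an illustration of the underlying problem only (not the paper's proof technique), the following sketch tries to fit an ellipsoid $v_i^\top A v_i = 1$ to Gaussian points by perturbing the identity matrix with a minimum-norm correction and then inspecting whether the result is positive semidefinite. All names and the heuristic construction are ours.

```python
# Illustrative sketch of the ellipsoid-fitting problem: given points
# v_1, ..., v_n ~ N(0, I_d), look for a symmetric matrix A with
# v_i^T A v_i = 1 for all i and A positive semidefinite.  Here we try the
# simple "identity perturbation" A = I + Delta, with Delta the minimum-norm
# solution of the interpolation constraints; this is only one heuristic,
# not the argument used in the paper.
import numpy as np

def fit_ellipsoid(V):
    n, d = V.shape
    # Constraint on Delta: v_i^T Delta v_i = 1 - ||v_i||^2.
    M = np.stack([np.outer(v, v).ravel() for v in V])      # n x d^2
    b = 1.0 - np.einsum("ij,ij->i", V, V)
    delta_vec, *_ = np.linalg.lstsq(M, b, rcond=None)      # minimum-norm solution
    Delta = delta_vec.reshape(d, d)
    return np.eye(d) + (Delta + Delta.T) / 2.0             # symmetrize

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 20, 60                                           # n well below d^2 / 4
    V = rng.standard_normal((n, d))
    A = fit_ellipsoid(V)
    print("constraints satisfied:", np.allclose([v @ A @ v for v in V], 1.0))
    print("min eigenvalue:", np.linalg.eigvalsh(A).min())   # PSD iff >= 0
```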
arXiv Detail & Related papers (2022-12-21T17:48:01Z) - Differentially-Private Bayes Consistency [70.92545332158217]
We construct a Bayes consistent learning rule that satisfies differential privacy (DP).
We prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal sample complexity.
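As a reminder of what the differential-privacy constraint involves, here is a minimal Laplace-mechanism sketch for releasing label counts. It illustrates the privacy primitive only and is not the learning rule constructed in the paper; the function names are ours.

```python
# Minimal sketch of a differentially private release of label counts via the
# Laplace mechanism.  Under the add/remove-one-record notion of neighbouring
# datasets, the histogram of counts has L1 sensitivity 1, so adding
# Laplace(1/epsilon) noise to each count gives epsilon-DP.  This illustrates
# the DP constraint itself, not the paper's learning rule.
import numpy as np

def private_counts(labels, classes, epsilon, rng):
    """Return Laplace-noised label counts."""
    noisy = {}
    for c in classes:
        true_count = sum(1 for y in labels if y == c)
        noisy[c] = true_count + rng.laplace(scale=1.0 / epsilon)
    return noisy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = ["spam"] * 40 + ["ham"] * 60
    print(private_counts(labels, ["spam", "ham"], epsilon=1.0, rng=rng))
```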
arXiv Detail & Related papers (2022-12-08T11:57:30Z) - Mutually unbiased frames [0.0]
The concept of mutually unbiased frames is introduced as the most general notion of unbiasedness for sets composed of linearly independent and normalized vectors.
It encompasses the existing notions of unbiasedness for orthonormal bases, regular simplices, equiangular tight frames, and positive operator-valued measures, and also includes symmetric informationally complete quantum measurements.
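For orthonormal bases, unbiasedness means $|\langle e_i, f_j\rangle|^2 = 1/d$ for all $i, j$; a quick numerical check for the standard and discrete Fourier bases is sketched below. This is only the orthonormal-basis special case; the frame condition in the paper is broader, and the function name is ours.

```python
# Numerical check that the computational basis and the discrete Fourier
# basis of C^d are mutually unbiased: |<e_i, f_j>|^2 = 1/d for all i, j.
# Orthonormal-basis special case only; the general frame notion is broader.
import numpy as np

def are_mutually_unbiased(B1, B2, tol=1e-10):
    """B1, B2: matrices whose columns are the basis vectors."""
    d = B1.shape[0]
    overlaps = np.abs(B1.conj().T @ B2) ** 2
    return np.allclose(overlaps, 1.0 / d, atol=tol)

if __name__ == "__main__":
    d = 5
    standard = np.eye(d, dtype=complex)
    fourier = np.array([[np.exp(2j * np.pi * j * k / d) / np.sqrt(d)
                         for k in range(d)] for j in range(d)])
    print(are_mutually_unbiased(standard, fourier))  # True
```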
arXiv Detail & Related papers (2021-10-15T18:04:20Z) - Universal consistency of Wasserstein $k$-NN classifier: Negative and
Positive Results [0.0]
The Wasserstein distance provides a notion of dissimilarity between probability measures.
We show that the $k$-NN classifier is not universally consistent on the space of measures supported in $(0,1)$.
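A minimal sketch of the setting: probability measures represented by empirical samples, compared with the 1-Wasserstein distance and classified by the $k$-NN rule. The use of scipy's wasserstein_distance and the toy data are our illustrative choices, not the paper's construction.

```python
# Sketch of a k-NN classifier over probability measures compared with the
# 1-Wasserstein distance.  Measures are represented by empirical samples;
# this illustrates the setting, not a construction from the paper.
from collections import Counter

import numpy as np
from scipy.stats import wasserstein_distance

def knn_predict(query, measures, labels, k):
    dists = [wasserstein_distance(query, m) for m in measures]
    order = np.argsort(dists)[:k]
    return Counter(labels[i] for i in order).most_common(1)[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Class "low": samples from Beta(2, 5); class "high": samples from Beta(5, 2).
    measures = [rng.beta(2, 5, 200) for _ in range(10)] + \
               [rng.beta(5, 2, 200) for _ in range(10)]
    labels = ["low"] * 10 + ["high"] * 10
    query = rng.beta(2, 5, 200)
    print(knn_predict(query, measures, labels, k=3))  # expected: "low"
```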
arXiv Detail & Related papers (2020-09-10T03:05:05Z) - Metrizing Weak Convergence with Maximum Mean Discrepancies [88.54422104669078]
This paper characterizes the maximum mean discrepancies (MMD) that metrize the weak convergence of probability measures for a wide class of kernels.
We prove that, on a locally compact, non-compact, Hausdorff space, the MMD of a bounded continuous Borel measurable kernel $k$ metrizes the weak convergence of probability measures if and only if $k$ is continuous.
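For reference, the (biased) empirical estimate of the squared MMD between samples $X$ and $Y$ is $\frac{1}{n^2}\sum_{i,j} k(x_i,x_j) + \frac{1}{m^2}\sum_{i,j} k(y_i,y_j) - \frac{2}{nm}\sum_{i,j} k(x_i,y_j)$; a small computation with a bounded continuous Gaussian kernel, the kind of kernel within the scope of the result quoted above, is sketched below. The bandwidth choice and names are ours.

```python
# Biased empirical estimate of the squared MMD between two samples for a
# Gaussian kernel: MMD^2 = mean k(x, x') + mean k(y, y') - 2 mean k(x, y).
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mmd_squared(X, Y, bandwidth=1.0):
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kyy = gaussian_kernel(Y, Y, bandwidth).mean()
    kxy = gaussian_kernel(X, Y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 2))            # N(0, I)
    Y = rng.standard_normal((500, 2)) + 1.0      # N((1, 1), I)
    print("MMD^2(X, Y)  =", mmd_squared(X, Y))   # clearly positive
    print("MMD^2(X, X') =", mmd_squared(X, rng.standard_normal((500, 2))))  # near 0
```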
arXiv Detail & Related papers (2020-06-16T15:49:33Z) - Universal consistency of the $k$-NN rule in metric spaces and Nagata
dimension [0.0]
The $k$ nearest neighbour learning rule is universally consistent in every metric space $X$ that is sigma-finite dimensional in the sense of Nagata.
We show that it is possible to give a direct proof along the same lines as the original theorem of Charles J. Stone (1977) about the universal consistency of the $k$-NN classifier in the finite dimensional Euclidean space.
arXiv Detail & Related papers (2020-02-28T14:59:47Z)