The unstable formula theorem revisited via algorithms
- URL: http://arxiv.org/abs/2212.05050v2
- Date: Mon, 17 Apr 2023 18:09:38 GMT
- Title: The unstable formula theorem revisited via algorithms
- Authors: Maryanthe Malliaris, Shay Moran
- Abstract summary: We develop a complete algorithmic analogue of Shelah's celebrated Unstable Formula Theorem.
We describe Littlestone classes via approximations, by analogy to definability of types in model theory.
- Score: 17.736645608595758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is about the surprising interaction of a foundational result from
model theory about stability of theories, which seems to be inherently about
the infinite, with algorithmic stability in learning. Specifically, we develop
a complete algorithmic analogue of Shelah's celebrated Unstable Formula
Theorem, with algorithmic properties taking the place of the infinite. This
draws on several new theorems as well as much recent work. In particular we
introduce a new "Probably Eventually Correct" learning model, of independent
interest, and characterize Littlestone (stable) classes in terms of this model;
and we describe Littlestone classes via approximations, by analogy to
definability of types in model theory.
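To make the online-learning side of this correspondence concrete, here is a minimal sketch (not taken from the paper) of the standard mistake-tree recursion defining the Littlestone dimension of a finite binary class; the encoding of hypotheses as 0/1 tuples, the threshold example, and the name `littlestone_dim` are choices made only for this illustration.
```python
from functools import lru_cache

def littlestone_dim(hypotheses, n_points):
    """Littlestone dimension of a finite binary class, via the mistake-tree recursion.

    `hypotheses` is a collection of length-`n_points` 0/1 tuples, each tuple being
    one hypothesis evaluated on the fixed finite domain {0, ..., n_points - 1}.
    """
    @lru_cache(maxsize=None)
    def ldim(hs):
        if len(hs) <= 1:          # empty class or a single hypothesis: no depth-1 mistake tree
            return 0
        best = 0
        for x in range(n_points):
            h0 = frozenset(h for h in hs if h[x] == 0)
            h1 = frozenset(h for h in hs if h[x] == 1)
            if h0 and h1:         # x can label the root of a mistake tree
                best = max(best, 1 + min(ldim(h0), ldim(h1)))
        return best

    return ldim(frozenset(hypotheses))

# Threshold functions on {0, 1, 2, 3}: h_t(x) = 1 iff x >= t, for t = 0, ..., 4.
thresholds = {tuple(1 if x >= t else 0 for x in range(4)) for t in range(5)}
print(littlestone_dim(thresholds, 4))  # prints 2 (consistent with Ldim <= log2(5))
```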
Related papers
- Notes on solvable models of many-body quantum chaos [15.617052284991203]
We study a class of many-body chaotic models related to the Brownian Sachdev-Ye-Kitaev model.
An emergent symmetry maps the quantum dynamics into a classical process.
arXiv Detail & Related papers (2024-08-20T18:24:52Z)
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
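As a concrete illustration of the kind of update these methods take (a sketch under standard assumptions, not code from this paper), the stochastic proximal point step has a closed form for single-sample least-squares losses; the data, the step size `gamma`, and the function name below are made up for the example.
```python
import numpy as np

def sppm_least_squares(A, b, gamma=0.5, n_steps=200, seed=0):
    """Stochastic proximal point iteration for f_i(x) = 0.5 * (a_i @ x - b_i)**2.

    Each step solves argmin_x f_i(x) + ||x - x_k||^2 / (2 * gamma) exactly,
    which for a single linear measurement has the closed form below.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_steps):
        i = rng.integers(n)
        a, bi = A[i], b[i]
        residual = (a @ x - bi) / (1.0 + gamma * (a @ a))  # implicit (proximal) residual
        x = x - gamma * residual * a
    return x

# Tiny consistent system: the iterates should approach the least-squares solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
print(sppm_least_squares(A, b, gamma=1.0, n_steps=2000))  # close to x_true
```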
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm solves a distributed Bayesian filtering task for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
arXiv Detail & Related papers (2022-12-05T19:40:17Z)
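For readers unfamiliar with the filtering terminology, the single-agent (non-distributed) Bayesian filter for a finite-state hidden Markov model is a predict step through the transition matrix followed by a Bayes update on the new observation; the sketch below shows that baseline only, with toy transition and emission matrices that are not from the paper.
```python
import numpy as np

def hmm_filter(T, E, prior, observations):
    """Exact Bayesian filtering for a finite-state HMM.

    T[i, j] = P(next state = j | current state = i)   (transition matrix)
    E[i, o] = P(observation = o | state = i)           (emission matrix)
    Returns the sequence of posteriors P(state_t | obs_{1..t}).
    """
    belief = np.asarray(prior, dtype=float)
    posteriors = []
    for o in observations:
        belief = belief @ T              # prediction step
        belief = belief * E[:, o]        # Bayes update with the new observation
        belief = belief / belief.sum()   # normalize
        posteriors.append(belief.copy())
    return np.array(posteriors)

# Two hidden states, two observation symbols (toy numbers).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],
              [0.1, 0.9]])
print(hmm_filter(T, E, prior=[0.5, 0.5], observations=[0, 0, 1, 1, 1]))
```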
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
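As a minimal illustration of what a bivariate structural causal model is (the structural assignments below are invented for the example, not taken from the paper), one samples the cause from its noise and the effect from a function of the cause plus independent noise.
```python
import numpy as np

def sample_bivariate_scm(n, seed=0):
    """Sample from a toy bivariate structural causal model X -> Y.

    Structural assignments (made up for illustration):
        X := N_X,                 N_X ~ N(0, 1)
        Y := sin(2 X) + 0.3 N_Y,  N_Y ~ N(0, 1), independent of N_X
    """
    rng = np.random.default_rng(seed)
    n_x = rng.standard_normal(n)
    n_y = rng.standard_normal(n)
    x = n_x
    y = np.sin(2 * x) + 0.3 * n_y
    return x, y

x, y = sample_bivariate_scm(1000)
print(x[:3], y[:3])
```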
- Proof of the Contiguity Conjecture and Lognormal Limit for the Symmetric Perceptron [21.356438315715888]
We consider the symmetric binary perceptron model, a simple model of neural networks.
We prove several conjectures about this model.
Our proof technique relies on a dense counterpart of the small graph conditioning method.
arXiv Detail & Related papers (2021-02-25T18:39:08Z)
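For orientation, the symmetric binary perceptron asks for sign vectors satisfying |<g_a, sigma>| <= kappa * sqrt(n) for every Gaussian constraint row g_a; the brute-force counter below only illustrates that definition on a tiny instance (the instance size and margin kappa are arbitrary choices), not the proof techniques of the paper.
```python
import itertools
import numpy as np

def count_sbp_solutions(G, kappa):
    """Brute-force count of symmetric binary perceptron solutions.

    A sign vector sigma in {-1, +1}^n is a solution if every constraint row g of G
    satisfies |<g, sigma>| <= kappa * sqrt(n).
    """
    m, n = G.shape
    threshold = kappa * np.sqrt(n)
    count = 0
    for signs in itertools.product((-1, 1), repeat=n):
        sigma = np.array(signs)
        if np.all(np.abs(G @ sigma) <= threshold):
            count += 1
    return count

# Toy instance: n = 12 spins, m = 6 Gaussian constraints, margin kappa = 1.
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 12))
print(count_sbp_solutions(G, kappa=1.0))
```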
- Why Adversarial Interaction Creates Non-Homogeneous Patterns: A Pseudo-Reaction-Diffusion Model for Turing Instability [10.933825676518195]
We observe Turing-like patterns in a system of neurons with adversarial interaction.
We present a pseudo-reaction-diffusion model to explain the mechanism that may underlie these phenomena.
arXiv Detail & Related papers (2020-10-01T16:09:22Z)
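The paper's pseudo-reaction-diffusion model is not reproduced here; as a generic point of comparison, the sketch below integrates the classical Gray-Scott reaction-diffusion system on a 1-D periodic grid, a standard setting in which Turing-type spatial structure can emerge (all parameters are conventional demo values, not the paper's).
```python
import numpy as np

def gray_scott_1d(n=200, steps=20000, Du=0.16, Dv=0.08, F=0.040, k=0.060, dt=1.0):
    """Explicit-Euler integration of the classical Gray-Scott system on a periodic 1-D grid.

    u_t = Du * u_xx - u * v**2 + F * (1 - u)
    v_t = Dv * v_xx + u * v**2 - (F + k) * v
    """
    def laplacian(z):  # periodic second difference with dx = 1
        return np.roll(z, 1) + np.roll(z, -1) - 2 * z

    rng = np.random.default_rng(0)
    u, v = np.ones(n), np.zeros(n)
    mid = slice(n // 2 - 5, n // 2 + 5)     # seed a localized perturbation
    u[mid], v[mid] = 0.5, 0.25
    v += 0.01 * rng.random(n)
    for _ in range(steps):
        uvv = u * v * v
        u = u + dt * (Du * laplacian(u) - uvv + F * (1 - u))
        v = v + dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u, v

u, v = gray_scott_1d()
print(v.round(3))  # a spatially non-uniform v profile indicates emergent structure
```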
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Learning CHARME models with neural networks [1.5362025549031046]
We consider a model called CHARME (Conditional Heteroscedastic Autoregressive Mixture of Experts).
As an application, we develop a learning theory for the NN-based autoregressive functions of the model.
arXiv Detail & Related papers (2020-02-08T21:51:02Z)
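As a rough illustration of the model class (following the usual mixture-of-experts autoregression form of a CHARME process rather than anything specific to this paper), the sketch below simulates a first-order process with regime-dependent mean and volatility functions; the expert functions, weights, and parameters are invented for the example.
```python
import numpy as np

def simulate_charme(n, mean_fns, vol_fns, weights, seed=0):
    """Simulate a first-order CHARME-style process (mixture-of-experts autoregression).

    At each time t a latent expert k is drawn with probability weights[k], and
        Y_t = mean_fns[k](Y_{t-1}) + vol_fns[k](Y_{t-1}) * eps_t,   eps_t ~ N(0, 1).
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        k = rng.choice(len(weights), p=weights)
        eps = rng.standard_normal()
        y[t] = mean_fns[k](y[t - 1]) + vol_fns[k](y[t - 1]) * eps
    return y

# Two made-up experts: a contracting linear regime and a tanh regime with higher volatility.
mean_fns = [lambda y: 0.5 * y, lambda y: np.tanh(y)]
vol_fns = [lambda y: 0.5, lambda y: 1.0 + 0.2 * abs(y)]
series = simulate_charme(500, mean_fns, vol_fns, weights=[0.7, 0.3])
print(series[:10])
```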
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.