The Compositional Structure of Bayesian Inference
- URL: http://arxiv.org/abs/2305.06112v2
- Date: Thu, 20 Jul 2023 09:13:06 GMT
- Title: The Compositional Structure of Bayesian Inference
- Authors: Dylan Braithwaite, Jules Hedges, Toby St Clere Smithe
- Abstract summary: Bayes' rule tells us how to invert a causal process in order to update our beliefs in light of new evidence.
We study the structure of this compositional rule, noting that it relates to the lens pattern in functional programming.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayes' rule tells us how to invert a causal process in order to update our
beliefs in light of new evidence. If the process is believed to have a complex
compositional structure, we may observe that the inversion of the whole can be
computed piecewise in terms of the component processes. We study the structure
of this compositional rule, noting that it relates to the lens pattern in
functional programming. Working in a suitably general axiomatic presentation of
a category of Markov kernels, we see how we can think of Bayesian inversion as
a particular instance of a state-dependent morphism in a fibred category. We
discuss the compositional nature of this, formulated as a functor on the
underlying category, and explore how this can be used for a more type-driven
approach to statistical inference.
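To make the piecewise inversion concrete, here is a minimal sketch for finite, discrete Markov kernels (illustrative only; the kernel names, variables, and numbers are invented and this is not code from the paper). It checks numerically that inverting a composite kernel f ; g against a prior pi agrees with the lens-like piecewise recipe referred to above: invert g against the pushforward prior f_* pi, then invert f against pi.

```python
# Minimal illustrative sketch (assumed names, not code from the paper):
# finite Markov kernels as dicts-of-dicts, Bayesian inversion, and the
# piecewise ("lens-like") composition rule
#   (f ; g)^dagger_pi = g^dagger_{f_* pi} ; f^dagger_pi.

def pushforward(prior, kernel):
    """Marginal on the codomain: (f_* pi)(y) = sum_x pi(x) * f(x)(y)."""
    out = {}
    for x, px in prior.items():
        for y, p in kernel[x].items():
            out[y] = out.get(y, 0.0) + px * p
    return out

def compose(f, g):
    """Sequential composite (f ; g)(x)(z) = sum_y f(x)(y) * g(y)(z)."""
    h = {}
    for x, fx in f.items():
        h[x] = {}
        for y, p in fx.items():
            for z, q in g[y].items():
                h[x][z] = h[x].get(z, 0.0) + p * q
    return h

def invert(kernel, prior):
    """Bayesian inversion: kernel^dagger_prior(y)(x) proportional to prior(x) * kernel(x)(y)."""
    inv = {}
    for x, px in prior.items():
        for y, p in kernel[x].items():
            row = inv.setdefault(y, {})
            row[x] = row.get(x, 0.0) + px * p
    return {y: {x: p / sum(row.values()) for x, p in row.items()}
            for y, row in inv.items()}

# Toy generative chain X -> Y -> Z (all names and numbers are made up).
pi = {"rain": 0.3, "dry": 0.7}                                    # prior on X
f = {"rain": {"wet": 0.9, "not_wet": 0.1},                        # kernel X -> Y
     "dry":  {"wet": 0.2, "not_wet": 0.8}}
g = {"wet":     {"slip": 0.5,  "no_slip": 0.5},                   # kernel Y -> Z
     "not_wet": {"slip": 0.05, "no_slip": 0.95}}

direct    = invert(compose(f, g), pi)                              # invert the whole
piecewise = compose(invert(g, pushforward(pi, f)), invert(f, pi))  # invert piecewise

for z in direct:
    for x in direct[z]:
        assert abs(direct[z][x] - piecewise[z][x]) < 1e-9
print("posterior on X given z = slip:", direct["slip"])
```

Running the script prints the posterior over X given the observation "slip"; the assertion confirms that the whole-composite and piecewise posteriors coincide up to floating-point error.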
Related papers
- Compositional Structures in Neural Embedding and Interaction Decompositions [101.40245125955306]
We describe a basic correspondence between linear algebraic structures within vector embeddings in artificial neural networks.
We introduce a characterization of compositional structures in terms of "interaction decompositions".
We establish necessary and sufficient conditions for the presence of such structures within the representations of a model.
arXiv Detail & Related papers (2024-07-12T02:39:50Z) - What makes Models Compositional? A Theoretical View: With Supplement [60.284698521569936]
We propose a general neuro-symbolic definition of compositional functions and their compositional complexity.
We show how various existing general and special purpose sequence processing models fit this definition and use it to analyze their compositional complexity.
arXiv Detail & Related papers (2024-05-02T20:10:27Z) - Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - HiPerformer: Hierarchically Permutation-Equivariant Transformer for Time Series Forecasting [56.95572957863576]
We propose a hierarchically permutation-equivariant model that considers both the relationship among components in the same group and the relationship among groups.
The experiments conducted on real-world data demonstrate that the proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2023-05-14T05:11:52Z) - Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs)
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z) - Mathematical Foundations for a Compositional Account of the Bayesian Brain [0.0]
We use the tools of contemporary applied category theory to supply functorial semantics for approximate inference.
We define fibrations of statistical games and classify various problems of statistical inference as corresponding sections.
We construct functors which explain the compositional structure of predictive coding neural circuits under the free energy principle.
arXiv Detail & Related papers (2022-12-23T18:58:17Z) - Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined canonicalization functions.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z) - Contextuality in the Bundle Approach, n-Contextuality, and the Role of Holonomy [0.0]
Contextuality can be understood as the impossibility of constructing a globally consistent description of a model even if there is local agreement.
We can describe contextuality with the bundle approach, where the scenario is represented as a simplicial complex.
We introduce a hierarchy called n-contextuality to explore how the contextual behavior of a model depends on the topology of the scenario.
arXiv Detail & Related papers (2021-05-28T22:54:05Z) - Categorical Stochastic Processes and Likelihood [1.14219428942199]
We take a Category Theoretic perspective on the relationship between probabilistic modeling and function approximation.
We show how these extensions relate to the category Stoch and other Markov Categories.
We conclude with a demonstration of how the Maximum Likelihood Estimation procedure defines an identity-on-objects functor from the category of statistical models to the category of learners.
arXiv Detail & Related papers (2020-05-10T18:00:56Z)