The Generative Programs Framework
- URL: http://arxiv.org/abs/2307.11282v1
- Date: Fri, 21 Jul 2023 00:57:05 GMT
- Title: The Generative Programs Framework
- Authors: Mordecai Waegell, Kelvin J. McQueen, and Emily C. Adlam
- Abstract summary: We argue that any quantitative physical theory can be represented in the form of a generative program.
We suggest that these graphs can be interpreted as encoding relations of `ontological priority,' and that ontological priority is a suitable generalisation of causation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently there has been significant interest in using causal modelling techniques to understand the structure of physical theories. However, the notion of `causation' is limiting: insisting that a physical theory must involve causal structure already places significant constraints on the form that theory may take. Thus in this paper, we aim to set out a more general structural framework. We argue that any quantitative physical theory can be represented in the form of a generative program, i.e. a list of instructions showing how to generate the empirical data; the information-processing structure associated with this program can be represented by a directed acyclic graph (DAG). We suggest that these graphs can be interpreted as encoding relations of `ontological priority,' and that ontological priority is a suitable generalisation of causation which applies even to theories that don't have a natural causal structure. We discuss some applications of our framework to philosophical questions about realism, operationalism, free will, locality and fine-tuning.
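The abstract's central construction, a generative program whose information flow forms a DAG, can be sketched in a few lines of Python. This is a hypothetical toy illustration, not code from the paper: the node names, the doubling-map "law", and the use of `graphlib.TopologicalSorter` are all illustrative assumptions.

```python
# Toy sketch (illustrative assumption, not from the paper): a physical
# theory written as a generative program -- an ordered list of
# instructions producing the empirical data -- whose dependency
# structure is a directed acyclic graph (DAG).
from graphlib import TopologicalSorter

# Each node names a quantity; the set holds the quantities it depends
# on (its "ontologically prior" inputs).
dag = {
    "laws": set(),                         # fundamental dynamics
    "initial_state": set(),                # boundary conditions
    "trajectory": {"laws", "initial_state"},
    "empirical_data": {"trajectory"},
}

# The program's instructions: how each quantity is generated from its
# already-computed parents.
def generate(node, values):
    if node == "laws":
        return lambda x: 2 * x             # toy law: a doubling map
    if node == "initial_state":
        return 1.0
    if node == "trajectory":
        law, x0 = values["laws"], values["initial_state"]
        traj = [x0]
        for _ in range(3):
            traj.append(law(traj[-1]))
        return traj
    if node == "empirical_data":
        return values["trajectory"][-1]    # only the final value is observed

# Running the program = evaluating nodes in topological order, so every
# quantity is generated after the quantities it depends on.
values = {}
for node in TopologicalSorter(dag).static_order():
    values[node] = generate(node, values)

print(values["empirical_data"])  # 8.0
```

Reading the DAG's edges as relations of ontological priority rather than causation is exactly the interpretive move the abstract proposes: the graph records which quantities must be fixed before others can be generated, without claiming those dependencies are causal.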
Related papers
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
  Tokenization is a critical step in the NLP pipeline.
  Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood.
  The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
  arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Foundations and Frontiers of Graph Learning Theory [81.39078977407719]
  Recent advancements in graph learning have revolutionized the way we understand and analyze data with complex structures.
  Graph Neural Networks (GNNs), i.e. neural network architectures designed for learning graph representations, have become a popular paradigm.
  This article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models.
  arXiv Detail & Related papers (2024-07-03T14:07:41Z)
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
  Learning concepts from natural high-dimensional data holds potential for building human-aligned and interpretable machine learning models.
  We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
  We substantiate our theoretical claims with synthetic data experiments.
  arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- Pregeometry, Formal Language and Constructivist Foundations of Physics [0.0]
  We discuss the metaphysics of pregeometric structures, upon which new and existing notions of quantum geometry may find a foundation.
  We draw attention to evidence suggesting that the framework of formal language, in particular homotopy type theory, provides the conceptual building blocks for a theory of pregeometry.
  arXiv Detail & Related papers (2023-11-07T13:19:29Z)
- Query Structure Modeling for Inductive Logical Reasoning Over Knowledge Graphs [67.043747188954]
  We propose a structure-modeled textual encoding framework for inductive logical reasoning over KGs.
  It encodes linearized query structures and entities using pre-trained language models to find answers.
  We conduct experiments on two inductive logical reasoning datasets and three transductive datasets.
  arXiv Detail & Related papers (2023-05-23T01:25:29Z)
- Causal models in string diagrams [0.0]
  The framework of causal models provides a principled approach to causal reasoning, applied today across many scientific domains.
  We present this framework in the language of string diagrams, interpreted formally using category theory.
  We argue and demonstrate that causal reasoning according to the causal model framework is most naturally and intuitively done as diagrammatic reasoning.
  arXiv Detail & Related papers (2023-04-15T21:54:48Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
  We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
  Based on the multi-hop chain of reasoning, the explanation form includes three main components.
  We evaluate the current best models' performance on this new explanation form.
  arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Unscrambling the omelette of causation and inference: The framework of causal-inferential theories [0.0]
  We introduce the notion of a causal-inferential theory using a process-theoretic formalism.
  Recasting the notions of operational and realist theories in this mold clarifies what a realist account of an experiment offers beyond an operational account.
  We argue that if one can identify axioms for a realist causal-inferential theory such that the notions of causation and inference can differ from their conventional (classical) interpretations, then one has the means of defining an intrinsically quantum notion of realism.
  arXiv Detail & Related papers (2020-09-07T17:58:22Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
  This paper focuses on an alternative way of defining Neural Networks that differs from the majority of existing approaches.
  The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
  The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
  arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- A structure theorem for generalized-noncontextual ontological models [0.0]
  We use a process-theoretic framework to prove that every generalized-noncontextual ontological model of a tomographically local operational theory has a surprisingly rigid and simple mathematical structure.
  We extend known results concerning the equivalence of different notions of classicality from prepare-measure scenarios to arbitrary compositional scenarios.
  arXiv Detail & Related papers (2020-05-14T17:28:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.