The Autodidactic Universe
- URL: http://arxiv.org/abs/2104.03902v2
- Date: Thu, 2 Sep 2021 13:20:52 GMT
- Title: The Autodidactic Universe
- Authors: Stephon Alexander, William J. Cunningham, Jaron Lanier, Lee Smolin,
Stefan Stanojevic, Michael W. Toomey, Dave Wecker
- Abstract summary: We present an approach to cosmology in which the Universe learns its own physical laws.
We discover maps that put each of these matrix models in correspondence with both a gauge/gravity theory and a mathematical model of a learning machine.
We discuss in detail what it means to say that learning takes place in autodidactic systems, where there is no supervision.
- Score: 0.8795040582681388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an approach to cosmology in which the Universe learns its own
physical laws. It does so by exploring a landscape of possible laws, which we
express as a certain class of matrix models. We discover maps that put each of
these matrix models in correspondence with both a gauge/gravity theory and a
mathematical model of a learning machine, such as a deep recurrent, cyclic
neural network. This establishes a correspondence between each solution of the
physical theory and a run of a neural network. This correspondence is not an
equivalence, partly because gauge theories emerge from $N \rightarrow \infty $
limits of the matrix models, whereas the same limits of the neural networks
used here are not well-defined. We discuss in detail what it means to say that
learning takes place in autodidactic systems, where there is no supervision. We
propose that if the neural network model can be said to learn without
supervision, the same can be said for the corresponding physical theory. We
consider other protocols for autodidactic physical systems, such as
optimization of graph variety, subset-replication using self-attention and
look-ahead, geometrogenesis guided by reinforcement learning, structural
learning using renormalization group techniques, and extensions. These
protocols together provide a number of directions in which to explore the
origin of physical laws based on putting machine learning architectures in
correspondence with physical theories.
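The correspondence the abstract describes can be made concrete with a toy example. The sketch below is not the paper's construction (the paper works with cubic matrix models whose gauge theories emerge only in the $N \rightarrow \infty$ limit); it only illustrates, under loose assumptions, how iterating a fixed matrix map can be read both as evolving a state under a candidate "law" and as a run of a recurrent neural network. All names here (`M`, `phi`, `rnn_step`) are hypothetical.

```python
import numpy as np

# Toy illustration (not the paper's construction): the same iteration is
# read two ways: as a discrete-time dynamical system whose "law" is the
# fixed matrix M, and as a run of a simple recurrent network whose weight
# matrix is that same M.
rng = np.random.default_rng(0)
N = 8                                          # finite size; the paper's gauge theories need N -> infinity
M = rng.standard_normal((N, N)) / np.sqrt(N)   # candidate "law": one point in the landscape
phi = rng.standard_normal(N)                   # state evolved by that law

def rnn_step(weights, state):
    """One recurrent update; iterating it is one 'run' of the network."""
    return np.tanh(weights @ state)

trajectory = [phi]
for _ in range(20):
    trajectory.append(rnn_step(M, trajectory[-1]))

print(np.round(trajectory[-1], 3))
```

As the abstract stresses, this correspondence is not an equivalence: the $N \rightarrow \infty$ limit that yields a gauge theory has no well-defined counterpart for networks of this kind.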
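Among the autodidactic protocols listed in the abstract, optimization of graph variety lends itself to a small worked example. The paper's actual definition of variety is more refined; the sketch below uses a crude stand-in, assuming that variety measures how mutually distinguishable a graph's vertices are by their local neighborhood fingerprints. The functions `neighbor_profile` and `variety` are hypothetical names for this toy measure.

```python
import itertools
import numpy as np

def neighbor_profile(adj, v):
    """Sorted degrees of v's neighbors: a crude local fingerprint of vertex v."""
    neighbors = np.flatnonzero(adj[v])
    return tuple(sorted(int(adj[u].sum()) for u in neighbors))

def variety(adj):
    """Toy variety score: fraction of vertex pairs with distinct fingerprints."""
    n = adj.shape[0]
    profiles = [neighbor_profile(adj, v) for v in range(n)]
    pairs = list(itertools.combinations(range(n), 2))
    return sum(profiles[i] != profiles[j] for i, j in pairs) / len(pairs)

# A 6-cycle has zero variety: every vertex sees the same neighborhood.
ring = np.roll(np.eye(6, dtype=int), 1, axis=1)
ring = ring | ring.T
# Adding one chord breaks the symmetry and raises the score.
irregular = ring.copy()
irregular[0, 3] = irregular[3, 0] = 1

print(variety(ring), variety(irregular))   # 0.0 vs. roughly 0.53
```

A protocol that proposes local edge changes and keeps those that increase such a score is one simple reading of "optimization of graph variety"; the paper's protocols are richer than this.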
Related papers
- Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
Via fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Duality Principle and Biologically Plausible Learning: Connecting the Representer Theorem and Hebbian Learning [15.094554860151103]
We argue that the Representer theorem offers the perfect lens to study biologically plausible learning algorithms.
Our work sheds light on the pivotal role of the Representer theorem in advancing our comprehension of neural computation.
arXiv Detail & Related papers (2023-08-02T20:21:18Z)
- A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations [0.0]
We study the universality hypothesis by examining how small neural networks learn to implement group composition.
We present a novel algorithm by which neural networks may implement composition for any finite group via mathematical representation theory.
arXiv Detail & Related papers (2023-02-06T18:59:20Z)
- Contextuality and inductive bias in quantum machine learning [0.0]
Generalisation in machine learning often relies on the ability to encode structures present in data into an inductive bias of the model class.
We look at quantum contextuality -- a form of nonclassicality with links to computational advantage.
We show how to construct quantum learning models with the associated inductive bias, and show through our toy problem that they outperform their corresponding classical surrogate models.
arXiv Detail & Related papers (2023-02-02T19:07:26Z)
- Binary Multi Channel Morphological Neural Network [5.551756485554158]
We introduce a Binary Morphological Neural Network (BiMoNN) built upon convolutional neural networks.
We demonstrate an equivalence between BiMoNNs and morphological operators that we can use to binarize entire networks.
BiMoNNs can learn classical morphological operators and show promising results on a medical imaging application.
arXiv Detail & Related papers (2022-04-19T09:26:11Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a learning method in which a feedback controller drives a deep neural network to match a desired output target, and the control signal is used for credit assignment (a minimal single-layer sketch of this idea appears after this list).
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Quantum mechanics is *-algebras and tensor networks [1.479413555822768]
We provide a systematic approach to quantum mechanics from an information-theoretic perspective.
Our formulation needs only a single kind of object, so-called positive *-tensors.
We show how various types of models, like real-time evolutions or thermal systems, can be translated into *-tensor networks.
arXiv Detail & Related papers (2020-03-17T22:46:00Z)
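As referenced in the Deep Feedback Control entry above, the feedback-control idea can be sketched in a few lines. This is a hypothetical single-layer reduction, assuming a proportional controller and a fixed input; the actual DFC method runs controlled dynamics through a deep network and approximates Gauss-Newton optimization. The names `W`, `gain`, and `lr` are illustrative only.

```python
import numpy as np

# Single-layer sketch of feedback-driven credit assignment in the spirit of
# DFC: a proportional controller pushes the output toward the target, and the
# control signal u (not a backpropagated gradient) drives a purely local update.
rng = np.random.default_rng(1)
W = rng.standard_normal((2, 3)) * 0.1   # weights of one linear layer
x = rng.standard_normal(3)              # fixed input pattern
target = np.array([1.0, -1.0])          # desired output

lr, gain = 0.1, 0.5
for _ in range(100):
    y = W @ x                    # feedforward output
    u = gain * (target - y)      # controller: proportional feedback signal
    W += lr * np.outer(u, x)     # local rule: control signal times presynaptic activity

print(np.round(W @ x, 3))        # output now close to the target
```

The update is local in the sense that each weight change depends only on the postsynaptic control signal and the presynaptic activity, which is the property the entry's summary highlights.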
This list is automatically generated from the titles and abstracts of the papers on this site.