Representation in Dynamical Systems
- URL: http://arxiv.org/abs/2105.05714v1
- Date: Wed, 12 May 2021 15:03:03 GMT
- Title: Representation in Dynamical Systems
- Authors: Matthew Hutson
- Abstract summary: The brain is often called a computer and likened to a Turing machine, but it is better described as a dynamical system.
This paper argues that a dynamical system can still be said to use representations, although not in the way a digital computer does.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The brain is often called a computer and likened to a Turing machine, in part
because the mind can manipulate discrete symbols such as numbers. But the brain
is a dynamical system, more like a Watt governor than a Turing machine. Can a
dynamical system be said to operate using "representations"? This paper argues
that it can, although not in the way a digital computer does. Instead, it uses
phenomena best described using mathematical concepts such as chaotic attractors
to stand in for aspects of the world.
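As a concrete illustration of the kind of mathematical object the abstract appeals to, the sketch below (my own minimal numerical example, not taken from the paper) integrates the Lorenz system, whose trajectories settle onto a chaotic attractor; on the paper's view, a structure like this attractor, rather than a stored symbol, is the sort of thing that could stand in for an aspect of the world.
```python
# Minimal sketch (not from the paper): Euler integration of the Lorenz system,
# a standard example of a dynamical system with a chaotic attractor.
def lorenz_trajectory(steps=10000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 1.0          # arbitrary initial condition
    trajectory = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trajectory.append((x, y, z))
    return trajectory                 # points that trace out the attractor

if __name__ == "__main__":
    points = lorenz_trajectory()
    print(points[-1])                 # final state; nearby starting points diverge (chaos)
```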
Related papers
- Representation and Interpretation in Artificial and Natural Computing [0.0]
In putative natural computing, both processes are performed by the same agent.
The mode used by digital computers is the algorithmic one.
For a mode of computing to be more powerful than an algorithmic one, it ought to compute functions lacking an effective algorithm.
arXiv Detail & Related papers (2025-02-14T18:57:29Z)
- On a heuristic approach to the description of consciousness as a hypercomplex system state and the possibility of machine consciousness (German edition) [0.0]
This article shows that the inner states of consciousness experienced by every human being have a physical but imaginary hypercomplex basis.
Based on theoretical considerations and mathematical investigations into a so-called bicomplex algebra, it could be possible to generate and use hypercomplex system states on machines.
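For readers unfamiliar with the algebra the summary mentions, here is a hedged illustration (mine, not the paper's construction): bicomplex numbers attach complex coefficients to a second imaginary unit j with j² = −1, and multiplication follows directly from that rule.
```python
# Illustrative sketch only: bicomplex numbers z = w1 + w2*j, with w1, w2 complex
# and j*j = -1 (j distinct from i). Generic algebra, not the paper's specific
# construction of hypercomplex system states.
from dataclasses import dataclass

@dataclass
class Bicomplex:
    w1: complex  # coefficient of 1
    w2: complex  # coefficient of j

    def __add__(self, other):
        return Bicomplex(self.w1 + other.w1, self.w2 + other.w2)

    def __mul__(self, other):
        # (w1 + w2 j)(w3 + w4 j) = (w1*w3 - w2*w4) + (w1*w4 + w2*w3) j
        return Bicomplex(self.w1 * other.w1 - self.w2 * other.w2,
                         self.w1 * other.w2 + self.w2 * other.w1)

a = Bicomplex(1 + 1j, 2 - 1j)
j = Bicomplex(0, 1)
print(a * j)   # Bicomplex(w1=(-2+1j), w2=(1+1j))
```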
arXiv Detail & Related papers (2024-09-03T17:55:57Z)
- Machine learning and information theory concepts towards an AI Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- How (and Why) to Think that the Brain is Literally a Computer [0.0]
The relationship between brains and computers is often taken to be merely metaphorical.
arXiv Detail & Related papers (2022-08-24T15:38:10Z)
- Encoding Integers and Rationals on Neuromorphic Computers using Virtual Neuron [0.0]
We present the virtual neuron as an encoding mechanism for integers and rational numbers.
We show that it can perform an addition operation using 23 nJ of energy on average on a mixed-signal memristor-based neuromorphic processor.
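The abstract does not spell out the encoding, so the sketch below is only a toy stand-in (a rate-coded spike-count scheme assumed purely for illustration, not the paper's actual virtual-neuron design): a signed integer is carried by a pair of spike counts, and addition reduces to accumulating counts.
```python
# Toy illustration only -- NOT the paper's virtual-neuron encoding. Assumes a
# rate-coded scheme: a signed integer is carried by two "neurons", one counting
# positive spikes and one counting negative spikes.
def encode(n):
    """Encode a signed integer as (positive_spikes, negative_spikes)."""
    return (n, 0) if n >= 0 else (0, -n)

def add(a, b):
    """Addition is just accumulating spike counts on each channel."""
    return (a[0] + b[0], a[1] + b[1])

def decode(pair):
    return pair[0] - pair[1]

assert decode(add(encode(7), encode(-3))) == 4   # 7 + (-3) = 4
```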
arXiv Detail & Related papers (2022-08-15T23:18:26Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
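The abstract does not give the mechanism, but a common way to obtain a discrete representation from a continuous network output is to choose symbols from a small vocabulary; the numpy sketch below (an assumption for illustration, not the paper's architecture) shows a "speaker" turning continuous logits into a sequence of discrete tokens.
```python
import numpy as np

# Illustration only (not the paper's model): map a continuous observation to
# discrete symbols by taking a linear "speaker" network's logits and choosing
# the most likely token from a small vocabulary at each message position.
rng = np.random.default_rng(0)
VOCAB_SIZE, MESSAGE_LEN, OBS_DIM = 8, 3, 5

W = rng.normal(size=(MESSAGE_LEN, VOCAB_SIZE, OBS_DIM))  # untrained weights

def speak(observation):
    logits = W @ observation                  # shape (MESSAGE_LEN, VOCAB_SIZE)
    return logits.argmax(axis=-1).tolist()    # discrete token ids

print(speak(rng.normal(size=OBS_DIM)))        # e.g. [2, 7, 0]
```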
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- The Logic of Quantum Programs [77.34726150561087]
We present a logical calculus for reasoning about information flow in quantum programs.
In particular we introduce a dynamic logic that is capable of dealing with quantum measurements, unitary evolutions and entanglements in compound quantum systems.
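To make the objects of such a logic concrete, here is a small numpy sketch (mine, purely illustrative; the paper works at the level of a logical calculus, not simulation) of the two state changes the logic must track: a unitary evolution and a projective measurement.
```python
import numpy as np

# Illustrative only: the two kinds of state change a quantum dynamic logic
# must reason about -- unitary evolution and projective measurement.
ket0 = np.array([1.0, 0.0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                       # unitary evolution: |0> -> (|0>+|1>)/sqrt(2)
probs = np.abs(state) ** 2             # Born rule: [0.5, 0.5]

projector = np.outer(ket0, ket0.conj())  # suppose the measurement yields outcome 0
post = projector @ state
post /= np.linalg.norm(post)             # post-measurement state collapses to |0>

print(probs, post)
```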
arXiv Detail & Related papers (2021-09-14T16:08:37Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new imitation-based machine learning model built on linguistic descriptions of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
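As a rough, hedged sketch of the idea (the feature names and data here are invented for illustration; the paper derives its trees from linguistic descriptions of gameplay), an agent can be imitated by fitting a decision tree to logged game states labelled with the actions a human player took.
```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical logged game states: [enemy_distance, own_health, has_ammo]
# and the action the human player chose in each state.
states  = [[2.0, 90, 1], [1.5, 20, 1], [8.0, 80, 0], [0.5, 95, 1], [6.0, 15, 0]]
actions = ["attack",     "flee",       "explore",    "attack",     "flee"]

# Fit a small, readable tree that imitates the demonstrated behaviour.
policy = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print(policy.predict([[1.0, 85, 1]]))   # e.g. ['attack']
```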
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
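To give a flavour of what "explicit memory" means here, the numpy sketch below (a generic content-based read, assumed for illustration rather than taken from either model) addresses a memory matrix by similarity to a key and returns a blended read vector.
```python
import numpy as np

# Generic illustration of content-based memory addressing (not the exact
# read/write machinery of differentiable neural computers).
def content_read(memory, key, sharpness=5.0):
    # cosine similarity between the key and every memory row
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(sharpness * sims)
    weights /= weights.sum()           # soft attention over memory slots
    return weights @ memory            # blended read vector

memory = np.eye(4)                     # four one-hot "stored" vectors
print(content_read(memory, np.array([0.9, 0.1, 0.0, 0.0])))
```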
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- How the Brain might use Division [0.0]
How does a neural architecture that may organise itself mostly through statistics know what to do?
One possibility is to recast the problem as something more abstract.
In this paper, the author suggests that the maths question can be answered more easily if the problem is changed into one of symbol manipulation.
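A minimal way to see division as symbol manipulation rather than as a numeric primitive (my illustration, not the author's specific proposal) is to compute a quotient by repeatedly subtracting the divisor and counting the subtractions.
```python
# Illustration only: division recast as a routine of symbol manipulation
# (repeated subtraction and counting), with no built-in divide operation.
def divide(dividend, divisor):
    assert divisor > 0 and dividend >= 0
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor      # update the running remainder step by step
        quotient += 1
    return quotient, dividend    # quotient and remainder

assert divide(17, 5) == (3, 2)
```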
arXiv Detail & Related papers (2020-03-11T14:12:45Z)
- Reservoir memory machines [79.79659145328856]
We propose reservoir memory machines, which are able to solve some of the benchmark tests for Neural Turing Machines.
Our model can also be seen as an extension of echo state networks with an external memory, enabling arbitrarily long storage without interference.
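For readers unfamiliar with the base model being extended, the sketch below is a bare-bones echo state network in numpy (generic, without the external memory that distinguishes reservoir memory machines): a fixed random reservoir is driven by the input and only a linear readout is trained, here by least squares.
```python
import numpy as np

# Bare-bones echo state network (no external memory): fixed random reservoir,
# trained linear readout. Generic sketch, not the reservoir memory machine itself.
rng = np.random.default_rng(1)
N_RES, N_IN = 100, 1

W_in = rng.uniform(-0.5, 0.5, size=(N_RES, N_IN))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # keep spectral radius below 1

def run_reservoir(inputs):
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u)) # reservoir state update
        states.append(x.copy())
    return np.array(states)

inputs = np.sin(np.linspace(0, 8 * np.pi, 400))
targets = np.roll(inputs, -1)                        # predict the next input value
X = run_reservoir(inputs)
W_out, *_ = np.linalg.lstsq(X, targets, rcond=None)  # train the readout only
print(np.mean((X @ W_out - targets) ** 2))           # training error
```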
arXiv Detail & Related papers (2020-02-12T01:45:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.