How the Brain might use Division
- URL: http://arxiv.org/abs/2003.05320v2
- Date: Tue, 24 Mar 2020 15:08:19 GMT
- Title: How the Brain might use Division
- Authors: Kieran Greer
- Abstract summary: How does a neural architecture that may organise itself mostly through statistics know what to do?
One possibility is to translate the problem into something more abstract.
In this paper, the author suggests that the maths question can be answered more easily if the problem is changed into one of symbol manipulation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most fundamental questions in Biology or Artificial Intelligence
is how the human brain performs mathematical functions. How does a neural
architecture that may organise itself mostly through statistics know what to
do? One possibility is to translate the problem into something more abstract.
This becomes clear when thinking about how the brain handles large numbers,
for example numbers raised to a power, where simply summing to an answer is
not feasible. In this paper, the author suggests that the maths question can
be answered more easily if the problem is changed into one of symbol
manipulation rather than just number counting. If symbols can be compared and
manipulated, perhaps without completely understanding what they are, then the
mathematical operations become relative and some of them might even be
rote-learned. The proposed system may also be considered an alternative to
the traditional computer binary system. The actual maths still breaks down
into binary operations, but a more symbolic level above that can manipulate
the numbers and reduce the problem size, thus making the binary operations
simpler. An interesting result of this view is the possibility of a new
fractal equation arising from division, which can be used as a measure of
good fit and would help the brain decide how to solve something through
self-replacement and comparison with this good fit.
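As a rough illustration of the symbolic idea, the sketch below divides two large numbers by manipulating their symbols, subtracting exponents, so that only small-integer arithmetic would ever have to reach the low-level binary operations. This is a minimal, hypothetical example, not the algorithm or fractal equation proposed in the paper; the Power class and divide function are names introduced purely for the illustration.

```python
# Minimal sketch (illustrative only): keep large numbers in symbolic form,
# base ** exponent, and divide by manipulating the symbols rather than the
# full numbers.
from dataclasses import dataclass


@dataclass(frozen=True)
class Power:
    """A number held symbolically as base ** exponent."""
    base: int
    exponent: int

    def expand(self) -> int:
        # Expanding the symbol is exactly what the symbolic layer avoids.
        return self.base ** self.exponent


def divide(a: Power, b: Power) -> Power:
    """Divide a by b by symbol manipulation when the bases match."""
    if a.base != b.base:
        raise ValueError("different bases: fall back to numeric division")
    # x**m / x**n == x**(m - n): a relative, almost rote-learnable rule.
    return Power(a.base, a.exponent - b.exponent)


# 10**12 / 10**9 is answered without ever touching either large number.
print(divide(Power(10, 12), Power(10, 9)))              # Power(base=10, exponent=3)
print(Power(10, 12).expand() // Power(10, 9).expand())  # 1000, for comparison
```

The point of the sketch is only the division of labour suggested by the abstract: comparisons and rewrites happen at the symbolic level, and only the small residual operations are left to the binary level.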
Related papers
- Machine learning and information theory concepts towards an AI Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at System 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- Symbolic Equation Solving via Reinforcement Learning [9.361474110798143]
We propose a novel deep-learning interface involving a reinforcement-learning agent that operates a symbolic stack calculator.
By construction, this system is capable of exact transformations and immune to hallucination.
arXiv Detail & Related papers (2024-01-24T13:42:24Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z)
- Disentanglement with Biological Constraints: A Theory of Functional Cell Types [20.929056085868613]
This work provides a mathematical understanding of why single neurons in the brain often represent single human-interpretable factors.
It also takes a step towards understanding how task structure shapes the structure of brain representations.
arXiv Detail & Related papers (2022-09-30T14:27:28Z)
- End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking [52.05847268235338]
We show how machine learning systems can perform logical extrapolation without overthinking problems.
We propose a recall architecture that keeps an explicit copy of the problem instance in memory so that it cannot be forgotten.
We also employ a progressive training routine that prevents the model from learning behaviors that are specific to number and instead pushes it to learn behaviors that can be repeated indefinitely.
arXiv Detail & Related papers (2022-02-11T18:43:28Z)
- Neuromorphic Computing is Turing-Complete [0.0]
Neuromorphic computing is a non-von Neumann computing paradigm that performs computation by emulating the human brain.
Neuromorphic systems are extremely energy-efficient and known to consume thousands of times less power than CPUs and GPUs.
We devise neuromorphic circuits for computing all the mu-recursive functions and all the mu-recursive operators.
arXiv Detail & Related papers (2021-04-28T19:25:01Z)
- Recognizing and Verifying Mathematical Equations using Multiplicative Differential Neural Units [86.9207811656179]
We show that memory-augmented neural networks (NNs) can achieve higher-order, memory-augmented extrapolation, stable performance, and faster convergence.
Our models achieve a 1.53% average improvement over current state-of-the-art methods in equation verification, and a 2.22% Top-1 and 2.96% Top-5 average accuracy for equation completion.
arXiv Detail & Related papers (2021-04-07T03:50:11Z)
- Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning [95.18337034090648]
We propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated by a grammar model, the And-Or Graph (AOG).
These visual arithmetic problems are in the form of geometric figures.
We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task.
arXiv Detail & Related papers (2020-04-25T17:14:58Z)