A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of
computing and functional programming
- URL: http://arxiv.org/abs/2304.09276v1
- Date: Tue, 18 Apr 2023 20:30:16 GMT
- Title: A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of
computing and functional programming
- Authors: João Flach and Luis C. Lamb
- Abstract summary: We analyze the ability of neural networks to learn how to execute programs as a whole.
We introduce the integrated use of neural learning and lambda-calculus formalization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the last decades, models based on deep neural networks became the dominant paradigm in machine learning, and the use of artificial neural networks in symbolic learning has recently been seen as increasingly relevant. To study the capabilities of neural networks in the symbolic AI domain, researchers have explored the ability of deep neural networks to learn mathematical constructions, such as addition and multiplication; logical inference, such as theorem proving; and even the execution of computer programs. The latter is known to be too complex a task for neural networks; consequently, the results were not always successful and often required introducing biased elements into the learning process, in addition to restricting the scope of the programs to be executed. In this work, we analyze the ability of neural networks to learn how to execute programs as a whole. To do so, we propose a different approach: instead of using an imperative programming language with complex structures, we use the Lambda Calculus (λ-Calculus), a simple but Turing-complete mathematical formalism that serves as the basis for modern functional programming languages and lies at the heart of computability theory. We introduce the integrated use of neural learning and lambda-calculus formalization. Finally, since the execution of a program in λ-Calculus is based on reductions, we show that it is enough to learn how to perform these reductions in order to execute any program.
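Since the abstract's central claim is that executing a λ-Calculus program reduces to repeatedly applying reduction steps, a minimal Haskell sketch of that machinery may help fix ideas. This is an illustrative assumption, not the authors' code: it defines terms, a deliberately naive substitution (capture avoidance is elided by assuming all bound variables are distinct), one normal-order β-reduction step, and a normalization loop.

    -- Untyped lambda-calculus terms: variables, abstractions, applications.
    data Term
      = Var String          -- variable, e.g. x
      | Lam String Term     -- abstraction, e.g. \x. body
      | App Term Term       -- application, e.g. f a
      deriving (Show)

    -- Naive substitution of s for x; assumes bound variables are distinct
    -- (i.e. terms were alpha-renamed upfront), so capture cannot occur.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One leftmost-outermost (normal-order) beta-reduction step, if any applies.
    step :: Term -> Maybe Term
    step (App (Lam x b) a) = Just (subst x a b)   -- contract the beta-redex
    step (App f a) =
      case step f of
        Just f' -> Just (App f' a)
        Nothing -> App f <$> step a
    step (Lam x b) = Lam x <$> step b
    step (Var _)   = Nothing                      -- normal form reached

    -- Executing a program = iterating reduction steps to normal form.
    normalize :: Term -> Term
    normalize t = maybe t normalize (step t)

    -- Example: (\x. x) y reduces to y in one step.
    main :: IO ()
    main = print (normalize (App (Lam "x" (Var "x")) (Var "y")))

Under this framing, the learning task described in the abstract roughly amounts to approximating the step function with a sequence-to-sequence model (per the keywords, a Transformer) over serialized terms; iterating the learned step then executes arbitrary programs.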
Keywords: Machine Learning, Lambda Calculus, Neurosymbolic AI, Neural Networks,
Transformer Model, Sequence-to-Sequence Models, Computational Models
Related papers
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
We show that algorithm discovery in neural networks is sometimes more complex than previously assumed, and that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- A Survey of Deep Learning for Mathematical Reasoning
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z)
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- A Robust Learning Rule for Soft-Bounded Memristive Synapses Competitive with Supervised Learning in Standard Spiking Neural Networks
One view in theoretical neuroscience sees the brain as a function-computing device; being able to approximate functions is thus a fundamental premise to build upon for future brain research.
In this work we apply a novel supervised learning algorithm - based on controlling niobium-doped strontium titanate memristive synapses - to learning non-trivial multidimensional functions.
arXiv Detail & Related papers (2022-04-12T10:21:22Z)
- Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation?
The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning.
Recent work has developed the idea into a general-purpose algorithm able to train neural networks using only local computations.
We show the substantially greater flexibility of predictive coding networks against equivalent deep neural networks.
arXiv Detail & Related papers (2022-02-18T22:57:03Z)
- Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show a new capacity to learn modular programs, handle severe pattern shifts, and remember old programs as new ones are learned.
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
- Extending Answer Set Programs with Neural Networks
We propose NeurASP -- a simple extension of answer set programs by embracing neural networks.
We show that NeurASP can not only improve the perception accuracy of a pre-trained neural network, but also help to train a neural network better by giving restrictions through logic rules.
arXiv Detail & Related papers (2020-09-22T00:52:30Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing
Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence.
Deep learning has been successfully applied in areas such as object/pattern recognition, speech and natural language processing, self-driving vehicles, intelligent self-diagnostics tools, autonomous robots, knowledgeable personal assistants, and monitoring.
This paper reviews the case for a novel beyond-CMOS hardware technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks.
arXiv Detail & Related papers (2020-04-30T16:49:03Z)
- Self learning robot using real-time neural networks
This paper presents the research, development, and experimental analysis of a neural network implemented on a robot with an arm.
The neural network learns using the gradient descent and backpropagation algorithms.
Both the implementation and the training of the neural network are done locally on the robot, on a Raspberry Pi 3, so that its learning process is completely independent.
arXiv Detail & Related papers (2020-01-06T13:13:21Z)