A Survey on Brain-Inspired Deep Learning via Predictive Coding
- URL: http://arxiv.org/abs/2308.07870v2
- Date: Thu, 23 Jan 2025 16:42:45 GMT
- Title: A Survey on Brain-Inspired Deep Learning via Predictive Coding
- Authors: Tommaso Salvatori, Ankur Mali, Christopher L. Buckley, Thomas Lukasiewicz, Rajesh P. N. Rao, Karl Friston, Alexander Ororbia
- Abstract summary: Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas, can be used in cognitive control and robotics, and is grounded in variational inference.
- Score: 85.93245078403875
- License:
- Abstract: Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century. The majority of results in AI thus far have been achieved using deep neural networks trained with the error backpropagation learning algorithm. However, the ubiquitous adoption of this approach has highlighted some important limitations such as substantial computational cost, difficulty in quantifying uncertainty, lack of robustness, unreliability, and biological implausibility. It is possible that addressing these limitations may require schemes that are inspired and guided by neuroscience theories. One such theory, called predictive coding (PC), has shown promising performance in machine intelligence tasks, exhibiting exciting properties that make it potentially valuable for the machine learning community: PC can model information processing in different brain areas, can be used in cognitive control and robotics, and has a solid mathematical grounding in variational inference, offering a powerful inversion scheme for a specific class of continuous-state generative models. With the hope of foregrounding research in this direction, we survey the literature that has contributed to this perspective, highlighting the many ways that PC might play a role in the future of machine learning and computational intelligence at large.
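To make the abstract's point about PC's grounding in prediction-error minimization and variational inference concrete, below is a minimal sketch of PC inference and learning in a two-layer generative model. The layer sizes, tanh activation, step sizes, and iteration counts are illustrative assumptions, not details taken from the survey.

```python
# Minimal predictive coding sketch, assuming a two-layer generative model
# x0 ~ W0 f(x1), x1 ~ W1 f(x2): inference relaxes the latents by gradient
# descent on the prediction-error energy, then a local step updates the weights.
import numpy as np

rng = np.random.default_rng(0)

def f(x):          # activation used by the generative model
    return np.tanh(x)

def f_prime(x):    # its derivative, needed for the state updates
    return 1.0 - np.tanh(x) ** 2

sizes = [10, 8, 4]  # [observed data, latent layer 1, latent layer 2] (illustrative)
W = [rng.normal(scale=0.1, size=(sizes[l], sizes[l + 1])) for l in range(2)]

def energy(x, W):
    # F = sum_l ||x_l - W_l f(x_{l+1})||^2 / 2  (squared prediction errors)
    return sum(0.5 * np.sum((x[l] - W[l] @ f(x[l + 1])) ** 2) for l in range(2))

def infer(x0, W, n_steps=100, eta=0.1):
    """Clamp the data layer and relax the latents to minimize the PC energy."""
    x = [x0, np.zeros(sizes[1]), np.zeros(sizes[2])]
    for _ in range(n_steps):
        e0 = x[0] - W[0] @ f(x[1])      # bottom-layer prediction error
        e1 = x[1] - W[1] @ f(x[2])      # top-layer prediction error
        x[1] -= eta * (e1 - f_prime(x[1]) * (W[0].T @ e0))
        x[2] -= eta * (-f_prime(x[2]) * (W[1].T @ e1))
    return x

def learn(x, W, alpha=0.01):
    """Local weight update: each weight sees only its layer's error and presynaptic activity."""
    for l in range(2):
        e = x[l] - W[l] @ f(x[l + 1])
        W[l] += alpha * np.outer(e, f(x[l + 1]))

# One PC sweep on a random data vector: the energy drops after inference + learning.
data = rng.normal(size=sizes[0])
x = infer(data, W)
before = energy(x, W)
learn(x, W)
print(f"energy after inference: {before:.3f}, after weight update: {energy(x, W):.3f}")
```

Both the state updates and the weight updates depend only on locally available prediction errors and activities, which is the property that makes PC attractive as a biologically plausible alternative to error backpropagation.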
Related papers
- Probabilistic Artificial Intelligence [42.59649764999974]
A key aspect of intelligence is not only to make predictions, but also to reason about the uncertainty in these predictions and to take this uncertainty into account when making decisions.
We discuss the differentiation between "epistemic" uncertainty due to lack of data and "aleatoric" uncertainty, which is irreducible and stems, e.g., from noisy observations and outcomes (a minimal ensemble-based sketch of this decomposition appears after the related-papers list below).
arXiv Detail & Related papers (2025-02-07T14:29:07Z) - A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z) - Neuronal Auditory Machine Intelligence (NEURO-AMI) In Perspective [0.0]
We present an overview of a new competing bio-inspired continual learning neural tool, Neuronal Auditory Machine Intelligence (Neuro-AMI).
arXiv Detail & Related papers (2023-10-14T13:17:58Z) - Reliable AI: Does the Next Generation Require Quantum Computing? [71.84486326350338]
We show that digital hardware is inherently constrained when solving certain problems in optimization, deep learning, and differential equations.
In contrast, analog computing models, such as the Blum-Shub-Smale machine, exhibit the potential to surmount these limitations.
arXiv Detail & Related papers (2023-07-03T19:10:45Z) - Neurocompositional computing: From the Central Paradox of Cognition to a
new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z) - From Machine Learning to Robotics: Challenges and Opportunities for
Embodied Intelligence [113.06484656032978]
The article argues that embodied intelligence is a key driver for the advancement of machine learning technology.
We highlight challenges and opportunities specific to embodied intelligence.
We propose research directions which may significantly advance the state-of-the-art in robot learning.
arXiv Detail & Related papers (2021-10-28T16:04:01Z) - Sparse Training Theory for Scalable and Efficient Agents [5.71531053864579]
Deep Neural Networks have proven capable of handling all major learning paradigms, i.e., supervised, unsupervised, and reinforcement learning.
Traditional deep learning approaches make use of cloud computing facilities and do not scale well to autonomous agents with low computational resources.
This paper discusses sparse training state-of-the-art, its challenges and limitations while introducing a couple of new theoretical research directions.
arXiv Detail & Related papers (2021-03-02T10:48:29Z) - Next Wave Artificial Intelligence: Robust, Explainable, Adaptable,
Ethical, and Accountable [5.4138734778206]
Deep neural networks have led to many successes and new capabilities in computer vision, speech recognition, language processing, game-playing, and robotics.
A concerning limitation is that even the most successful of today's AI systems suffer from brittleness.
AI systems can also absorb biases (based on gender, race, or other factors) from their training data and further magnify these biases in their subsequent decision-making.
arXiv Detail & Related papers (2020-12-11T00:50:09Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z) - Memristors -- from In-memory computing, Deep Learning Acceleration,
Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired
Computing [25.16076541420544]
Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence.
Deep learning has been successfully applied in areas such as object/pattern recognition, speech and natural language processing, self-driving vehicles, intelligent self-diagnostics tools, autonomous robots, knowledgeable personal assistants, and monitoring.
This paper reviews the case for a novel beyond-CMOS hardware technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks.
arXiv Detail & Related papers (2020-04-30T16:49:03Z)
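As a complement to the Probabilistic Artificial Intelligence entry above, here is a minimal numpy sketch of separating epistemic from aleatoric uncertainty using a bootstrap ensemble of linear regressors. The ensemble size, model class, and noise level are illustrative assumptions, not methods taken from that paper.

```python
# Hedged sketch of the epistemic/aleatoric split: disagreement between ensemble
# members approximates epistemic uncertainty (it shrinks with more data), while
# each member's residual variance estimates the irreducible aleatoric noise.
import numpy as np

rng = np.random.default_rng(1)

def fit_ensemble(x, y, n_members=20):
    """Fit each member on a bootstrap resample of (x, y) with a 1-D linear model."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(x), len(x))            # bootstrap resample
        X = np.stack([np.ones_like(x[idx]), x[idx]], axis=1)
        w, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
        noise_var = np.var(y[idx] - X @ w)               # member's aleatoric (noise) estimate
        members.append((w, noise_var))
    return members

def predict(members, x_query):
    """Return the mean prediction plus epistemic and aleatoric uncertainty at x_query."""
    X = np.stack([np.ones_like(x_query), x_query], axis=1)
    means = np.stack([X @ w for w, _ in members])        # shape: (members, queries)
    epistemic = means.var(axis=0)                        # disagreement between members
    aleatoric = np.mean([v for _, v in members])         # estimated irreducible noise
    return means.mean(axis=0), epistemic, aleatoric

# Noise std is 0.5, so the true aleatoric variance is 0.25; epistemic uncertainty
# should shrink as the training set grows, while the aleatoric estimate should not.
for n in (10, 1000):
    x = rng.uniform(-1.0, 1.0, n)
    y = 2.0 * x + rng.normal(scale=0.5, size=n)
    _, epi, ale = predict(fit_ensemble(x, y), np.array([0.0]))
    print(f"n={n:5d}  epistemic={epi[0]:.4f}  aleatoric={ale:.4f}")
```

Running this, the epistemic term collapses by roughly the factor of additional data when moving from 10 to 1000 training points, while the aleatoric estimate stays near the true noise variance of 0.25, illustrating why only the epistemic part is reducible by collecting more data.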