Duality Principle and Biologically Plausible Learning: Connecting the
Representer Theorem and Hebbian Learning
- URL: http://arxiv.org/abs/2309.16687v1
- Date: Wed, 2 Aug 2023 20:21:18 GMT
- Title: Duality Principle and Biologically Plausible Learning: Connecting the
Representer Theorem and Hebbian Learning
- Authors: Yanis Bahroun, Dmitri B. Chklovskii, Anirvan M. Sengupta
- Abstract summary: We argue that the Representer theorem offers the perfect lens to study biologically plausible learning algorithms.
Our work sheds light on the pivotal role of the Representer theorem in advancing our comprehension of neural computation.
- Score: 15.094554860151103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A normative approach called Similarity Matching was recently introduced for deriving and understanding the algorithmic basis of neural computation, with a focus on unsupervised problems. It involves deriving algorithms from computational
objectives and evaluating their compatibility with anatomical and physiological
observations. In particular, it introduces neural architectures by considering
dual alternatives instead of primal formulations of popular models such as PCA.
However, its connection to the Representer theorem remains unexplored. In this work, we use lessons from this approach to explore supervised learning algorithms and to clarify the notion of Hebbian learning. We examine regularized supervised learning and elucidate the emergence of neural architectures and of additive versus multiplicative update rules. Our focus is not on developing new algorithms but on showing that the Representer theorem offers the perfect lens through which to study biologically plausible learning algorithms. We argue that many past and current advances in the field rely
on some form of dual formulation to introduce biological plausibility. In
short, as long as a dual formulation exists, it is possible to derive
biologically plausible algorithms. Our work sheds light on the pivotal role of
the Representer theorem in advancing our comprehension of neural computation.
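To see concretely what the dual formulation buys, consider L2-regularized least squares, the textbook setting covered by the Representer theorem: the optimal weight vector is guaranteed to be a linear combination of the training inputs, so the primal problem over weights has an equivalent dual problem over per-sample coefficients and the input similarity (Gram) matrix. The NumPy sketch below illustrates this standard equivalence; it is a generic illustration of the principle the abstract invokes, not code from the paper, and all variable names are made up.

```python
# Primal vs. dual ridge regression: a minimal illustration of the
# Representer theorem (not code from the paper; names are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 50, 5, 0.1            # samples, input dim, ridge penalty
X = rng.standard_normal((n, d))   # training inputs, one row per sample
y = X @ rng.standard_normal(d)    # targets from a random linear teacher

# Primal: minimize ||X w - y||^2 + lam ||w||^2 over the weights w.
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Dual: by the Representer theorem, w* = X^T alpha, so it suffices to
# solve for per-sample coefficients using only input similarities.
K = X @ X.T                       # Gram matrix of pairwise similarities
alpha = np.linalg.solve(K + lam * np.eye(n), y)
w_dual = X.T @ alpha              # weights as a data-weighted sum

assert np.allclose(w_primal, w_dual)
```

Gradient descent on the dual objective updates each coefficient alpha_i using only the prediction error on sample i and the similarities K_ij, quantities local to the corresponding activity, which is one way to read the abstract's claim that a dual formulation is what makes Hebbian-style, biologically plausible updates possible.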
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Towards Biologically Plausible Computing: A Comprehensive Comparison [24.299920289520013]
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning.
The biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training.
In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet.
arXiv Detail & Related papers (2024-06-23T09:51:20Z) - Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured, rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Language Knowledge-Assisted Representation Learning for Skeleton-Based
Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN is a proposed graph convolutional network that uses knowledge assistance from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z) - Neurocompositional computing: From the Central Paradox of Cognition to a
new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z) - Applications of the Free Energy Principle to Machine Learning and
Neuroscience [0.0]
We explore and apply methods inspired by the free energy principle to two important areas in machine learning and neuroscience.
First, we focus on predictive coding, a neurobiologically plausible process theory derived from the free energy principle.
Secondly, we study active inference, a neurobiologically grounded account of action through variational message passing.
Finally, we investigate biologically plausible methods of credit assignment in the brain.
arXiv Detail & Related papers (2021-06-30T22:53:03Z) - A Study of the Mathematics of Deep Learning [1.14219428942199]
"Deep Learning"/"Deep Neural Nets" is a technological marvel that is now increasingly deployed at the cutting-edge of artificial intelligence tasks.
This thesis takes several steps towards building strong theoretical foundations for these new paradigms of deep-learning.
arXiv Detail & Related papers (2021-04-28T22:05:54Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z) - Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features, such as identical forward and backward weights, nonlinear derivatives in the backward pass, and one-to-one error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed, either directly or by learning additional sets of parameters with Hebbian update rules, without noticeable harm to learning performance (an illustrative sketch appears after this list).
arXiv Detail & Related papers (2020-10-02T15:21:37Z) - Equilibrium Propagation for Complete Directed Neural Networks [0.0]
The most successful learning algorithm for artificial neural networks, backpropagation, is considered biologically implausible.
We contribute to the topic of biologically plausible neuronal learning by building upon and extending the equilibrium propagation learning framework.
arXiv Detail & Related papers (2020-06-15T22:12:30Z)
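As a concrete illustration of the relaxation described in "Relaxing the Constraints on Predictive Coding Models" above: the toy sketch below (my own illustrative reading, not the authors' code) runs linear predictive-coding inference with feedback weights B kept separate from the forward weights W. Both matrices receive Hebbian updates formed from local errors and activities, and a small weight decay drives B toward W.T over training.

```python
# Toy predictive coding with untied, Hebbian-learned feedback weights
# (an illustrative sketch, not code from the cited paper).
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 8, 4
W = 0.1 * rng.standard_normal((d_in, d_hid))  # forward prediction weights
B = 0.1 * rng.standard_normal((d_hid, d_in))  # separate feedback weights

def infer(x, n_steps=30, lr_h=0.1):
    """Inference dynamics: nudge hidden activity h with fed-back errors."""
    h = np.zeros(d_hid)
    for _ in range(n_steps):
        e = x - W @ h            # prediction error at the input layer
        h += lr_h * (B @ e)      # errors fed back through B, not W.T
    return h, x - W @ h

lr, decay = 0.05, 0.01
for _ in range(500):
    x = rng.standard_normal(d_in)
    h, e = infer(x)
    # Both updates are Hebbian (products of a local error and a local
    # activity); they are transposes of each other, so with weight decay
    # the feedback matrix B drifts toward W.T over training.
    W += lr * np.outer(e, h) - decay * W
    B += lr * np.outer(h, e) - decay * B
```

Because the two Hebbian updates are transposes of each other, the gap W - B.T shrinks geometrically under the decay term, echoing the cited finding that weight symmetry need not be imposed by hand but can emerge from local learning.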
This list is automatically generated from the titles and abstracts of the papers on this site.