Randomly Weighted Neuromodulation in Neural Networks Facilitates
Learning of Manifolds Common Across Tasks
- URL: http://arxiv.org/abs/2401.02437v1
- Date: Fri, 17 Nov 2023 15:22:59 GMT
- Title: Randomly Weighted Neuromodulation in Neural Networks Facilitates
Learning of Manifolds Common Across Tasks
- Authors: Jinyung Hong, Theodore P. Pavlic
- Abstract summary: Geometric Sensitive Hashing functions are neural network models that learn class-specific manifold geometry in supervised learning.
We show that a randomly weighted neural network with a neuromodulation system can realize such a function.
- Score: 1.9580473532948401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Geometric Sensitive Hashing functions, a family of Locality-Sensitive Hashing
functions, are neural network models that learn class-specific manifold
geometry in supervised learning. However, given a set of supervised learning
tasks, understanding the manifold geometries that can represent each task and
the kinds of relationships between the tasks based on them has received little
attention. We explore a formalization of this question by considering a
generative process where each task is associated with a high-dimensional
manifold, which can be done in brain-like models with neuromodulatory systems.
Following this formulation, we define Task-specific Geometric Sensitive
Hashing (T-GSH) and show that a randomly weighted neural network with a
neuromodulation system can realize this function.
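To make the claim concrete, here is a minimal, hypothetical sketch (not the authors' T-GSH construction): a frozen, randomly weighted projection provides classic random-hyperplane locality-sensitive hash features, and a task-conditioned neuromodulatory mask gates those features so that each task induces its own binary code. The layer sizes, the 20% gating sparsity, and the helper name t_gsh_code are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID, N_TASKS = 64, 512, 5            # illustrative sizes (assumed)

# Frozen, randomly weighted projection (never trained), as in classic
# random-hyperplane locality-sensitive hashing.
W = rng.standard_normal((D_HID, D_IN)) / np.sqrt(D_IN)
b = 0.1 * rng.standard_normal(D_HID)

# Task-conditioned neuromodulatory masks: each task gates a different
# sparse subset of the random features (20% per task, chosen arbitrarily here).
task_mask = rng.random((N_TASKS, D_HID)) < 0.2

def t_gsh_code(x, task_id):
    """Binary code for input x under one task's gating of the random features."""
    h = np.maximum(W @ x + b, 0.0)            # frozen random features (ReLU)
    h = np.where(task_mask[task_id], h, 0.0)  # neuromodulatory gating
    return (h > 0).astype(np.int8)            # task-specific binary code

x = rng.standard_normal(D_IN)
print(t_gsh_code(x, 0).sum(), t_gsh_code(x, 3).sum())   # same input, different task codes
```

Because only the gating mask differs across tasks, the random weights stay fixed and shared, which matches the flavor of the abstract's claim that a randomly weighted network equipped with a neuromodulation system suffices; the paper's formal construction and its modulatory system differ from this toy example.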
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Geometry of Polynomial Neural Networks [3.498371632913735]
We study the expressivity and learning process for polynomial neural networks (PNNs) with monomial activation functions.
These theoretical results are accompanied by experiments.
arXiv Detail & Related papers (2024-02-01T19:06:06Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than recovering a single known algorithm: even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Efficient, probabilistic analysis of combinatorial neural codes [0.0]
Neural networks encode inputs in the form of combinations of individual neurons' activities.
These neural codes present a computational challenge due to their high dimensionality and often large volumes of data.
We take methods previously applied only to small examples and apply them to large neural codes generated by experiments.
arXiv Detail & Related papers (2022-10-19T11:58:26Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Neural population geometry: An approach for understanding biological and artificial neural networks [3.4809730725241605]
We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks.
Neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks.
arXiv Detail & Related papers (2021-04-14T18:10:34Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project has explored rules for growing the connections between the neurons in Spiking Neural Networks as a learning mechanism.
Results in a simulation environment showed that for a given set of parameters it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- Banach Space Representer Theorems for Neural Networks and Ridge Splines [17.12783792226575]
We develop a variational framework to understand the properties of the functions learned by neural networks fit to data.
We derive a representer theorem showing that finite-width, single-hidden layer neural networks are solutions to inverse problems.
arXiv Detail & Related papers (2020-06-10T02:57:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.