Learning without gradient descent encoded by the dynamics of a
neurobiological model
- URL: http://arxiv.org/abs/2103.08878v1
- Date: Tue, 16 Mar 2021 07:03:04 GMT
- Title: Learning without gradient descent encoded by the dynamics of a
neurobiological model
- Authors: Vivek Kurien George, Vikash Morar, Weiwei Yang, Jonathan Larson, Bryan
Tower, Shweti Mahajan, Arkin Gupta, Christopher White, Gabriel A. Silva
- Abstract summary: We introduce a conceptual approach to machine learning that takes advantage of a neurobiologically derived model of dynamic signaling.
We show that MNIST images can be uniquely encoded and classified by the dynamics of geometric networks with nearly state-of-the-art accuracy in an unsupervised way.
- Score: 7.952666139462592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The success of state-of-the-art machine learning is essentially all based on
different variations of gradient descent algorithms that minimize some version
of a cost or loss function. A fundamental limitation, however, is the need to
train these systems in either supervised or unsupervised ways by exposing them
to typically large numbers of training examples. Here, we introduce a
fundamentally novel conceptual approach to machine learning that takes
advantage of a neurobiologically derived model of dynamic signaling,
constrained by the geometric structure of a network. We show that MNIST images
can be uniquely encoded and classified by the dynamics of geometric networks
with nearly state-of-the-art accuracy in an unsupervised way, and without the
need for any training.
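A minimal sketch of the flavor of this approach, with a small digits dataset standing in for MNIST; the distance-damped connectivity, leaky-tanh dynamics, and k-means clustering below are illustrative assumptions, not the authors' model:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

rng = np.random.default_rng(0)
digits = load_digits()                 # small 8x8 stand-in for MNIST
X = digits.data / 16.0                 # pixel intensities in [0, 1]

# Fixed "geometric" connectivity: units on an 8x8 grid, with weights
# damped by Euclidean distance (an assumption for illustration).
coords = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
W = np.exp(-dist) * rng.standard_normal((64, 64)) * 0.5
np.fill_diagonal(W, 0.0)

def dynamical_signature(x, steps=20):
    # Run leaky recurrent dynamics driven by the image; the trajectory
    # itself is the code for the image -- no weights are ever trained.
    s = np.zeros(64)
    traj = []
    for _ in range(steps):
        s = 0.7 * s + np.tanh(W @ s + x)
        traj.append(s.copy())
    return np.concatenate(traj)

signatures = np.stack([dynamical_signature(x) for x in X])
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(signatures)
# cluster identities can then be matched to digit classes for scoring
```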
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
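A rough sketch of the kind of preprocessing this implies, assuming weights are flattened and chunked into fixed-size tokens (the chunking scheme is an assumption, not SANE's):

```python
import numpy as np

def weights_to_tokens(weight_arrays, token_size=128):
    # flatten all weights, zero-pad, and chunk into equal-size tokens
    flat = np.concatenate([w.ravel() for w in weight_arrays])
    flat = np.pad(flat, (0, (-len(flat)) % token_size))
    return flat.reshape(-1, token_size)    # (num_tokens, token_size)

# a toy MLP's weights become a short sequence for a downstream model
tokens = weights_to_tokens([np.random.randn(784, 32), np.random.randn(32, 10)])
```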
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
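A loose illustration of the underlying representation, assuming nodes are neurons and edges carry the connecting weights (the paper's actual graph features are richer):

```python
import numpy as np

def mlp_to_graph(weight_mats):
    # node i is a neuron; edge (i, j) carries the connecting weight
    sizes = [weight_mats[0].shape[0]] + [w.shape[1] for w in weight_mats]
    offsets = np.cumsum([0] + sizes[:-1])
    edges = []
    for layer, w in enumerate(weight_mats):
        for i in range(w.shape[0]):
            for j in range(w.shape[1]):
                edges.append((offsets[layer] + i, offsets[layer + 1] + j, w[i, j]))
    return edges

edges = mlp_to_graph([np.random.randn(4, 3), np.random.randn(3, 2)])
```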
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
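A hedged sketch of the general reduction the solver is inspired by, not NeuRLP itself: a discretized linear ODE cast as a linear program with slack variables, here via scipy:

```python
import numpy as np
from scipy.optimize import linprog

a, h, T = -1.0, 0.1, 20       # ODE y' = a*y, step size, number of steps
n = T + 1                     # variables y_0..y_T, then T slack variables
c = np.concatenate([np.zeros(n), np.ones(T)])   # minimize total slack

A_ub, b_ub = [], []
for t in range(T):            # encode |y_{t+1} - (1 + h*a)*y_t| <= s_t
    for sign in (1.0, -1.0):
        row = np.zeros(n + T)
        row[t + 1] = sign
        row[t] = -sign * (1.0 + h * a)
        row[n + t] = -1.0
        A_ub.append(row)
        b_ub.append(0.0)

A_eq = np.zeros((1, n + T))
A_eq[0, 0] = 1.0              # initial condition y_0 = 1
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * n + [(0.0, None)] * T)
y = res.x[:n]                 # approximates exp(a * t) on the time grid
```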
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Training morphological neural networks with gradient descent: some theoretical insights [0.40792653193642503]
We investigate the potential and limitations of differentiation-based approaches and back-propagation applied to morphological networks.
We provide insights and first theoretical guidelines, in particular regarding learning rates.
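For orientation, a minimal morphological (max-plus) layer; the specific parameterization is an assumption:

```python
import numpy as np

def dilation_layer(x, W):
    # max-plus arithmetic replaces the usual multiply-accumulate:
    # output_j = max_i (x_i + W[i, j])
    return np.max(x[:, None] + W, axis=0)

x = np.array([0.2, -1.0, 0.5])
y = dilation_layer(x, np.zeros((3, 2)))
# Under back-propagation only the arg-max input of each output receives
# gradient, which is one reason learning-rate choice is delicate here.
```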
arXiv Detail & Related papers (2024-02-05T12:11:15Z)
- Backpropagation-free Training of Deep Physical Neural Networks [0.0]
We propose a simple deep neural network architecture augmented by a biologically plausible learning algorithm, referred to as "model-free forward-forward training".
We show that our method outperforms state-of-the-art hardware-aware training methods by improving training speed, decreasing digital computations, and reducing power consumption in physical systems.
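A speculative sketch of a forward-forward-style local update (following Hinton's forward-forward idea; the paper's model-free variant for physical hardware differs in its details):

```python
import numpy as np

def local_update(W, x, positive, lr=0.01, theta=2.0):
    h = np.maximum(W.T @ x, 0.0)                        # forward pass only
    p = 1.0 / (1.0 + np.exp(-(np.sum(h**2) - theta)))   # P("positive") from goodness
    grad_h = (p - float(positive)) * 2.0 * h            # layer-local BCE gradient
    return W - lr * np.outer(x, grad_h)                 # no backward pass through layers

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 16)) * 0.1
W = local_update(W, rng.standard_normal(10), positive=True)   # real sample
W = local_update(W, rng.standard_normal(10), positive=False)  # corrupted sample
```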
arXiv Detail & Related papers (2023-04-20T14:02:49Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
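A toy rendering of the contrastive intuition, under the assumption that a learned scalar g(state) should stay constant along a trajectory and differ across trajectories; this is not the ConCerNet architecture:

```python
import numpy as np

def contrastive_conservation_loss(g_vals, margin_weight=0.1):
    # g_vals: (trajectories, timesteps) evaluations of a learned scalar g(state)
    within = np.mean(np.var(g_vals, axis=1))            # stay constant along time
    means = g_vals.mean(axis=1)
    between = np.mean((means[:, None] - means[None, :]) ** 2)
    return within - margin_weight * between             # separate across trajectories

loss = contrastive_conservation_loss(np.random.randn(4, 50))
```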
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Stretched and measured neural predictions of complex network dynamics [2.1024950052120417]
Data-driven approximations of differential equations are a promising alternative to traditional methods for uncovering models of dynamical systems.
Neural networks are a recently adopted machine learning tool for studying dynamics; they can be used for data-driven solution finding or for the discovery of differential equations.
We show that extending the model's generalizability beyond traditional statistical learning theory limits is feasible.
arXiv Detail & Related papers (2023-01-12T09:44:59Z)
- Revisit Geophysical Imaging in A New View of Physics-informed Generative Adversarial Learning [2.12121796606941]
Full waveform inversion (FWI) produces high-resolution subsurface models.
FWI with a least-squares objective function suffers from drawbacks such as the local-minima problem.
Recent works relying on partial differential equations and neural networks show promising performance for two-dimensional FWI.
We propose an unsupervised learning paradigm that integrates the wave equation with a discriminative network to accurately estimate physically consistent models.
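To make the ingredients concrete, a crude 1D finite-difference wave simulator with an L2 residual as a placeholder where the paper's discriminative network would score simulated data; everything here is an illustrative assumption:

```python
import numpy as np

def simulate_1d_wave(c, nt=200, dt=1e-3, dx=10.0):
    # finite-difference 1D acoustic wave field for velocity model c
    n = len(c)
    u_prev, u = np.zeros(n), np.zeros(n)
    seismogram = []
    for t in range(nt):
        lap = np.zeros(n)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
        u_next[n // 2] += np.exp(-((t * dt - 0.05) / 0.01) ** 2)  # source wavelet
        u_prev, u = u, u_next
        seismogram.append(u[5])                 # receiver trace
    return np.array(seismogram)

d_obs = simulate_1d_wave(np.full(100, 1500.0))  # "observed" data
d_sim = simulate_1d_wave(np.full(100, 1400.0))  # candidate model's data
residual = d_sim - d_obs   # a discriminator would replace this L2 comparison
```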
arXiv Detail & Related papers (2021-09-23T15:54:40Z)
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
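The SC-GRU's split convolutions are not detailed here; as a speculative baseline, a plain convolutional GRU step looks like this:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
k = lambda: rng.standard_normal((3, 3)) * 0.1    # random 3x3 kernels
Kxz, Khz, Kxr, Khr, Kxh, Khh = (k() for _ in range(6))
sig = lambda a: 1.0 / (1.0 + np.exp(-a))
conv = lambda a, K: convolve2d(a, K, mode="same")

def conv_gru_step(x, h):
    z = sig(conv(x, Kxz) + conv(h, Khz))         # update gate
    r = sig(conv(x, Kxr) + conv(h, Khr))         # reset gate
    h_tilde = np.tanh(conv(x, Kxh) + conv(r * h, Khh))
    return (1 - z) * h + z * h_tilde             # gated state update

h = np.zeros((16, 16))
for x in rng.standard_normal((5, 16, 16)):       # a short projection sequence
    h = conv_gru_step(x, h)
```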
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
- Physics-based polynomial neural networks for one-shot learning of dynamical systems from one or a few samples [0.0]
The paper describes practical results on both a simple pendulum and one of the largest X-ray sources in the world.
It is demonstrated in practice that the proposed approach allows recovering complex physics from noisy, limited, and partial observations.
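A loose illustration of one-shot fitting with polynomial features on the pendulum example (the setup is assumed, not the paper's network):

```python
import numpy as np

theta = np.linspace(-1.0, 1.0, 7)                   # only a few observed angles
accel = -9.81 * np.sin(theta)                       # pendulum angular acceleration
A = np.stack([theta, theta**3, theta**5], axis=1)   # odd polynomial features
coef, *_ = np.linalg.lstsq(A, accel, rcond=None)
# coef recovers the Taylor coefficients of -g*sin(theta): about [-9.81, 1.63, -0.08]
```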
arXiv Detail & Related papers (2020-05-24T09:27:10Z)
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
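A simplified stand-in for the phenomenon, with a smooth synthesis basis playing the role of the un-trained network (an assumption): plain gradient descent recovers a structured signal from a few random measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 40, 10
t = np.arange(n)
B = np.cos(np.pi * np.outer(t + 0.5, np.arange(k)) / n)  # smooth low-frequency atoms
B /= np.linalg.norm(B, axis=0)
x_true = B @ rng.standard_normal(k)          # structured (smooth) signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                               # few random measurements

w = np.zeros(k)
for _ in range(500):                         # gradient descent on ||A B w - y||^2
    w -= 0.1 * (A @ B).T @ (A @ B @ w - y)
x_hat = B @ w                                # smooth reconstruction close to x_true
```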
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.