Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture
using Feedback-Modulated Delay Loops
- URL: http://arxiv.org/abs/2011.10115v2
- Date: Sun, 6 Jun 2021 13:37:50 GMT
- Title: Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture
using Feedback-Modulated Delay Loops
- Authors: Florian Stelzer (1, 2 and 4), André Röhm (3), Raul Vicente (4),
Ingo Fischer (3), Serhiy Yanchuk (1) ((1) Institute of Mathematics,
Technische Universität Berlin, Germany, (2) Department of Mathematics,
Humboldt-Universität zu Berlin, Germany, (3) Instituto de Física
Interdisciplinar y Sistemas Complejos, IFISC (UIB-CSIC), Spain, (4) Institute
of Computer Science, University of Tartu, Estonia)
- Abstract summary: We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops.
This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals.
The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are among the most widely applied machine learning tools
showing outstanding performance in a broad range of tasks. We present a method
for folding a deep neural network of arbitrary size into a single neuron with
multiple time-delayed feedback loops. This single-neuron deep neural network
comprises only a single nonlinearity and appropriately adjusted modulations of
the feedback signals. The network states emerge in time as a temporal unfolding
of the neuron's dynamics. By adjusting the feedback modulation within the
loops, we adapt the network's connection weights. These connection weights are
determined via a back-propagation algorithm, where both the delay-induced and
local network connections must be taken into account. Our approach can fully
represent standard Deep Neural Networks (DNNs), encompasses sparse DNNs, and
extends the DNN concept toward dynamical systems implementations. The new
method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising
performance in a set of benchmark tasks.
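For intuition, the folding can be sketched in a few lines of Python. The sketch below is a discrete-time toy, not the authors' continuous-time delay system: the function name `fit_dnn_forward`, the dense-layer setup, and all sizes are illustrative assumptions. One nonlinearity is applied over and over along a time axis, and each virtual node reads earlier states through delay loops whose modulated gains play the role of connection weights.

```python
import numpy as np

def fit_dnn_forward(u, weights, f=np.tanh):
    """Toy folded-in-time forward pass (illustrative, not the paper's equations).

    u       : input vector of length N
    weights : list of L dense N x N weight matrices
    f       : the single shared nonlinearity
    """
    N, L = len(u), len(weights)
    # one long signal on the time axis; node i of layer l occupies slot l*N + i
    x = np.concatenate([np.asarray(u, float), np.zeros(L * N)])
    for l in range(1, L + 1):
        W = weights[l - 1]
        for j in range(N):
            t = l * N + j
            # previous-layer node i is reached through a feedback loop of
            # delay d = N + j - i; its time-varying gain encodes W[j, i]
            a = sum(W[j, i] * x[t - (N + j - i)] for i in range(N))
            x[t] = f(a)  # one neuron, applied repeatedly in time
    return x[-N:]  # the states of the final layer

rng = np.random.default_rng(0)
out = fit_dnn_forward(rng.normal(size=4),
                      [rng.normal(size=(4, 4)) for _ in range(3)])
print(out)
```

Reading the delayed state x[t - (N + j - i)] picks out exactly node i of the previous layer, which is how a fully connected layer emerges from a fixed set of delays plus a time-varying modulation of each loop.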
Related papers
- Accelerating SNN Training with Stochastic Parallelizable Spiking Neurons [1.7056768055368383]
Spiking neural networks (SNNs) are able to learn features while using less energy, especially on neuromorphic hardware.
The most widely used spiking neuron model in deep learning is the leaky integrate-and-fire (LIF) neuron.
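For reference, a minimal discrete-time LIF update (all parameter values below are arbitrary) can be sketched as:

```python
def lif_step(mem, inp, beta=0.9, threshold=1.0):
    """One discrete-time step of a leaky integrate-and-fire neuron."""
    mem = beta * mem + inp            # leaky integration of input current
    spike = float(mem >= threshold)   # fire when the threshold is crossed
    mem = mem - spike * threshold     # soft reset after a spike
    return spike, mem

# drive the neuron with a constant current and collect its spike train
mem, spikes = 0.0, []
for _ in range(20):
    s, mem = lif_step(mem, inp=0.3)
    spikes.append(s)
print(spikes)
```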
arXiv Detail & Related papers (2023-06-22T04:25:27Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
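As a hedged illustration of the time-unrolled simulation loop such a package runs (this uses the standard snnTorch API rather than the IPU-specific release; the layer sizes, beta, and step count are arbitrary):

```python
import torch
import snntorch as snn

# a tiny spiking layer unrolled over time
fc = torch.nn.Linear(8, 4)
lif = snn.Leaky(beta=0.9)          # leaky integrate-and-fire activation

mem = lif.init_leaky()             # initial membrane state
spk_rec = []
for step in range(25):             # unroll over 25 time steps
    cur = fc(torch.rand(1, 8))     # random input current at each step
    spk, mem = lif(cur, mem)       # spike output + updated membrane
    spk_rec.append(spk)
print(torch.stack(spk_rec).shape)  # torch.Size([25, 1, 4])
```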
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Deep Convolutional Learning-Aided Detector for Generalized Frequency Division Multiplexing with Index Modulation [0.0]
The proposed method first pre-processes the received signal with a zero-forcing (ZF) detector and then applies a neural network consisting of a convolutional neural network (CNN) followed by a fully-connected neural network (FCNN).
The FCNN part uses only two fully-connected layers, which can be adapted to yield a trade-off between complexity and bit error rate (BER) performance.
It has been demonstrated that the proposed deep convolutional neural network-based detection and demodulation scheme provides better BER performance than the ZF detector, with a reasonable increase in complexity.
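A hedged PyTorch sketch of the described pipeline: the ZF-equalized block (real and imaginary parts as two channels) passes through a small CNN and then exactly two fully-connected layers. The class name and all dimensions are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvDetector(nn.Module):
    """Illustrative CNN + two-FC-layer detector over ZF-equalized input."""
    def __init__(self, n_sub=64, n_out=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        # only two fully-connected layers, trading complexity against BER
        self.fc = nn.Sequential(
            nn.Linear(8 * n_sub, 256), nn.ReLU(),
            nn.Linear(256, n_out), nn.Sigmoid(),  # per-bit probabilities
        )

    def forward(self, y_zf):                 # y_zf: (batch, 2, n_sub)
        return self.fc(self.cnn(y_zf).flatten(1))

det = ConvDetector()
print(det(torch.randn(4, 2, 64)).shape)      # torch.Size([4, 128])
```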
arXiv Detail & Related papers (2022-02-06T22:18:42Z)
- Neural Piecewise-Constant Delay Differential Equations [17.55759866368141]
In this article, we introduce a new class of continuous-depth neural network, called Neural Piecewise-Constant Delay Differential Equations (PCDDEs).
We show that Neural PCDDEs outperform several existing continuous-depth neural frameworks on one-dimensional piecewise-constant delay population dynamics and on real-world datasets.
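A piecewise-constant delay means the right-hand side sees the state at the last integer time, x(⌊t⌋). A toy Euler integrator makes this concrete; in a Neural PCDDE the hand-written `f` below would be a trained network, and all values here are illustrative.

```python
import numpy as np

def integrate_pcdde(f, x0, t_end=5.0, dt=0.01):
    """Euler integration of dx/dt = f(x(t), x(floor(t)))."""
    spu = int(round(1.0 / dt))            # steps per unit time
    n = int(round(t_end * spu))
    xs = [x0]
    for k in range(n):
        x_floor = xs[(k // spu) * spu]    # state at time floor(t), t = k*dt
        xs.append(xs[-1] + dt * f(xs[-1], x_floor))
    return np.array(xs)

xs = integrate_pcdde(lambda x, xd: -x + 0.5 * np.tanh(xd), x0=1.0)
print(xs[-1])
```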
arXiv Detail & Related papers (2022-01-04T03:44:15Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
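A generic message-passing step of the kind such topology-independent formalisms build on (the update rule below is illustrative, not the DMPNN's): each node aggregates transformed neighbour states, so the same parameters work for any number of nodes and any wiring.

```python
import numpy as np

def message_passing_step(h, adj, W_msg, W_upd):
    """h: (n, d) node states; adj: (n, n) 0/1 adjacency matrix."""
    msgs = adj @ np.tanh(h @ W_msg)          # sum of transformed neighbours
    return np.tanh(np.hstack([h, msgs]) @ W_upd)

rng = np.random.default_rng(0)
n, d = 5, 4                                  # works unchanged for any n
h = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.4).astype(float)
h = message_passing_step(h, adj, rng.normal(size=(d, d)),
                         rng.normal(size=(2 * d, d)))
print(h.shape)                               # (5, 4)
```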
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions [4.932130498861987]
We propose a new type of neural network, Kronecker neural networks (KNNs), which form a general framework for neural networks with adaptive activation functions.
Under suitable conditions, KNNs induce a faster decay of the loss than standard feed-forward networks.
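A hedged sketch of the general idea of adaptive activations that KNNs provide a framework for: a trainable mixture of base nonlinearities with trainable slopes. The specific mixture below is illustrative, not the paper's construction.

```python
import torch
import torch.nn as nn

class AdaptiveActivation(nn.Module):
    """Trainable mixture of base nonlinearities (illustrative form)."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2) / 2)   # mixture weights
        self.a = nn.Parameter(torch.ones(2))       # per-branch slopes

    def forward(self, x):
        return (self.w[0] * torch.tanh(self.a[0] * x)
                + self.w[1] * torch.sin(self.a[1] * x))

net = nn.Sequential(nn.Linear(3, 16), AdaptiveActivation(), nn.Linear(16, 1))
print(net(torch.randn(5, 3)).shape)             # torch.Size([5, 1])
```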
arXiv Detail & Related papers (2021-05-20T04:54:57Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
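For context, the simplest rate-based ANN-to-SNN conversion rescales each trained layer so that calibration-data activations fit the spiking neuron's dynamic range; the paper's progressive tandem scheme goes beyond this, so the sketch below is only a generic baseline with made-up sizes.

```python
import numpy as np

def normalize_weights(W, b, calib_inputs):
    """Rescale a trained ReLU layer by its max pre-activation on calibration data."""
    pre_act = calib_inputs @ W.T + b
    lam = np.abs(pre_act).max()        # layer-wise scale factor
    return W / lam, b / lam

rng = np.random.default_rng(1)
W, b = rng.normal(size=(4, 8)), rng.normal(size=4)
W_snn, b_snn = normalize_weights(W, b, rng.normal(size=(100, 8)))
# pre-activations on fresh data now stay roughly within [-1, 1]
print(np.abs(rng.normal(size=(10, 8)) @ W_snn.T + b_snn).max())
```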
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.