Feedback-Gated Rectified Linear Units
- URL: http://arxiv.org/abs/2301.02610v1
- Date: Fri, 6 Jan 2023 17:14:11 GMT
- Title: Feedback-Gated Rectified Linear Units
- Authors: Marco Kemmerling
- Abstract summary: A biologically inspired feedback mechanism which gates rectified linear units is proposed.
On the MNIST dataset, autoencoders with feedback show faster convergence, better performance, and more robustness to noise compared to their counterparts without feedback.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feedback connections play a prominent role in the human brain but have not
received much attention in artificial neural network research. Here, a
biologically inspired feedback mechanism which gates rectified linear units is
proposed. On the MNIST dataset, autoencoders with feedback show faster
convergence, better performance, and more robustness to noise compared to their
counterparts without feedback. Some benefits, although less pronounced and less
consistent, can be observed when networks with feedback are applied on the
CIFAR-10 dataset.
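The abstract describes the mechanism only at a high level: a top-down feedback signal gates the output of rectified linear units. A minimal sketch of one plausible form of such gating, where each unit's ReLU output is multiplied by a sigmoid-squashed feedback value (the gating function and signal shapes here are illustrative assumptions, not the paper's exact formulation):

```python
import math

def sigmoid(v):
    # Squash a raw feedback value into a gate in (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def feedback_gated_relu(pre_activations, feedback):
    # Each unit's feed-forward ReLU output is scaled by a gate
    # derived from a top-down feedback signal of the same width.
    # Assumed multiplicative gating; the paper's exact rule may differ.
    return [max(0.0, x) * sigmoid(f)
            for x, f in zip(pre_activations, feedback)]

x = [1.0, -0.3, 0.8, 2.0]       # feed-forward pre-activations
fb = [0.5, -1.2, 2.0, 0.0]      # hypothetical feedback from a higher layer
out = feedback_gated_relu(x, fb)
print([round(v, 3) for v in out])  # → [0.622, 0.0, 0.705, 1.0]
```

Note that the gate can only attenuate, never flip the sign of, a unit's activation, so the rectifying behaviour of the ReLU is preserved under feedback.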
Related papers
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust
Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Improving Out-of-Distribution Generalization of Neural Rerankers with
Contextualized Late Interaction [52.63663547523033]
Late interaction, the simplest form of multi-vector retrieval, also helps neural rerankers that use only the [CLS] vector to compute the similarity score.
We show that the finding is consistent across different model sizes and first-stage retrievers of diverse natures.
arXiv Detail & Related papers (2023-02-13T18:42:17Z) - SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z) - Multi-Agent Feedback Enabled Neural Networks for Intelligent
Communications [28.723523146324002]
In this paper, a novel multi-agent feedback enabled neural network (MAFENN) framework is proposed.
The MAFENN framework is theoretically formulated into a three-player Feedback Stackelberg game, and the game is proved to converge to the Feedback Stackelberg equilibrium.
To verify the MAFENN framework's feasibility in wireless communications, a multi-agent MAFENN based equalizer (MAFENN-E) is developed.
arXiv Detail & Related papers (2022-05-22T05:28:43Z) - Visual Attention Emerges from Recurrent Sparse Reconstruction [82.78753751860603]
We present a new attention formulation built on two prominent features of the human visual attention mechanism: recurrency and sparsity.
We show that self-attention is a special case of VARS with a single-step optimization and no sparsity constraint.
VARS can be readily used as a replacement for self-attention in popular vision transformers, consistently improving their robustness across various benchmarks.
arXiv Detail & Related papers (2022-04-23T00:35:02Z) - Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z) - SalFBNet: Learning Pseudo-Saliency Distribution via Feedback
Convolutional Networks [8.195696498474579]
We propose a feedback-recursive convolutional framework (SalFBNet) for saliency detection.
We create a large-scale Pseudo-Saliency dataset to alleviate the problem of data deficiency in saliency detection.
arXiv Detail & Related papers (2021-12-07T14:39:45Z) - On the role of feedback in visual processing: a predictive coding
perspective [0.6193838300896449]
We consider deep convolutional networks (CNNs) as models of feed-forward visual processing and implement Predictive Coding (PC) dynamics.
We find that the network increasingly relies on top-down predictions as the noise level increases.
In addition, the accuracy of the network implementing PC dynamics significantly increases over time-steps, compared to its equivalent forward network.
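The predictive-coding dynamics summarized above amount to an iterative inference loop in which top-down predictions are refined over time-steps. A minimal sketch under a deliberately simplified assumption of a linear top-down generative model (the update rule and names are illustrative, not the paper's implementation):

```python
def pc_dynamics(x, W, r, lr=0.1, steps=50):
    # Iteratively update a latent representation r so that the
    # top-down prediction W @ r better matches the input x.
    n, m = len(x), len(r)
    for _ in range(steps):
        pred = [sum(W[i][j] * r[j] for j in range(m)) for i in range(n)]
        err = [x[i] - pred[i] for i in range(n)]   # prediction error
        # Gradient step on squared error w.r.t. r:  r += lr * W^T err
        for j in range(m):
            r[j] += lr * sum(W[i][j] * err[i] for i in range(n))
    return r

# Toy example: identity generative model, so r should converge to x.
x = [1.0, -0.5]
W = [[1.0, 0.0], [0.0, 1.0]]
r = pc_dynamics(x, W, r=[0.0, 0.0])
print([round(v, 2) for v in r])  # → [1.0, -0.5]
```

The point mirrored by the summary above: error shrinks monotonically over time-steps, so running the dynamics longer (or under noisier inputs, leaning more on the top-down prediction) improves the estimate.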
arXiv Detail & Related papers (2021-06-08T10:07:23Z) - Recurrent Feedback Improves Recognition of Partially Occluded Objects [1.452875650827562]
We investigate whether and how artificial neural networks also benefit from recurrence.
We find that classification accuracy is significantly higher for recurrent models when compared to feedforward models of matched parametric complexity.
arXiv Detail & Related papers (2021-04-21T16:18:34Z) - PoCoNet: Better Speech Enhancement with Frequency-Positional Embeddings,
Semi-Supervised Conversational Data, and Biased Loss [26.851416177670096]
PoCoNet is a convolutional neural network that, with the use of frequency-positional embeddings, is able to more efficiently build frequency-dependent features in the early layers.
A semi-supervised method helps increase the amount of conversational training data by pre-enhancing noisy datasets.
A new loss function biased towards preserving speech quality helps the optimization better match human perceptual opinions on speech quality.
arXiv Detail & Related papers (2020-08-11T01:24:45Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this recurrent generative feedback design on convolutional neural networks (CNNs), yielding the CNN-F model.
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.