Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation
- URL: http://arxiv.org/abs/2303.00914v1
- Date: Thu, 2 Mar 2023 02:18:56 GMT
- Title: Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation
- Authors: Yushun Tang, Ce Zhang, Heng Xu, Shuoshuo Chen, Jie Cheng, Luziwei
Leng, Qinghai Guo, Zhihai He
- Abstract summary: Fully test-time adaptation aims to adapt the network model based on sequential analysis of input samples during the inference stage.
We take inspiration from biologically plausible learning, where neuron responses are tuned through a local synapse-change procedure.
We design a soft Hebbian learning process that provides an effective unsupervised mechanism for online adaptation.
- Score: 22.18972584098911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully test-time adaptation aims to adapt the network model based on
sequential analysis of input samples during the inference stage to address the
cross-domain performance degradation problem of deep neural networks. We take
inspiration from biologically plausible learning, where neuron responses are
tuned through a local synapse-change procedure and activated by competitive
lateral-inhibition rules. Based on these feed-forward learning rules, we design
a soft Hebbian learning process that provides an effective unsupervised
mechanism for online adaptation. We observe that the performance
of this feed-forward Hebbian learning for fully test-time adaptation can be
significantly improved by incorporating a feedback neuro-modulation layer,
which fine-tunes the neuron responses based on external feedback generated by
error back-propagation from the top inference layers. This
leads to our proposed neuro-modulated Hebbian learning (NHL) method for fully
test-time adaptation. By combining unsupervised feed-forward soft Hebbian
learning with a learned neuro-modulator that captures feedback from external
responses, NHL effectively adapts the source model during testing.
Experimental results on benchmark datasets demonstrate that our
proposed method can significantly improve the adaptation performance of network
models and outperforms existing state-of-the-art methods.
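For intuition about the mechanism described in the abstract, the sketch below shows one way such an online adaptation step could look: a softmax over neuron drives stands in for competitive lateral inhibition (a soft winner-take-all), an Oja-style local rule plays the role of the soft Hebbian update, and a scalar gate stands in for the learned neuro-modulator. The function name, the Oja-style form, and the scalar gating are illustrative assumptions, not the paper's exact NHL equations.

```python
import numpy as np

def soft_hebbian_update(W, x, modulation=1.0, lr=1e-3, tau=0.5):
    """One online, label-free adaptation step (illustrative sketch; not the
    exact NHL rule from the paper).

    W          : (n_out, n_in) weights of the layer adapted at test time
    x          : (n_in,) activations entering the layer for one test sample
    modulation : scalar feedback gate standing in for the learned
                 neuro-modulator (assumed form; the paper derives feedback
                 from error back-propagation at the top inference layers)
    tau        : softmax temperature modelling competitive lateral inhibition
    """
    drive = W @ x                                  # feed-forward responses
    y = np.exp((drive - drive.max()) / tau)
    y /= y.sum()                                   # soft winner-take-all
    # Oja-style local rule: strengthen synapses between co-active pre- and
    # post-synaptic neurons; the decay term keeps weight vectors bounded.
    W += lr * modulation * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return y

# Toy usage: adapt a 16-unit layer on a stream of 64-dim test inputs.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 64))
for _ in range(100):
    x = rng.normal(size=64)
    soft_hebbian_update(W, x)
```

In the full NHL method, the modulation signal comes from a learned neuro-modulator layer driven by back-propagated error from the top inference layers; the scalar gate above only marks where that feedback pathway would enter the update.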
Related papers
- Adapting the Biological SSVEP Response to Artificial Neural Networks [5.4712259563296755]
This paper introduces a novel approach to neuron significance assessment inspired by frequency tagging, a technique from neuroscience.
Experiments conducted with a convolutional neural network for image classification reveal notable harmonics and intermodulations in neuron-specific responses under part-based frequency tagging.
The proposed method holds promise for applications in network pruning and model interpretability, contributing to the advancement of explainable artificial intelligence.
arXiv Detail & Related papers (2024-11-15T10:02:48Z)
- Feedback Favors the Generalization of Neural ODEs [24.342023073252395]
We present feedback neural networks, showing that a feedback loop can flexibly correct the learned latent dynamics of neural ordinary differential equations (neural ODEs).
The feedback neural network is a novel two-degree-of-freedom architecture that remains robust in unseen scenarios with no loss of accuracy on previous tasks.
arXiv Detail & Related papers (2024-10-14T08:09:45Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in Deep Feedback Control (DFC) allows learning forward and feedback connections simultaneously, using a learning rule that is fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
- Evolving Neural Selection with Adaptive Regularization [7.298440208725654]
We present a method in which the selection of neurons in deep neural networks evolves, adapting to the difficulty of prediction.
We propose the Adaptive Neural Selection (ANS) framework, which evolves to weigh neurons in a layer to form network variants.
Experimental results show that the proposed method can significantly improve the performance of commonly-used neural network architectures.
arXiv Detail & Related papers (2022-04-04T17:19:52Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train on, and evaluate calcium fluorescence signals, together with a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- Bio-plausible Unsupervised Delay Learning for Extracting Temporal Features in Spiking Neural Networks [0.548253258922555]
Plasticity of the conduction delay between neurons plays a fundamental role in learning.
Understanding the precise adjustment of synaptic delays could help us develop effective brain-inspired computational models.
arXiv Detail & Related papers (2020-11-18T16:25:32Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.