Neural Function Modules with Sparse Arguments: A Dynamic Approach to
Integrating Information across Layers
- URL: http://arxiv.org/abs/2010.08012v1
- Date: Thu, 15 Oct 2020 20:43:17 GMT
- Title: Neural Function Modules with Sparse Arguments: A Dynamic Approach to
Integrating Information across Layers
- Authors: Alex Lamb, Anirudh Goyal, Agnieszka Słowik, Michael Mozer, Philippe
Beaudoin, Yoshua Bengio
- Abstract summary: Neural Function Modules (NFM) aim to bring the structural benefits of functions with sparse arguments, familiar from programming languages, into deep learning.
Most prior work combining top-down and bottom-up feedback in feed-forward networks is limited to classification problems.
The key contribution of this work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm.
- Score: 84.57980167400513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feed-forward neural networks consist of a sequence of layers, in which each
layer performs some processing on the information from the previous layer. A
downside to this approach is that each layer (or module, as multiple modules
can operate in parallel) is tasked with processing the entire hidden state,
rather than a particular part of the state which is most relevant for that
module. Methods which only operate on a small number of input variables are an
essential part of most programming languages, and they allow for improved
modularity and code reusability. Our proposed method, Neural Function Modules
(NFM), aims to introduce the same structural capability into deep learning.
Most of the work in the context of feed-forward networks combining top-down and
bottom-up feedback is limited to classification problems. The key contribution
of our work is to combine attention, sparsity, and top-down and bottom-up
feedback in a flexible algorithm which, as we show, improves results in standard
classification, out-of-domain generalization, generative modeling, and learning
representations in the context of reinforcement learning.
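The paper's full algorithm is not reproduced here, but its core idea, a module that attends over earlier layers' states and keeps only a sparse (top-k) subset of them as its "arguments", can be sketched as follows. All names, shapes, and the scoring rule are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def sparse_attention_read(query, layer_states, k=2):
    """Select the top-k most relevant earlier-layer states (the module's
    sparse 'arguments') and return their attention-weighted combination."""
    # Relevance score of each stored state with respect to the query.
    scores = np.array([s @ query for s in layer_states])
    # Keep only the k highest-scoring states; all others are ignored.
    topk = np.argsort(scores)[-k:]
    weights = softmax(scores[topk])
    selected = np.stack([layer_states[i] for i in topk])
    return weights @ selected  # weighted sum over the k selected states

# Example: a module reads from three earlier layers' hidden states.
rng = np.random.default_rng(0)
states = [rng.standard_normal(4) for _ in range(3)]
query = rng.standard_normal(4)
out = sparse_attention_read(query, states, k=2)
print(out.shape)  # (4,)
```

The hard top-k selection here is one plausible way to enforce sparsity; the actual NFM formulation may differ (e.g. soft or learned sparsity).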
Related papers
- Modular Neural Network Approaches for Surgical Image Recognition [0.0]
We introduce and evaluate different architectures of modular learning for Dorsal Capsulo-Scapholunate Septum (DCSS) instability classification.
Our experiments have shown that modular learning improves performance compared to non-modular systems.
In the second part, we present our approach for data labeling and segmentation with self-training applied on shoulder arthroscopy images.
arXiv Detail & Related papers (2023-07-17T22:28:16Z)
- Neural Sculpting: Uncovering hierarchically modular task structure in neural networks through pruning and network analysis [8.080026425139708]
We show that hierarchically modular neural networks offer benefits such as learning efficiency, generalization, multi-task learning, and transfer.
We propose an approach based on iterative unit and edge pruning (during training), combined with network analysis for module detection and hierarchy inference.
arXiv Detail & Related papers (2023-05-28T15:12:32Z)
- Modular Deep Learning [120.36599591042908]
Transfer learning has recently become the dominant paradigm of machine learning.
It remains unclear how to develop models that specialise towards multiple tasks without incurring negative interference.
Modular deep learning has emerged as a promising solution to these challenges.
arXiv Detail & Related papers (2023-02-22T18:11:25Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks [10.0444013205203]
Understanding if and how NNs are modular could provide insights into how to improve them.
Current inspection methods, however, fail to link modules to their functionality.
arXiv Detail & Related papers (2020-10-05T15:04:11Z)
- Joint Self-Attention and Scale-Aggregation for Self-Calibrated Deraining Network [13.628218953897946]
In this paper, we propose an effective algorithm, called JDNet, to solve the single image deraining problem.
By carefully designing Scale-Aggregation and Self-Attention modules with Self-Calibrated convolution, the proposed model achieves better deraining results.
arXiv Detail & Related papers (2020-08-06T17:04:34Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
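The multi-scale idea above, an RNN whose hidden state is split into modules updated at different timescales, might be sketched as follows. The update schedule (module i updates every 2^i steps), the dimensions, and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_module(h_dim, x_dim):
    # One module: its own hidden slice and recurrent/input weights.
    return {"h": np.zeros(h_dim),
            "W": rng.standard_normal((h_dim, h_dim)) * 0.1,
            "U": rng.standard_normal((h_dim, x_dim)) * 0.1}

def multiscale_rnn_step(modules, x, t):
    """One RNN step where module i updates only every 2**i steps,
    so later modules evolve on slower timescales."""
    for i, m in enumerate(modules):
        if t % (2 ** i) == 0:
            m["h"] = np.tanh(m["W"] @ m["h"] + m["U"] @ x)
    # The full hidden state is the concatenation of all module states.
    return np.concatenate([m["h"] for m in modules])

modules = [make_module(3, 2) for _ in range(2)]
for t in range(4):
    out = multiscale_rnn_step(modules, rng.standard_normal(2), t)
print(out.shape)  # (6,)
```

Incremental training, adding new modules over time to capture longer dependencies, would amount to appending to `modules` during training; that part is omitted here.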
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Evolving Normalization-Activation Layers [100.82879448303805]
We develop efficient rejection protocols to quickly filter out candidate layers that do not work well.
Our method leads to the discovery of EvoNorms, a set of new normalization-activation layers with novel, and sometimes surprising structures.
Our experiments show that EvoNorms work well on image classification models including ResNets, MobileNets and EfficientNets.
arXiv Detail & Related papers (2020-04-06T19:52:48Z)
- Global Context-Aware Progressive Aggregation Network for Salient Object Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.