A Survey on Dynamic Neural Networks for Natural Language Processing
- URL: http://arxiv.org/abs/2202.07101v1
- Date: Tue, 15 Feb 2022 00:13:05 GMT
- Title: A Survey on Dynamic Neural Networks for Natural Language Processing
- Authors: Canwen Xu and Julian McAuley
- Abstract summary: Dynamic neural networks are capable of scaling up neural networks with sub-linear increases in computation and time.
In this survey, we summarize progress of three types of dynamic neural networks in NLP: skimming, mixture of experts, and early exit.
- Score: 13.949219077548687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effectively scaling large Transformer models is a main driver of recent
advances in natural language processing. Dynamic neural networks, as an
emerging research direction, are capable of scaling up neural networks with
sub-linear increases in computation and time by dynamically adjusting their
computational path based on the input. Dynamic neural networks could be a
promising solution to the growing parameter numbers of pretrained language
models, allowing both model pretraining with trillions of parameters and faster
inference on mobile devices. In this survey, we summarize progress of three
types of dynamic neural networks in NLP: skimming, mixture of experts, and
early exit. We also highlight current challenges in dynamic neural networks and
directions for future research.
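As a concrete illustration of the input-dependent computation the abstract describes, the sketch below shows one of the three families the survey covers, early exit: each layer is paired with a lightweight classifier, and inference stops as soon as the intermediate prediction is confident enough. This is a minimal, hypothetical sketch rather than code from the survey; the module name EarlyExitEncoder, the entropy threshold, and all layer sizes are illustrative assumptions.

```python
# Minimal early-exit sketch (illustrative only): a stack of Transformer layers,
# each followed by a small "exit head"; inference stops once prediction entropy
# falls below a threshold, so easy inputs use fewer layers than hard ones.
import torch
import torch.nn as nn


class EarlyExitEncoder(nn.Module):
    def __init__(self, num_layers=6, d_model=128, n_heads=4, num_classes=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(num_layers)
        )
        # One lightweight classifier per layer produces intermediate predictions.
        self.exits = nn.ModuleList(
            nn.Linear(d_model, num_classes) for _ in range(num_layers)
        )

    @torch.no_grad()
    def forward(self, x, entropy_threshold=0.3):
        for depth, (layer, exit_head) in enumerate(zip(self.layers, self.exits), 1):
            x = layer(x)
            logits = exit_head(x.mean(dim=1))   # pool over tokens, then classify
            probs = torch.softmax(logits, dim=-1)
            entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean()
            if entropy < entropy_threshold:     # confident enough: exit early
                return logits, depth
        return logits, depth                    # fell through: used all layers


model = EarlyExitEncoder()
tokens = torch.randn(1, 16, 128)                # (batch, seq_len, d_model)
logits, layers_used = model(tokens)
print(f"exited after {layers_used} of {len(model.layers)} layers")
```

The same input-conditional idea underlies the other two families surveyed: skimming skips or shortens computation over easy tokens, and mixture of experts routes each input to a small subset of expert sub-networks.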
Related papers
- Peer-to-Peer Learning Dynamics of Wide Neural Networks [10.179711440042123]
We provide an explicit, non-asymptotic characterization of the learning dynamics of wide neural networks trained using popular DGD algorithms.
We validate our analytical results by accurately predicting error dynamics for classification tasks.
arXiv Detail & Related papers (2024-09-23T17:57:58Z) - Toward Large-scale Spiking Neural Networks: A Comprehensive Survey and Future Directions [38.20628045367021]
Spiking neural networks (SNNs) promise energy-efficient computation with event-driven spikes.
We present a survey of existing methods for developing deep spiking neural networks, with a focus on emerging Spiking Transformers.
arXiv Detail & Related papers (2024-08-19T13:07:48Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - A Survey on Statistical Theory of Deep Learning: Approximation, Training Dynamics, and Generative Models [13.283281356356161]
We review the literature on statistical theories of neural networks from three perspectives.
Results on excess risks for neural networks are reviewed.
Papers that attempt to answer how neural networks find solutions that generalize well to unseen data are reviewed.
arXiv Detail & Related papers (2024-01-14T02:30:19Z) - Deception Detection from Linguistic and Physiological Data Streams Using Bimodal Convolutional Neural Networks [19.639533220155965]
This paper explores the application of convolutional neural networks for the purpose of multimodal deception detection.
We use a dataset built by interviewing 104 subjects about two topics, with one truthful and one falsified response from each subject about each topic.
arXiv Detail & Related papers (2023-11-18T02:44:33Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters, can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - Population-coding and Dynamic-neurons improved Spiking Actor Network for Reinforcement Learning [10.957578424267757]
Spiking Neural Network (SNN) contains a diverse population of spiking neurons, making it naturally powerful on state representation with spatial and temporal information.
We propose a Population-coding and Dynamic-neurons improved Spiking Actor Network (PDSAN) for efficient state representation from two different scales.
Our TD3-PDSAN model achieves better performance than state-of-the-art models on four OpenAI Gym benchmark tasks.
arXiv Detail & Related papers (2021-06-15T03:14:41Z)