Effectiveness of Deep Networks in NLP using BiDAF as an example architecture
- URL: http://arxiv.org/abs/2109.00074v1
- Date: Tue, 31 Aug 2021 20:50:18 GMT
- Title: Effectiveness of Deep Networks in NLP using BiDAF as an example architecture
- Authors: Soumyendu Sarkar
- Abstract summary: I explore the effectiveness of deep networks focussing on the model encoder layer of BiDAF.
I believe the next great model in NLP will fold solid language modeling, as in BERT, into a composite architecture.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question Answering with NLP has progressed through the evolution of advanced
model architectures like BERT and BiDAF and earlier word, character, and
context-based embeddings. As BERT has leapfrogged the accuracy of models, an
element of the next frontier can be the introduction of deep networks and an
effective way to train them. In this context, I explored the effectiveness of
deep networks focussing on the model encoder layer of BiDAF. BiDAF with its
heterogeneous layers provides the opportunity not only to explore the
effectiveness of deep networks but also to evaluate whether the refinements
made in lower layers are additive to the refinements made in the upper layers
of the model architecture. I believe the next great model in NLP will fold
solid language modeling, as in BERT, into a composite architecture that adds
further refinements on top of generic language modeling and has a more
extensive layered architecture. I experimented with the Bypass network,
Residual Highway network, and DenseNet architectures. In addition, I
evaluated the effectiveness of ensembling the last few layers of the network.
I also studied the difference character embeddings make when added to word
embeddings, and whether their effect is additive with that of deep networks.
My studies indicate that deep networks are indeed effective in boosting
accuracy, and that refinements in the lower layers, such as embeddings, carry
over additively to the gains made through deep networks.
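The paper does not include code, so the following is only a minimal sketch, assuming a PyTorch implementation, of one way a deepened BiDAF model encoder with a highway-style bypass and ensembling of the last few blocks could be wired. The class names, hidden size, block count, and averaging ensemble are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch (not the author's code) of a deepened BiDAF-style model
# encoder: each block wraps a BiLSTM with a highway-style bypass so that
# extra depth does not block gradient flow, and the outputs of the last
# few blocks can be ensembled. Layer sizes and names are illustrative.
import torch
import torch.nn as nn


class HighwayBypassEncoderBlock(nn.Module):
    """One encoder block: BiLSTM plus a gated (highway) bypass around it."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.rnn = nn.LSTM(hidden_size, hidden_size // 2,
                           batch_first=True, bidirectional=True)
        self.gate = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):                      # x: (batch, seq_len, hidden)
        h, _ = self.rnn(x)                     # transformed representation
        g = torch.sigmoid(self.gate(x))        # per-dimension bypass gate
        return g * h + (1.0 - g) * x           # highway mix of new and old


class DeepModelEncoder(nn.Module):
    """Stack of bypass blocks; optionally ensembles the last few outputs."""

    def __init__(self, hidden_size: int = 200, num_blocks: int = 6,
                 ensemble_last: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            HighwayBypassEncoderBlock(hidden_size) for _ in range(num_blocks))
        self.ensemble_last = ensemble_last

    def forward(self, x):
        outputs = []
        for block in self.blocks:
            x = block(x)
            outputs.append(x)
        # Simple ensemble: average the outputs of the last few blocks.
        return torch.stack(outputs[-self.ensemble_last:]).mean(dim=0)


if __name__ == "__main__":
    # Toy input standing in for the attention-flow output fed to the
    # model encoder: batch of 2, context length 30, hidden size 200.
    enc = DeepModelEncoder(hidden_size=200, num_blocks=6, ensemble_last=3)
    out = enc(torch.randn(2, 30, 200))
    print(out.shape)  # torch.Size([2, 30, 200])
```

A DenseNet-style variant would instead concatenate the outputs of all preceding blocks, and the character-embedding study concerns the layers below this one, where a character-level representation is concatenated with the word vectors before the contextual encoders.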
Related papers
- Informed deep hierarchical classification: a non-standard analysis inspired approach [0.0]
It consists of a multi-output deep neural network equipped with specific projection operators placed before each output layer.
The design of such an architecture, called lexicographic hybrid deep neural network (LH-DNN), has been possible by combining tools from different and quite distant research fields.
To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks.
arXiv Detail & Related papers (2024-09-25T14:12:50Z)
- (PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork [60.889175951038496]
Large-scale neural networks have demonstrated remarkable performance in different domains like vision and language processing.
One of the key questions of structural pruning is how to estimate the channel significance.
We propose a novel algorithmic framework, namely PASS.
It is a tailored hyper-network to take both visual prompts and network weight statistics as input, and output layer-wise channel sparsity in a recurrent manner.
arXiv Detail & Related papers (2024-07-24T16:47:45Z)
- Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers or groups of layers.
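To make the averaging idea concrete, here is a minimal sketch, assuming PyTorch and two identically structured models (not the cited paper's code), of averaging parameters tensor by tensor, optionally restricted to a chosen subset of layers:

```python
# Minimal sketch of layer-wise parameter averaging between two models with
# identical architectures; the model and layer selection are illustrative,
# not taken from the cited paper.
import torch
import torch.nn as nn


def average_parameters(model_a, model_b, layers=None):
    """Return a state dict averaging model_a and model_b.

    If `layers` is given, only parameters whose names start with one of the
    listed prefixes are averaged; all other parameters are kept from model_a.
    """
    avg = {}
    state_b = model_b.state_dict()
    for name, param_a in model_a.state_dict().items():
        if layers is None or any(name.startswith(p) for p in layers):
            avg[name] = (param_a + state_b[name]) / 2.0
        else:
            avg[name] = param_a.clone()
    return avg


if __name__ == "__main__":
    def make():
        return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    a, b, fused = make(), make(), make()
    # Average only the first linear layer ("0."); keep the rest from `a`.
    fused.load_state_dict(average_parameters(a, b, layers=["0."]))
    print(fused(torch.randn(4, 8)).shape)  # torch.Size([4, 2])
```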
arXiv Detail & Related papers (2023-07-13T09:39:10Z)
- How transfer learning impacts linguistic knowledge in deep NLP models? [22.035813865470956]
Deep NLP models learn a non-trivial amount of linguistic knowledge, captured at different layers of the model.
We investigate how fine-tuning towards downstream NLP tasks impacts the learned linguistic knowledge.
arXiv Detail & Related papers (2021-05-31T17:43:57Z)
- ProgressiveSpinalNet architecture for FC layers [0.0]
In deep learning models, the fully connected (FC) layer plays the most important role in classifying the input based on the features learned in previous layers.
This paper aims to reduce these large numbers of parameters significantly with improved performance.
The motivation is inspired by SpinalNet and other biological architectures.
arXiv Detail & Related papers (2021-03-21T11:54:50Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Improving Neural Network Robustness through Neighborhood Preserving Layers [0.751016548830037]
We demonstrate a novel neural network architecture which can incorporate such layers and can also be trained efficiently.
We empirically show that our designed network architecture is more robust against state-of-the-art gradient-descent-based attacks.
arXiv Detail & Related papers (2021-01-28T01:26:35Z)
- A Layer-Wise Information Reinforcement Approach to Improve Learning in Deep Belief Networks [0.4893345190925178]
This paper proposes the Residual Deep Belief Network, which applies information reinforcement layer by layer to improve feature extraction and knowledge retention.
Experiments conducted over three public datasets demonstrate its robustness concerning the task of binary image classification.
arXiv Detail & Related papers (2021-01-17T18:53:18Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- Go Wide, Then Narrow: Efficient Training of Deep Thin Networks [62.26044348366186]
We propose an efficient method to train a deep thin network with a theoretical guarantee.
By training with our method, ResNet50 can outperform ResNet101, and BERT Base can be comparable with BERT Large.
arXiv Detail & Related papers (2020-07-01T23:34:35Z)
- Convolutional Networks with Dense Connectivity [59.30634544498946]
We introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.
For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers.
We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks.
arXiv Detail & Related papers (2020-01-08T06:54:53Z)
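Dense connectivity is also one of the variants explored in the model encoder experiments of the main paper; as a concrete illustration, here is a minimal sketch of a dense block, assuming PyTorch, with sizes and layer choices that are illustrative rather than the reference implementation:

```python
# Minimal sketch of DenseNet-style connectivity: every layer receives the
# concatenation of all preceding feature maps. Sizes and layer choices are
# illustrative, not the reference implementation.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate  # later layers also see this layer's output

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # use all previous maps
            features.append(out)
        return torch.cat(features, dim=1)


if __name__ == "__main__":
    block = DenseBlock(in_channels=16, growth_rate=12, num_layers=4)
    y = block(torch.randn(1, 16, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32]), i.e. 16 + 4 * 12 channels
```

In the BiDAF setting, the analogous construction would concatenate the outputs of preceding encoder blocks rather than convolutional feature maps.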
This list is automatically generated from the titles and abstracts of the papers on this site.