The NLP Cookbook: Modern Recipes for Transformer based Deep Learning Architectures
- URL: http://arxiv.org/abs/2104.10640v3
- Date: Sat, 24 Apr 2021 17:31:46 GMT
- Title: The NLP Cookbook: Modern Recipes for Transformer based Deep Learning Architectures
- Authors: Sushant Singh and Ausif Mahmood
- Abstract summary: Natural Language Processing models have achieved phenomenal success in linguistic and semantic tasks.
Recent NLP architectures have utilized concepts of transfer learning, pruning, quantization, and knowledge distillation to achieve moderate model sizes.
Knowledge Retrievers have been built to extract explicit data documents from large corpora with greater efficiency and accuracy.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, Natural Language Processing (NLP) models have
achieved phenomenal success in linguistic and semantic tasks like text
classification, machine translation, cognitive dialogue systems, information
retrieval via Natural Language Understanding (NLU), and Natural Language
Generation (NLG). This feat is primarily attributed to the seminal Transformer
architecture, which led to designs such as BERT, GPT (I, II, III), etc.
Although these large models have achieved unprecedented performance, they come
at high computational cost. Consequently, some recent NLP architectures have
utilized concepts of transfer learning, pruning, quantization, and knowledge
distillation to achieve moderate model sizes while keeping performance nearly
on par with their predecessors. Additionally, to mitigate the data-size
challenge raised by language models from a knowledge-extraction perspective,
Knowledge Retrievers have been built to extract explicit data documents from
large corpora with greater efficiency and accuracy. Recent research has also
focused on superior inference by providing efficient attention over longer
input sequences. In this paper, we summarize and examine the current
state-of-the-art (SOTA) NLP models that have been employed for numerous NLP
tasks for optimal performance and efficiency. We provide a detailed
understanding of the functioning of the different architectures, a taxonomy of
NLP designs, comparative evaluations, and future directions in NLP.
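
As a concrete illustration of the knowledge distillation mentioned in the
abstract, here is a minimal PyTorch sketch of Hinton-style soft-label
distillation. The temperature, mixing weight, and toy tensors are illustrative
assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a temperature-scaled soft-target KL term with ordinary cross-entropy."""
    # Soft targets: the T**2 factor keeps gradient magnitudes comparable
    # across temperatures (as in Hinton et al.'s formulation).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a batch of 4 examples over 10 classes.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```

In practice the student is trained on this loss while the teacher's weights
stay frozen; pruning and quantization are typically applied afterwards as
orthogonal compression steps.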
Related papers
- Evolving Knowledge Distillation with Large Language Models and Active
  Learning (2024-03-11)
  Large language models (LLMs) have demonstrated remarkable capabilities across
  various NLP tasks. Previous research has attempted to distill the knowledge
  of LLMs into smaller models by generating annotated data. We propose EvoKD:
  Evolving Knowledge Distillation, which leverages the concept of active
  learning to interactively enhance the process of data generation using large
  language models.
- Natural Language Processing for Dialects of a Language: A Survey (2024-01-11)
  State-of-the-art natural language processing (NLP) models are trained on
  massive training corpora and report superlative performance on evaluation
  datasets. This survey delves into an important attribute of these datasets:
  the dialect of a language. Motivated by the performance degradation of NLP
  models on dialectal datasets and its implications for the equity of language
  technologies, we survey past research in NLP for dialects in terms of
  datasets and approaches.
- Surveying the Landscape of Text Summarization with Deep Learning: A
  Comprehensive Review (2023-10-13)
  Deep learning has revolutionized natural language processing (NLP) by
  enabling the development of models that can learn complex representations of
  language data. Deep learning models for NLP typically train deep neural
  networks on large amounts of data, allowing them to learn the patterns and
  relationships in language. This review covers the application of deep neural
  networks to text summarization tasks.
- A Cohesive Distillation Architecture for Neural Language Models (2023-01-12)
  A recent trend in Natural Language Processing is the exponential growth in
  Language Model (LM) size. This study investigates Knowledge Distillation (KD)
  methods that provide efficient alternatives to large-scale models.
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph
  Construction (2022-10-19)
  We propose a retrieval-augmented approach, which retrieves a schema-aware
  Reference As Prompt (RAP) for data-efficient knowledge graph construction.
  RAP can dynamically leverage schema and knowledge inherited from
  human-annotated and weakly-supervised data as a prompt for each sample (a toy
  retrieval-augmented prompting sketch follows this list).
- Visualizing the Relationship Between Encoded Linguistic Information and Task
  Performance (2022-03-29)
  We study the dynamic relationship between encoded linguistic information and
  task performance from the viewpoint of Pareto Optimality. We conduct
  experiments on two popular NLP tasks, machine translation and language
  modeling, and investigate the relationship between several kinds of
  linguistic information and task performance. Our empirical findings suggest
  that some syntactic information is helpful for NLP tasks, whereas encoding
  more syntactic information does not necessarily lead to better performance.
- A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models
  (2022-02-17)
  We summarize the current progress of pre-trained language model-based
  knowledge-enhanced models (PLMKEs), present their challenges based on a
  discussion of three key elements, and attempt to provide NLP practitioners
  with potential directions for further research.
- Robust Natural Language Processing: Recent Advances, Challenges, and Future
  Directions (2022-01-03)
  We present a structured overview of NLP robustness research by summarizing
  the literature in a systematic way across various dimensions, then take a
  deep dive into robustness across techniques, metrics, embeddings, and
  benchmarks.
- Efficient Nearest Neighbor Language Models (2021-09-09)
  Non-parametric neural language models (NLMs) learn predictive distributions
  of text using an external datastore. We show how to achieve up to a 6x
  inference speed-up while retaining comparable performance (a minimal
  interpolation sketch follows this list).
- Unsupervised Pre-training with Structured Knowledge for Improving Natural
  Language Inference (2021-09-08)
  We propose models that leverage structured knowledge in different components
  of pre-trained models. Our results show that the proposed models perform
  better than previous BERT-based state-of-the-art models.
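
The Schema-aware Reference as Prompt entry above describes a
retrieval-augmented prompting scheme. The sketch below is a hypothetical,
much-simplified analogue: a TF-IDF retriever picks the most similar labeled
reference and prepends it to the input as a prompt. The retriever, the
reference pool, and the prompt format are all illustrative assumptions, not
the paper's actual method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# (text, annotation) pairs standing in for human-annotated data.
references = [
    ("Barack Obama was born in Hawaii.", "(Barack Obama, born_in, Hawaii)"),
    ("Apple acquired Beats in 2014.", "(Apple, acquired, Beats)"),
]

vectorizer = TfidfVectorizer().fit([text for text, _ in references])
ref_vecs = vectorizer.transform([text for text, _ in references])

def build_prompt(sample: str) -> str:
    """Prepend the nearest labeled reference to the sample as a prompt."""
    sims = cosine_similarity(vectorizer.transform([sample]), ref_vecs)[0]
    text, annotation = references[sims.argmax()]
    return f"Reference: {text} -> {annotation}\nInput: {sample} ->"

print(build_prompt("Google bought YouTube in 2006."))
```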
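
The Efficient Nearest Neighbor Language Models entry refers to kNN-LM-style
non-parametric language modeling, where the base LM's next-token distribution
is interpolated with one induced by nearest neighbors retrieved from an
external datastore. Below is a toy NumPy sketch of that interpolation; the
random datastore, distance-based weighting, and mixing weight are stand-ins,
not the paper's efficiency techniques.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, STORE = 50, 16, 1000

# Datastore of (context-vector key, observed next-token value) pairs.
keys = rng.normal(size=(STORE, DIM))
values = rng.integers(0, VOCAB, size=STORE)

def knn_distribution(query, k=8):
    """Turn the k nearest datastore entries into a next-token distribution."""
    d2 = ((keys - query) ** 2).sum(axis=1)        # squared L2 distances
    nn = np.argsort(d2)[:k]
    w = np.exp(-(d2[nn] - d2[nn].min()))          # softmax over negative distance
    w /= w.sum()
    p = np.zeros(VOCAB)
    np.add.at(p, values[nn], w)                   # aggregate weights per token
    return p

def interpolate(p_lm, query, lam=0.25):
    """kNN-LM mixture: p(w) = lam * p_kNN(w) + (1 - lam) * p_LM(w)."""
    return lam * knn_distribution(query) + (1.0 - lam) * p_lm

# Toy usage with a uniform base-LM distribution.
p = interpolate(np.full(VOCAB, 1.0 / VOCAB), rng.normal(size=DIM))
assert abs(p.sum() - 1.0) < 1e-9
```

The actual paper is about making this retrieval step fast enough for
practical inference, which the toy sketch does not attempt.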