Anatomy of Neural Language Models
- URL: http://arxiv.org/abs/2401.03797v2
- Date: Tue, 27 Feb 2024 16:35:56 GMT
- Title: Anatomy of Neural Language Models
- Authors: Majd Saleh and Stéphane Paquelet
- Abstract summary: Transformer-based Language Models (LMs) have led to new state-of-the-art results in a wide spectrum of applications.
Transformers pretrained on language-modeling-like tasks have been widely adopted in computer vision and time series applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The fields of generative AI and transfer learning have experienced remarkable
advancements in recent years especially in the domain of Natural Language
Processing (NLP). Transformers have been at the heart of these advancements
where the cutting-edge transformer-based Language Models (LMs) have led to new
state-of-the-art results in a wide spectrum of applications. While the number
of research works involving neural LMs is increasing exponentially, the vast
majority of them are high-level and far from self-contained. Consequently, a deep
understanding of the literature in this area is difficult to attain, especially in the
absence of a unified mathematical framework explaining the main types of neural
LMs. We address the aforementioned problem in this tutorial where the objective
is to explain neural LMs in a detailed, simplified and unambiguous mathematical
framework accompanied by clear graphical illustrations. Concrete examples on
widely used models like BERT and GPT2 are explored. Finally, since transformers
pretrained on language-modeling-like tasks have been widely adopted in computer
vision and time series applications, we briefly explore some examples of such
solutions in order to enable readers to understand how transformers work in the
aforementioned domains and compare this use with the original one in NLP.
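As a hedged illustration of the "language-modeling-like" pretraining tasks mentioned above (standard notation, not taken from the paper itself), the objectives behind the two cited examples are typically written as a causal objective (GPT2 style) and a masked objective (BERT style):

```latex
% Causal language modeling (GPT2 style): predict each token from its left context.
\mathcal{L}_{\mathrm{CLM}}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}\!\left(x_t \mid x_{<t}\right)

% Masked language modeling (BERT style): predict the set M of masked positions
% from the corrupted sequence \tilde{x}.
\mathcal{L}_{\mathrm{MLM}}(\theta) = -\sum_{t \in M} \log p_{\theta}\!\left(x_t \mid \tilde{x}\right)
```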
Related papers
- Hidden Holes: topological aspects of language models [1.1172147007388977]
We study the evolution of topological structure in GPT-based large language models across depth and time during training.
We show that the latter exhibit more topological complexity, with a distinct pattern of changes common to all natural languages but absent from synthetically generated data.
arXiv Detail & Related papers (2024-06-09T14:25:09Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- A Survey on Large Language Models from Concept to Implementation [4.219910716090213]
Recent advancements in Large Language Models (LLMs) have broadened the scope of natural language processing (NLP) applications.
This paper investigates the multifaceted applications of these models, with an emphasis on the GPT series.
This exploration focuses on the transformative impact of artificial intelligence (AI) driven tools in revolutionizing traditional tasks like coding and problem-solving.
arXiv Detail & Related papers (2024-03-27T19:35:41Z)
- Linear Transformers with Learnable Kernel Functions are Better In-Context Models [3.3865605512957453]
We present a singular, elegant alteration to the Based kernel that amplifies its In-Context Learning abilities, evaluated with the Multi-Query Associative Recall task (a generic linear-attention sketch follows this entry).
arXiv Detail & Related papers (2024-02-16T12:44:15Z)
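The entry above concerns kernelized linear attention. The sketch below is a generic causal linear-attention routine with a simple positive feature map, shown only to illustrate the model family it belongs to; it does not reproduce the paper's Based kernel or its proposed learnable modification, and every name and dimension here is illustrative.

```python
# Generic causal linear attention: softmax(QK^T)V is replaced by a kernel
# feature map phi(.), so the output can be computed with running sums.
import numpy as np

def phi(x):
    # simple positive feature map (ELU + 1), a common generic choice; the Based
    # line of work instead uses an approximation of exp(q.k), not reproduced here
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(Q, K, V):
    d_k, d_v = K.shape[-1], V.shape[-1]
    S = np.zeros((d_k, d_v))            # running sum of phi(k_t) v_t^T
    z = np.zeros(d_k)                   # running sum of phi(k_t) for normalization
    out = np.empty_like(V)
    for t in range(Q.shape[0]):         # one pass over the sequence
        k, v = phi(K[t]), V[t]
        S += np.outer(k, v)
        z += k
        q = phi(Q[t])
        out[t] = (q @ S) / (q @ z + 1e-6)
    return out

rng = np.random.default_rng(0)
T, d = 6, 8
print(causal_linear_attention(rng.normal(size=(T, d)),
                              rng.normal(size=(T, d)),
                              rng.normal(size=(T, d))).shape)   # (6, 8)
```

The design point is that the running state (S, z) has fixed size, so time and memory grow linearly with sequence length rather than quadratically as in softmax attention.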
- A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
The Transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data.
Transformer models excel at handling long-range dependencies between input sequence elements and enable parallel processing (a minimal self-attention sketch follows this entry).
Our survey encompasses the identification of the top five application domains for transformer-based models.
arXiv Detail & Related papers (2023-06-11T23:13:51Z)
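As a minimal, hedged sketch of the scaled dot-product self-attention mechanism summarized in the entry above (single head, no output projection, residuals, or masking; all names and sizes are illustrative):

```python
# Single-head scaled dot-product self-attention on toy data.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token representations; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise compatibility between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax over key positions
    return weights @ V                                 # each position mixes information from all others

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 16, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)             # (5, 8)
```

Because every position attends to every other position through one matrix product, the computation parallelizes across the sequence, which is the property the survey summary highlights.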
- A Survey of Large Language Models [81.06947636926638]
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To distinguish models by parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z)
- Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation [50.54059385277964]
We present a novel data augmentation paradigm termed Continuous Semantic Augmentation (CsaNMT).
CsaNMT augments each training instance with an adjacency region that could cover adequate variants of literal expression under the same meaning.
arXiv Detail & Related papers (2022-04-14T08:16:28Z)
- Reprogramming Language Models for Molecular Representation Learning [65.00999660425731]
We propose Representation Reprogramming via Dictionary Learning (R2DL) for adversarially reprogramming pretrained language models for molecular learning tasks.
The adversarial program learns a linear transformation between a dense source model input space (language data) and a sparse target model input space (e.g., chemical and biological molecule data) using a k-SVD solver.
R2DL matches the baseline established by state-of-the-art toxicity prediction models trained on domain-specific data and outperforms it in a limited training-data setting (a toy sparse-coding sketch follows this entry).
arXiv Detail & Related papers (2020-12-07T05:50:27Z)
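The R2DL entry above describes learning a sparse linear map from a target-domain token space into a pretrained language model's embedding space. The toy sketch below only illustrates that core idea with a greedy sparse-coding routine on random data; it is not the authors' k-SVD-based implementation, and every name and dimension is made up for illustration.

```python
# Toy illustration (not the authors' R2DL code): express each target-domain token
# embedding as a k-sparse linear combination of a frozen source-LM embedding
# dictionary, using a simple greedy matching-pursuit routine.
import numpy as np

rng = np.random.default_rng(0)
d, n_src, n_tgt, k = 64, 1000, 25, 8           # embed dim, source vocab, target vocab, sparsity

V_src = rng.normal(size=(n_src, d))            # frozen source-model token embeddings (dictionary)
V_tgt = rng.normal(size=(n_tgt, d))            # target-domain token embeddings to reprogram

def sparse_code(y, D, k):
    """Greedily select k dictionary rows, then least-squares fit the coefficients."""
    residual, support = y.copy(), []
    for _ in range(k):
        scores = D @ residual                  # correlation of each atom with the residual
        scores[support] = 0.0                  # do not reselect atoms already in the support
        support.append(int(np.argmax(np.abs(scores))))
        coeffs, *_ = np.linalg.lstsq(D[support].T, y, rcond=None)
        residual = y - D[support].T @ coeffs
    theta = np.zeros(len(D))
    theta[support] = coeffs
    return theta                               # one k-sparse row of the linear map

Theta = np.stack([sparse_code(v, V_src, k) for v in V_tgt])   # (n_tgt, n_src), k-sparse rows
V_tgt_in_src_space = Theta @ V_src                            # target tokens mapped into source space
print(V_tgt_in_src_space.shape)                               # (25, 64)
```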
- Learning Source Phrase Representations for Neural Machine Translation [65.94387047871648]
We propose an attentive phrase representation generation mechanism that generates phrase representations from the corresponding token representations (a generic attention-pooling sketch follows this entry).
In our experiments, we obtain significant improvements on the WMT 14 English-German and English-French tasks on top of the strong Transformer baseline.
arXiv Detail & Related papers (2020-06-25T13:43:11Z)
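The last entry describes generating phrase representations from token representations. Below is a hedged, generic attention-pooling sketch of that idea (a learned query vector scores the phrase's tokens), not the authors' exact mechanism; all names and sizes are chosen for illustration.

```python
# Generic attention pooling: a learned query scores each token in a phrase and
# the phrase representation is the weighted sum of its token representations.
import numpy as np

def attentive_phrase_representation(token_reprs, query):
    """token_reprs: (phrase_len, d) token vectors; query: (d,) learned summarization vector."""
    scores = token_reprs @ query / np.sqrt(token_reprs.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # attention weights over the phrase's tokens
    return weights @ token_reprs               # (d,) phrase representation

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 32))              # e.g. a 4-token source phrase
query = rng.normal(size=32)
print(attentive_phrase_representation(tokens, query).shape)   # (32,)
```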
This list is automatically generated from the titles and abstracts of the papers on this site.