BERT: A Review of Applications in Natural Language Processing and
Understanding
- URL: http://arxiv.org/abs/2103.11943v1
- Date: Mon, 22 Mar 2021 15:34:39 GMT
- Title: BERT: A Review of Applications in Natural Language Processing and
Understanding
- Authors: M. V. Koroteev
- Abstract summary: We describe the application of one of the most popular deep learning-based language models, BERT.
In preparing this review, we systematized data from several dozen original scientific articles published over the past few years that attracted the most attention in the scientific community.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this review, we describe the application of one of the most
popular deep learning-based language models, BERT. The paper describes the
model's mechanism of operation, the main areas of its application to text
analytics tasks, comparisons with similar models on each task, and some
proprietary derivative models. In preparing this review, we systematized data
from several dozen original scientific articles published over the past few
years that attracted the most attention in the scientific community. This
survey will be useful to students and researchers who want to become
acquainted with the latest advances in natural language text analysis.
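The mechanism of operation the review covers centers on BERT's masked-language-modeling pretraining objective: a fraction of input tokens is hidden and the model learns to predict them from bidirectional context. A minimal sketch of the masking step in plain Python (a simplification: real BERT selects about 15% of WordPiece tokens and further splits them 80/10/10 between [MASK], a random token, and the original token; the function name and whitespace tokenizer here are illustrative only):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction of tokens with [MASK], returning the
    corrupted sequence and the positions the model must predict."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok           # gold label at this position
            masked.append(mask_token)  # the model sees [MASK] here
        else:
            masked.append(tok)
    return masked, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(tokens, mask_prob=0.3)
```

The model is then trained to recover each entry of `targets` from the corrupted sequence, which is what forces it to build bidirectional contextual representations.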
Related papers
- A Survey of Small Language Models [104.80308007044634]
Small Language Models (SLMs) have become increasingly important due to their efficiency and performance to perform various language tasks with minimal computational resources.
We present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression techniques.
arXiv Detail & Related papers (2024-10-25T23:52:28Z)
- GreekT5: A Series of Greek Sequence-to-Sequence Models for News Summarization [0.0]
This paper proposes a series of novel T5 models for Greek news articles.
The proposed models were thoroughly evaluated on the same dataset against GreekBART.
Our evaluation results reveal that most of the proposed models significantly outperform GreekBART on various evaluation metrics.
arXiv Detail & Related papers (2023-11-13T21:33:12Z)
- Algorithmic Ghost in the Research Shell: Large Language Models and Academic Knowledge Creation in Management Research [0.0]
The paper looks at the role of large language models in academic knowledge creation.
This includes writing, editing, reviewing, dataset creation and curation.
arXiv Detail & Related papers (2023-03-10T14:25:29Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts, but editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- A Framework for Neural Topic Modeling of Text Corpora [6.340447411058068]
We introduce FAME, an open-source framework enabling an efficient mechanism of extracting and incorporating textual features.
To demonstrate the effectiveness of this library, we conducted experiments on the well-known News-Group dataset.
arXiv Detail & Related papers (2021-08-19T23:32:38Z)
- Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques [15.906959137350247]
This survey provides an overview of the evolution of visually grounded models of spoken language over the last 20 years.
We discuss the central research questions addressed, the timeline of developments, and the datasets which enabled much of this work.
arXiv Detail & Related papers (2021-04-27T14:32:22Z)
- Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches relaying task impacts across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the imperative techniques with respect to learning paradigms, pretraining, modeling approaches, decoding and the key challenges outstanding in the field in each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z)
- A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
- Deep Learning Based Text Classification: A Comprehensive Review [75.8403533775179]
We provide a review of more than 150 deep learning based models for text classification developed in recent years.
We also provide a summary of more than 40 popular datasets widely used for text classification.
arXiv Detail & Related papers (2020-04-06T02:00:30Z)
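The two text classification surveys above trace the shift from "shallow" statistical baselines over hand-crafted features to learned deep representations. As a hedged illustration of the shallow end of that taxonomy (a generic multinomial Naive Bayes over bag-of-words counts; the class name and toy data are invented for this sketch, not taken from either survey):

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial Naive Bayes over bag-of-words features, the kind of
    shallow baseline the surveys contrast with deep models."""
    def __init__(self, alpha=1.0):
        self.alpha = alpha                       # Laplace smoothing
        self.word_counts = defaultdict(Counter)  # class -> word -> count
        self.class_counts = Counter()
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, y in zip(docs, labels):
            self.class_counts[y] += 1
            for w in doc.lower().split():
                self.word_counts[y][w] += 1
                self.vocab.add(w)

    def predict(self, doc):
        words = doc.lower().split()
        total_docs = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for y in self.class_counts:
            # log prior + sum of smoothed log likelihoods
            lp = math.log(self.class_counts[y] / total_docs)
            denom = sum(self.word_counts[y].values()) + self.alpha * len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[y][w] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

clf = NaiveBayesText()
clf.fit(["great fun movie", "loved this film", "boring plot", "terrible acting"],
        ["pos", "pos", "neg", "neg"])
```

Deep approaches replace the fixed bag-of-words counts with learned representations, which is exactly the progression the 1961-2021 taxonomy above organizes.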
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.