Evolution of Natural Language Processing Technology: Not Just Language
Processing Towards General Purpose AI
- URL: http://arxiv.org/abs/2310.06228v1
- Date: Tue, 10 Oct 2023 00:41:38 GMT
- Title: Evolution of Natural Language Processing Technology: Not Just Language
Processing Towards General Purpose AI
- Authors: Masahiro Yamamoto
- Abstract summary: This report provides a technological explanation of how cutting-edge NLP has made it possible to realize the "practice makes perfect" principle.
Achievements exceeding the initial predictions have been reported from the results of learning vast amounts of textual data using deep learning.
This is an apt example of a learner embodying the concept of "practice makes perfect" through vast amounts of textual data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since the invention of computers, communication through natural
language (actual human language) has been a dream technology. However, natural
language is extremely difficult to formulate mathematically, which makes it hard
to realize as an algorithm. Despite numerous technological developments, no
result had so far permitted truly free use of language by machines. In human
language learning, for instance when acquiring one's mother tongue or a foreign
language, the process largely follows the adage "practice makes perfect," even
though the learning method matters up to a point. Deep learning has played a
central role in contemporary AI technology in recent years. When applied to
natural language processing (NLP), it produced unprecedented results.
Achievements exceeding the initial predictions have been reported from learning
vast amounts of textual data with deep learning: for instance, the four
arithmetic operations could be performed without explicit training, complex
images could be explained, and images could be generated from corresponding
explanatory texts. This is an apt example of a learner embodying the concept of
"practice makes perfect" through vast amounts of textual data. This report
provides a technological explanation of how cutting-edge NLP has made it
possible to realize the "practice makes perfect" principle. Additionally,
examples of how this can be applied to business are provided. We reported in
June 2022, in Japanese, on the NLP movement from late 2021 to early 2022. We
summarize this here as a memorandum, since it represents the initial movement
leading to the current large language models (LLMs).
Related papers
- Performance Prediction of Data-Driven Knowledge summarization of High
Entropy Alloys (HEAs) literature implementing Natural Language Processing
algorithms [0.0]
The goal of natural language processing (NLP) is to get machine intelligence to process words the way a human brain does.
Five NLP algorithms, namely Gensim, Sumy, Luhn, Latent Semantic Analysis (LSA), and Kullback-Leibler (KL), are implemented.
The Luhn algorithm achieved the highest accuracy score for the knowledge-summarization task among the algorithms used.
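As a rough illustration of the Luhn-style approach named above, the following is a minimal sketch of frequency-based extractive summarization: score each sentence by how often its significant (non-stopword) words occur in the document, then keep the top-scoring sentences. Function names, the stopword list, and the example sentences are illustrative assumptions, not taken from the paper.

```python
# Luhn-style extractive summarization sketch (illustrative, not the
# paper's implementation): sentences containing frequent significant
# words are assumed to carry the document's key information.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "is", "are", "to", "and", "in"}

def luhn_summarize(sentences, top_n=1):
    # Count significant-word frequencies over the whole document.
    words = [w.lower().strip(".,") for s in sentences for w in s.split()]
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence):
        tokens = [w.lower().strip(".,") for w in sentence.split()]
        return sum(freq[w] for w in tokens if w in freq)

    # Keep the top_n sentences, preserving their original order.
    ranked = sorted(sentences, key=score, reverse=True)[:top_n]
    return [s for s in sentences if s in ranked]

docs = [
    "High entropy alloys combine five or more elements.",
    "The weather was pleasant that day.",
    "Alloys with many elements show unusual mechanical properties.",
]
print(luhn_summarize(docs, top_n=1))
```

The off-topic "weather" sentence scores lowest because its words occur nowhere else in the document, so it is never selected for a one-sentence summary.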
arXiv Detail & Related papers (2023-11-06T16:22:32Z) - Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
arXiv Detail & Related papers (2023-07-31T17:57:49Z) - Why can neural language models solve next-word prediction? A
mathematical perspective [53.807657273043446]
We study a class of formal languages that can be used to model real-world examples of English sentences.
Our proof highlights the different roles of the embedding layer and the fully connected component within the neural language model.
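The next-word-prediction task this paper analyzes can be illustrated with a toy count-based model: estimate, for each word, the most frequent word that follows it in a corpus. This bigram sketch is a deliberately simple stand-in for the neural language models the paper studies; the corpus and names are illustrative assumptions.

```python
# Toy next-word prediction via bigram counts (a simple stand-in for a
# neural language model, for illustration only).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent successor of `word` in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" more often than "mat" or "fish"
```

A neural model replaces these raw counts with learned embeddings and a fully connected layer producing a distribution over the vocabulary, which is precisely the decomposition the paper's proof examines.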
arXiv Detail & Related papers (2023-06-20T10:41:23Z) - AI2: The next leap toward native language based and explainable machine
learning framework [1.827510863075184]
The proposed framework, named AI2, uses a natural language interface that allows a non-specialist to benefit from machine learning algorithms.
The primary contribution of the AI2 framework is that it lets a user invoke machine learning algorithms in English, making the interface easier to use.
Another contribution is a preprocessing module that helps to describe and to load data properly.
arXiv Detail & Related papers (2023-01-09T14:48:35Z) - Robotic Skill Acquisition via Instruction Augmentation with
Vision-Language Models [70.82705830137708]
We introduce Data-driven Instruction Augmentation for Language-conditioned control (DIAL).
We utilize semi-language labels leveraging the semantic understanding of CLIP to propagate knowledge onto large datasets of unlabelled demonstration data.
DIAL enables imitation learning policies to acquire new capabilities and generalize to 60 novel instructions unseen in the original dataset.
arXiv Detail & Related papers (2022-11-21T18:56:00Z) - What is Wrong with Language Models that Can Not Tell a Story? [20.737171876839238]
This paper argues that a deeper understanding of narrative, and the successful generation of longer, subjectively interesting texts, is a vital bottleneck hindering progress in modern Natural Language Processing (NLP).
We demonstrate that there are no adequate datasets, evaluation methods, and even operational concepts that could be used to start working on narrative processing.
arXiv Detail & Related papers (2022-11-09T17:24:33Z) - Do As I Can, Not As I Say: Grounding Language in Robotic Affordances [119.29555551279155]
Large language models can encode a wealth of semantic knowledge about the world.
Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language.
We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions.
arXiv Detail & Related papers (2022-04-04T17:57:11Z) - Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
arXiv Detail & Related papers (2022-02-03T18:55:52Z) - Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z) - Federated Learning Meets Natural Language Processing: A Survey [12.224792145700562]
Federated Learning aims to learn machine learning models from multiple decentralized edge devices (e.g. mobiles) or servers without sacrificing local data privacy.
Recent Natural Language Processing techniques rely on deep learning and large pre-trained language models.
arXiv Detail & Related papers (2021-07-27T05:07:48Z) - Language Conditioned Imitation Learning over Unstructured Data [9.69886122332044]
We present a method for incorporating free-form natural language conditioning into imitation learning.
Our approach learns perception from pixels, natural language understanding, and multitask continuous control end-to-end as a single neural network.
We show this dramatically improves language conditioned performance, while reducing the cost of language annotation to less than 1% of total data.
arXiv Detail & Related papers (2020-05-15T17:08:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.