Large Language Models Meet NLP: A Survey
- URL: http://arxiv.org/abs/2405.12819v1
- Date: Tue, 21 May 2024 14:24:01 GMT
- Title: Large Language Models Meet NLP: A Survey
- Authors: Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu, Yongheng Zhang, Yinghui Li, Min Li, Wanxiang Che, Philip S. Yu
- Abstract summary: Large language models (LLMs) have shown impressive capabilities in Natural Language Processing (NLP) tasks, yet a systematic investigation of their potential in the field is still lacking.
This study aims to address this gap by exploring several key questions.
- Score: 79.74450825763851
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While large language models (LLMs) like ChatGPT have shown impressive capabilities in Natural Language Processing (NLP) tasks, a systematic investigation of their potential in this field remains largely unexplored. This study aims to address this gap by exploring the following questions: (1) How are LLMs currently applied to NLP tasks in the literature? (2) Have traditional NLP tasks already been solved with LLMs? (3) What is the future of LLMs for NLP? To answer these questions, we take the first step to provide a comprehensive overview of LLMs in NLP. Specifically, we first introduce a unified taxonomy including (1) parameter-frozen application and (2) parameter-tuning application to offer a unified perspective for understanding the current progress of LLMs in NLP. Furthermore, we summarize the new frontiers and the associated challenges, aiming to inspire further groundbreaking advancements. We hope this work offers valuable insights into the potential and limitations of LLMs in NLP, while also serving as a practical guide for building effective LLMs in NLP.
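To make the two branches of the taxonomy concrete, here is a minimal sketch contrasting them using the Hugging Face transformers API; the model name, prompt, and training example are placeholders chosen for illustration, not prescriptions from the survey.

```python
# Minimal sketch of the survey's two-way taxonomy (illustrative only;
# the model name, prompt, and training example are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM follows the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# (1) Parameter-frozen application: weights stay fixed; the task is
# specified entirely through the prompt (zero-/few-shot prompting).
prompt = "Review: I loved every minute of it.\nSentiment:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=3)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# (2) Parameter-tuning application: the same weights are updated on task
# data (full fine-tuning or parameter-efficient variants such as LoRA);
# shown here as a single standard language-modeling training step.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tokenizer("Review: Dull and overlong.\nSentiment: negative", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```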
Related papers
- A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks [0.0]
Large language models (LLMs) have shown remarkable performance on many different Natural Language Processing (NLP) tasks.
Prompt engineering plays a key role in augmenting the existing abilities of LLMs to achieve significant performance gains.
This paper summarizes different prompting techniques and groups them by the NLP tasks they have been used for.
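As a rough illustration of organizing prompting techniques by task, the sketch below pairs a few common NLP tasks with simple prompt templates; the task names and templates are invented for illustration and are not taken from the survey.

```python
# Hypothetical task-to-template mapping (illustrative; not from the paper).
PROMPT_TEMPLATES = {
    "sentiment": "Classify the sentiment of the text as positive or negative.\nText: {text}\nSentiment:",
    "ner": "List all named entities in the text with their types.\nText: {text}\nEntities:",
    "summarization": "Summarize the following text in one sentence.\nText: {text}\nSummary:",
}

def build_prompt(task: str, text: str) -> str:
    """Fill the template registered for a given NLP task."""
    return PROMPT_TEMPLATES[task].format(text=text)

print(build_prompt("sentiment", "The plot was thin but the acting was superb."))
```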
arXiv Detail & Related papers (2024-07-17T20:23:19Z)
- Beyond Generative Artificial Intelligence: Roadmap for Natural Language Generation [0.0]
This paper focuses on the field of Natural Language Processing (NLP) and its subfield Natural Language Generation (NLG).
Within the growing LLM family are the popular GPT-4, Bard, and, more specifically, tools such as ChatGPT.
This scenario poses new questions about the next steps for NLG and how the field can adapt and evolve to deal with new challenges.
arXiv Detail & Related papers (2024-07-15T09:07:07Z)
- Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning [53.6472920229013]
Large Language Models (LLMs) have demonstrated impressive capability in many natural language tasks.
LLMs are prone to producing errors, hallucinations, and inconsistent statements when performing multi-step reasoning.
We introduce Q*, a framework for guiding the decoding process of LLMs with deliberative planning.
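The abstract does not detail the algorithm, so the sketch below only conveys the general flavor of deliberative planning over decoding: a best-first search over partial reasoning traces guided by a heuristic value. The toy proposer, value function, and stopping rule are invented stand-ins, not the paper's Q* method.

```python
# Best-first search over partial reasoning traces (toy illustration of
# deliberative planning over decoding; NOT the paper's Q* algorithm).
import heapq

def propose_steps(state: str) -> list[str]:
    # Toy stand-in for an LLM proposing candidate next reasoning steps.
    n = state.count("\n")
    return [f"step {n}a", f"step {n}b"]

def value(state: str) -> float:
    # Toy stand-in for a learned heuristic scoring partial traces.
    return float(state.count("\n"))

def is_terminal(state: str) -> bool:
    return state.count("\n") >= 3  # toy stopping rule

def deliberative_decode(question: str, max_expansions: int = 20) -> str:
    # Max-heap via negated scores: expand the most promising trace first.
    frontier = [(-value(question), question)]
    while frontier and max_expansions > 0:
        score, state = heapq.heappop(frontier)
        if is_terminal(state):
            return state
        for step in propose_steps(state):
            child = state + "\n" + step
            heapq.heappush(frontier, (-value(child), child))
        max_expansions -= 1
    return "no terminal trace found"

print(deliberative_decode("Q: 2 + 2 * 3 = ?"))
```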
arXiv Detail & Related papers (2024-06-20T13:08:09Z)
- Using Large Language Models for Natural Language Processing Tasks in Requirements Engineering: A Systematic Guideline [2.6644624823848426]
Large Language Models (LLMs) are a cornerstone for automating Requirements Engineering (RE) tasks.
This chapter aims to furnish readers with essential knowledge about LLMs in its initial segment.
It provides a comprehensive guideline tailored for students, researchers, and practitioners on harnessing LLMs to address their specific objectives.
arXiv Detail & Related papers (2024-02-21T14:00:52Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emergent in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
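The paper's framework is not reproduced here; the sketch below only shows the basic in-context learning setup it builds on, where labeled demonstrations concatenated into the prompt define the task for a frozen model. The demonstrations are invented.

```python
# Basic in-context learning prompt: the model's weights are never updated;
# labeled demonstrations in the prompt define the task. Examples are invented.
demonstrations = [
    ("The battery died within an hour.", "negative"),
    ("Setup took thirty seconds and it just works.", "positive"),
]

def few_shot_prompt(query: str) -> str:
    shots = "\n".join(f"Review: {t}\nSentiment: {l}" for t, l in demonstrations)
    return f"{shots}\nReview: {query}\nSentiment:"

print(few_shot_prompt("Great screen, terrible speakers."))
```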
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- A Survey on Prompting Techniques in LLMs [0.0]
Autoregressive Large Language Models have transformed the landscape of Natural Language Processing.
We present a taxonomy of existing literature on prompting techniques and provide a concise survey based on this taxonomy.
We identify some open problems in the realm of prompting in autoregressive LLMs that could serve as directions for future research.
arXiv Detail & Related papers (2023-11-28T17:56:34Z)
- NLPBench: Evaluating Large Language Models on Solving NLP Problems [41.01588131136101]
Large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP).
We present a unique benchmarking dataset, NLPBench, comprising 378 college-level NLP questions spanning various NLP topics sourced from Yale University's prior final exams.
Our evaluation, centered on LLMs such as GPT-3.5/4, PaLM-2, and LLaMA-2, incorporates advanced prompting strategies like chain-of-thought (CoT) and tree-of-thought (ToT).
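As a reference point, chain-of-thought prompting differs from direct prompting only in eliciting intermediate reasoning before the answer; a minimal sketch follows (the question is an invented example, not an NLPBench item).

```python
# Direct prompting vs. chain-of-thought (CoT) prompting; the question is an
# invented example, not an item from NLPBench.
question = "A language model assigns each of 4 tokens probability 0.25. What is its perplexity?"

# Direct: ask for the answer immediately.
direct_prompt = f"Q: {question}\nA:"

# CoT: elicit intermediate reasoning steps before the final answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# Tree-of-thought (ToT) generalizes this further by branching over several
# candidate reasoning steps and keeping only the most promising ones.
print(cot_prompt)
```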
arXiv Detail & Related papers (2023-09-27T13:02:06Z)
- Aligning Large Language Models with Human: A Survey [53.6014921995006]
Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks.
Despite their notable performance, these models are prone to certain limitations, such as misunderstanding human instructions, generating potentially biased content, or producing factually incorrect information.
This survey presents a comprehensive overview of these alignment technologies.
arXiv Detail & Related papers (2023-07-24T17:44:58Z)
- Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond [48.70557995528463]
This guide aims to provide researchers and practitioners with valuable insights and best practices for working with Large Language Models.
We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios.
arXiv Detail & Related papers (2023-04-26T17:52:30Z)
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
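In its simplest form, a retrieval-based approach fetches relevant knowledge at inference time and conditions generation on it; the sketch below uses an invented corpus and a toy word-overlap retriever as a stand-in for a real retriever.

```python
# Toy retrieval-then-generate pipeline in the spirit of retrieval-based
# KE-PLMs; the knowledge snippets and overlap retriever are invented.
KNOWLEDGE = [
    "Paris is the capital of France.",
    "The Seine flows through Paris.",
    "Berlin is the capital of Germany.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank snippets by word overlap with the query (stand-in for a dense retriever).
    q = set(query.lower().split())
    scored = sorted(KNOWLEDGE, key=lambda s: len(q & set(s.lower().split())), reverse=True)
    return scored[:k]

def knowledge_enhanced_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(knowledge_enhanced_prompt("What is the capital of France?"))
```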
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.