Large Language Models on Lexical Semantic Change Detection: An Evaluation
- URL: http://arxiv.org/abs/2312.06002v1
- Date: Sun, 10 Dec 2023 21:26:35 GMT
- Title: Large Language Models on Lexical Semantic Change Detection: An Evaluation
- Authors: Ruiyu Wang, Matthew Choi
- Abstract summary: Lexical Semantic Change Detection is one of the few areas where Large Language Models (LLMs) have not been extensively involved.
Our work presents novel prompting solutions and a comprehensive evaluation that spans all three generations of language models.
- Score: 0.8158530638728501
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Lexical Semantic Change Detection stands out as one of the few areas where Large Language Models (LLMs) have not been extensively applied. Traditional methods like PPMI and SGNS remain prevalent in research, alongside newer BERT-based approaches. Despite the comprehensive coverage of various natural language processing domains by LLMs, there is a notable scarcity of literature concerning their application in this specific realm. In this work, we seek to bridge this gap by introducing LLMs into the domain of Lexical Semantic Change Detection. Our work presents novel prompting solutions and a comprehensive evaluation that spans all three generations of language models, contributing to the exploration of LLMs in this research area.
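The abstract names PPMI as one of the traditional baselines for Lexical Semantic Change Detection. As a minimal sketch of that idea (not the paper's implementation), the snippet below builds sparse PPMI vectors from two time-sliced corpora and scores a target word's change by cosine distance; the toy corpora and the target word "cell" are illustrative assumptions.

```python
# Hedged sketch of PPMI-based lexical semantic change detection.
# Toy corpora and the target word "cell" are made up for illustration.
import math
from collections import Counter, defaultdict

def ppmi_vectors(sentences, window=2):
    """Return sparse PPMI vectors {word: {context_word: ppmi}}."""
    cooc = defaultdict(Counter)
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    cooc[w][sent[j]] += 1
    total = sum(sum(c.values()) for c in cooc.values())
    w_count = {w: sum(c.values()) for w, c in cooc.items()}
    c_count = Counter()
    for ctxs in cooc.values():
        c_count.update(ctxs)
    return {
        w: {c: max(0.0, math.log2((n * total) / (w_count[w] * c_count[c])))
            for c, n in ctxs.items()}
        for w, ctxs in cooc.items()
    }

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Two toy time slices in which "cell" shifts from biology to telephony.
corpus_1900 = [["the", "cell", "membrane", "divides"],
               ["blood", "cell", "under", "microscope"]]
corpus_2000 = [["charge", "your", "cell", "phone"],
               ["cell", "phone", "signal", "dropped"]]

v1 = ppmi_vectors(corpus_1900)
v2 = ppmi_vectors(corpus_2000)
# Higher score = greater semantic change between the two periods.
change_score = 1.0 - cosine(v1["cell"], v2["cell"])
```

Because the sparse dimensions are context words themselves, the two vectors are directly comparable without the vector-space alignment step that dense SGNS embeddings would require.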
Related papers
- A Survey on Large Language Models with Multilingualism: Recent Advances and New Frontiers [48.314619377988436]
The rapid development of Large Language Models (LLMs) demonstrates remarkable multilingual capabilities in natural language processing.
Despite the breakthroughs of LLMs, the investigation into the multilingual scenario remains insufficient.
This survey aims to help the research community address multilingual problems and provide a comprehensive understanding of the core concepts, key techniques, and latest developments in multilingual natural language processing based on LLMs.
arXiv Detail & Related papers (2024-05-17T17:47:39Z)
- Analyzing the Role of Semantic Representations in the Era of Large Language Models [104.18157036880287]
We investigate the role of semantic representations in the era of large language models (LLMs).
We propose an AMR-driven chain-of-thought prompting method, which we call AMRCoT.
We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions.
arXiv Detail & Related papers (2024-05-02T17:32:59Z)
- SambaLingo: Teaching Large Language Models New Languages [16.709876506515837]
We present a comprehensive investigation into the adaptation of LLMs to new languages.
Our study covers the key components in this process, including vocabulary extension and direct preference optimization.
We scale these experiments across 9 languages and 2 parameter scales.
arXiv Detail & Related papers (2024-04-08T19:48:36Z)
- Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers [81.47046536073682]
We present a review and provide a unified perspective to summarize the recent progress as well as emerging trends in multilingual large language models (MLLMs) literature.
We hope our work can provide the community with quick access and spur breakthrough research in MLLMs.
arXiv Detail & Related papers (2024-04-07T11:52:44Z)
- Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models [117.20416338476856]
Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora.
We propose a novel detection method, language activation probability entropy (LAPE), to identify language-specific neurons within LLMs.
Our findings indicate that LLMs' proficiency in processing a particular language is predominantly due to a small subset of neurons.
arXiv Detail & Related papers (2024-02-26T09:36:05Z)
- Unveiling Linguistic Regions in Large Language Models [49.298360366468934]
Large Language Models (LLMs) have demonstrated considerable cross-lingual alignment and generalization ability.
This paper conducts several investigations into the linguistic competence of LLMs.
arXiv Detail & Related papers (2024-02-22T16:56:13Z)
- Cross-lingual Editing in Multilingual Language Models [1.3062731746155414]
This paper introduces the cross-lingual model editing (XME) paradigm, wherein a fact is edited in one language, and the subsequent update propagation is observed across other languages.
The results reveal notable performance limitations of state-of-the-art METs under the XME setting, mainly when the languages involved belong to two distinct script families.
arXiv Detail & Related papers (2024-01-19T06:54:39Z)
- A Comparative Study of Lexical Substitution Approaches based on Neural Language Models [117.96628873753123]
We present a large-scale comparative study of popular neural language and masked language models.
We show that the already competitive results achieved by SOTA LMs/MLMs can be further improved if information about the target word is injected properly.
arXiv Detail & Related papers (2020-05-29T18:43:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.