Adversarial Demonstration Learning for Low-resource NER Using Dual Similarity
- URL: http://arxiv.org/abs/2507.15864v1
- Date: Sun, 13 Jul 2025 07:16:08 GMT
- Title: Adversarial Demonstration Learning for Low-resource NER Using Dual Similarity
- Authors: Guowen Yuan, Tien-Hsuan Wu, Lianghao Xia, Ben Kao
- Abstract summary: We study the problem of named entity recognition based on demonstration learning in low-resource scenarios. Existing methods for selecting demonstration examples rely on semantic similarity. We show that feature similarity can provide significant performance improvement.
- Score: 18.298608083596548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of named entity recognition (NER) based on demonstration learning in low-resource scenarios. We identify two issues in demonstration construction and model training. Firstly, existing methods for selecting demonstration examples primarily rely on semantic similarity; we show that feature similarity can provide significant performance improvement. Secondly, we show that the NER tagger's ability to reference demonstration examples is generally inadequate. We propose a demonstration and training approach that effectively addresses these issues. For the first issue, we propose to select examples by dual similarity, which comprises both semantic similarity and feature similarity. For the second issue, we propose to train an NER model with adversarial demonstration such that the model is forced to refer to the demonstrations when performing the tagging task. We conduct comprehensive experiments on low-resource NER tasks, and the results demonstrate that our method outperforms a range of baseline methods.
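As an illustration of the dual-similarity idea in the abstract, the sketch below ranks candidate examples by a weighted combination of semantic similarity and feature similarity and returns the top-k as demonstrations. The vector representations, the `alpha` weight, and the simple linear combination are illustrative assumptions; the paper's exact scoring function is not reproduced here.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def select_demonstrations(query_sem, query_feat, pool, alpha=0.5, k=3):
    # Score each candidate by a weighted sum of semantic similarity
    # (e.g. sentence-embedding cosine) and feature similarity, then
    # return the k highest-scoring examples as demonstrations.
    scored = []
    for example, sem_vec, feat_vec in pool:
        score = (alpha * cosine(query_sem, sem_vec)
                 + (1 - alpha) * cosine(query_feat, feat_vec))
        scored.append((score, example))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [example for _, example in scored[:k]]
```

Setting `alpha=1.0` recovers the purely semantic selection that the paper argues is insufficient on its own.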
Related papers
- PICLe: Pseudo-Annotations for In-Context Learning in Low-Resource Named Entity Detection [56.916656013563355]
In-context learning (ICL) enables Large Language Models to perform tasks using few demonstrations. We propose PICLe, a framework for in-context learning with noisy, pseudo-annotated demonstrations. We evaluate PICLe on five biomedical NED datasets and show that, with zero human annotation, PICLe outperforms ICL in low-resource settings.
arXiv Detail & Related papers (2024-12-16T16:09:35Z)
- DemoRank: Selecting Effective Demonstrations for Large Language Models in Ranking Task [24.780407347867943]
This paper explores how to select appropriate in-context demonstrations for the passage ranking task.
We propose a demonstration selection framework DemoRank for ranking task.
arXiv Detail & Related papers (2024-06-24T06:10:13Z)
- Demonstration Notebook: Finding the Most Suited In-Context Learning Example from Interactions [8.869100154323643]
We propose a prompt engineering workflow built around a novel object called the "demonstration notebook".
This notebook helps identify the most suitable in-context learning example for a question by gathering and reusing information from the LLM's past interactions.
Our experiments show that this approach outperforms all existing methods for automatic demonstration construction and selection.
arXiv Detail & Related papers (2024-06-16T10:02:20Z)
- Enhancing Chain of Thought Prompting in Large Language Models via Reasoning Patterns [26.641713417293538]
Chain of Thought (CoT) prompting can encourage language models to engage in logical reasoning. We propose leveraging reasoning patterns to enhance CoT prompting effectiveness.
arXiv Detail & Related papers (2024-04-23T07:50:00Z)
- Revisiting Demonstration Selection Strategies in In-Context Learning [66.11652803887284]
Large language models (LLMs) have shown an impressive ability to perform a wide range of tasks using in-context learning (ICL).
In this work, we first revisit the factors contributing to this variance from both data and model aspects, and find that the choice of demonstration is both data- and model-dependent.
We propose a data- and model-dependent demonstration selection method, TopK + ConE, based on the assumption that the performance of a demonstration positively correlates with its contribution to the model's understanding of the test samples.
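The assumption stated in this blurb (a demonstration's value tracks its contribution to the model's understanding of the test sample) could be sketched as ranking candidates by how much prepending each one raises the model's likelihood of the test input. This is a generic illustration, not the paper's ConE implementation; `log_likelihood` is a hypothetical user-supplied callable.

```python
def rank_by_contribution(test_input, candidates, log_likelihood):
    # Score each candidate demonstration by the gain in the model's
    # log-likelihood of the test input when the demonstration is
    # prepended as context, relative to an empty-context baseline.
    # `log_likelihood(context, text)` is a user-supplied callable.
    baseline = log_likelihood("", test_input)
    scored = [(log_likelihood(demo, test_input) - baseline, demo)
              for demo in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [demo for _, demo in scored]
```

In practice the callable would wrap a scoring pass over a language model; any stand-in that measures relevance to the test input preserves the ranking logic.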
arXiv Detail & Related papers (2024-01-22T16:25:27Z)
- Behavioral Cloning via Search in Embedded Demonstration Dataset [0.15293427903448023]
Behavioural cloning uses a dataset of demonstrations to learn a behavioural policy.
We use latent space to index a demonstration dataset, instantly access similar relevant experiences, and copy behavior from these situations.
Our approach can effectively recover meaningful demonstrations and show human-like behavior of an agent in the Minecraft environment.
arXiv Detail & Related papers (2023-06-15T12:25:41Z)
- Skill Disentanglement for Imitation Learning from Suboptimal Demonstrations [60.241144377865716]
We consider the imitation of sub-optimal demonstrations, with both a small clean demonstration set and a large noisy set.
We propose a method that evaluates and imitates at the sub-demonstration level, encoding action primitives of varying quality into different skills.
arXiv Detail & Related papers (2023-06-13T17:24:37Z)
- In-Context Demonstration Selection with Cross Entropy Difference [95.21947716378641]
Large language models (LLMs) can use in-context demonstrations to improve performance on zero-shot tasks.
We present a cross-entropy difference (CED) method for selecting in-context demonstrations.
arXiv Detail & Related papers (2023-05-24T05:04:00Z)
- Robustness of Demonstration-based Learning Under Limited Data Scenario [54.912936555876826]
Demonstration-based learning has shown great potential in stimulating pretrained language models' ability in limited-data scenarios.
Why such demonstrations are beneficial for the learning process remains unclear since there is no explicit alignment between the demonstrations and the predictions.
In this paper, we design pathological demonstrations by gradually removing intuitively useful information from the standard ones to take a deep dive into the robustness of demonstration-based sequence labeling.
arXiv Detail & Related papers (2022-10-19T16:15:04Z)
- Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation [9.851250429233634]
Demonstration learning aims to guide the prompt prediction by providing answered demonstrations in few-shot settings.
Existing work only incorporates the answered examples as demonstrations into the prompt template without any additional operation.
We introduce Imitation DEMOnstration Learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour.
arXiv Detail & Related papers (2022-08-31T06:59:36Z)
- Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? [112.72413411257662]
Large language models (LMs) are able to in-context learn by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs.
We show that ground truth demonstrations are in fact not required -- randomly replacing labels in the demonstrations barely hurts performance.
We find that other aspects of the demonstrations are the key drivers of end task performance.
arXiv Detail & Related papers (2022-02-25T17:25:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.