TriggerNER: Learning with Entity Triggers as Explanations for Named
Entity Recognition
- URL: http://arxiv.org/abs/2004.07493v4
- Date: Tue, 7 Jul 2020 01:10:11 GMT
- Title: TriggerNER: Learning with Entity Triggers as Explanations for Named
Entity Recognition
- Authors: Bill Yuchen Lin, Dong-Ho Lee, Ming Shen, Ryan Moreno, Xiao Huang,
Prashant Shiralkar, Xiang Ren
- Abstract summary: We introduce "entity triggers," an effective proxy of human explanations for facilitating label-efficient learning of NER models.
We crowd-sourced 14k entity triggers for two well-studied NER datasets.
Our proposed model, the Trigger Matching Network, jointly learns trigger representations and a soft matching module with self-attention.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training neural models for named entity recognition (NER) in a new domain
often requires additional human annotations (e.g., tens of thousands of labeled
instances) that are usually expensive and time-consuming to collect. Thus, a
crucial research question is how to obtain supervision in a cost-effective way.
In this paper, we introduce "entity triggers," an effective proxy of human
explanations for facilitating label-efficient learning of NER models. An entity
trigger is defined as a group of words in a sentence that helps to explain why
humans would recognize an entity in the sentence.
We crowd-sourced 14k entity triggers for two well-studied NER datasets. Our
proposed model, the Trigger Matching Network, jointly learns trigger
representations and a soft matching module with self-attention, so that it can
easily generalize to unseen sentences for tagging. Our framework is
significantly more cost-effective than traditional neural NER frameworks.
Experiments show that using only 20% of the trigger-annotated sentences yields
performance comparable to using 70% of conventionally annotated sentences.
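The soft matching step can be sketched, under simplifying assumptions, as dot-product attention between pre-computed sentence token embeddings and a learned trigger representation; the function names and shapes below are illustrative, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_trigger(token_vecs, trigger_vec):
    """Soft-match a trigger representation against sentence tokens.

    token_vecs: (num_tokens, dim) token embeddings (assumed given).
    trigger_vec: (dim,) learned trigger representation (assumed given).
    Returns attention weights over tokens and a trigger-conditioned
    sentence summary vector that a tagger could condition on.
    """
    scores = token_vecs @ trigger_vec    # (num_tokens,) match scores
    weights = softmax(scores)            # attention distribution
    summary = weights @ token_vecs       # (dim,) weighted summary
    return weights, summary
```

Matching an unseen sentence against stored trigger representations this way is what lets soft matching generalize: no exact trigger phrase needs to appear in the new sentence.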
Related papers
- In-Context Learning for Few-Shot Nested Named Entity Recognition [53.55310639969833]
We introduce an effective and innovative ICL framework for the setting of few-shot nested NER.
We improve the ICL prompt by devising a novel example demonstration selection mechanism, EnDe retriever.
In EnDe retriever, we employ contrastive learning to perform three types of representation learning, in terms of semantic similarity, boundary similarity, and label similarity.
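As one hedged illustration of contrastive representation learning (the EnDe retriever's exact objective may differ), an InfoNCE-style loss pulls an anchor embedding toward a positive example and away from negatives in a shared space:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative, not EnDe's exact loss).

    anchor, positive: (dim,) embeddings that should be similar.
    negatives: iterable of (dim,) embeddings that should be dissimilar.
    """
    candidates = np.vstack([positive] + list(negatives))
    sims = candidates @ anchor / temperature   # scaled similarities
    sims -= sims.max()                         # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()  # softmax over candidates
    return float(-np.log(probs[0]))            # NLL of the positive
```

Applying such a loss with semantic-, boundary-, and label-based positives would correspond to the three types of representation learning described above.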
arXiv Detail & Related papers (2024-02-02T06:57:53Z) - Less than One-shot: Named Entity Recognition via Extremely Weak
Supervision [46.81604901567282]
We study the named entity recognition problem under the extremely weak supervision setting.
We propose a novel method X-NER that can outperform the state-of-the-art one-shot NER methods.
X-NER possesses several notable properties, such as inheriting the cross-lingual abilities of the underlying language models.
arXiv Detail & Related papers (2023-11-06T04:20:42Z) - Named Entity Recognition via Machine Reading Comprehension: A Multi-Task
Learning Approach [50.12455129619845]
Named Entity Recognition (NER) aims to extract and classify entity mentions in the text into pre-defined types.
We propose to incorporate the label dependencies among entity types into a multi-task learning framework for better MRC-based NER.
arXiv Detail & Related papers (2023-09-20T03:15:05Z) - Optimizing Bi-Encoder for Named Entity Recognition via Contrastive
Learning [80.36076044023581]
We present an efficient bi-encoder framework for named entity recognition (NER).
We frame NER as a metric learning problem that maximizes the similarity between the vector representations of an entity mention and its type.
A major challenge to this bi-encoder formulation for NER lies in separating non-entity spans from entity mentions.
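The metric-learning formulation can be sketched as nearest-type classification by cosine similarity; the score threshold below is one simple, hypothetical way to separate non-entity spans, and `classify_span`, its names, and the threshold value are illustrative assumptions rather than the paper's method:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_span(span_vec, type_vecs, type_names, threshold=0.8):
    """Assign a span embedding to the most similar entity-type embedding,
    or to the non-entity label "O" if no type is similar enough."""
    sims = [cosine(span_vec, t) for t in type_vecs]
    best = int(np.argmax(sims))
    return type_names[best] if sims[best] >= threshold else "O"
```

In a real bi-encoder, the span and type vectors would come from trained encoders; the threshold stands in for whatever mechanism the model uses to reject non-entity spans.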
arXiv Detail & Related papers (2022-08-30T23:19:04Z) - Trigger-GNN: A Trigger-Based Graph Neural Network for Nested Named
Entity Recognition [5.9049664765234295]
We propose a trigger-based graph neural network (Trigger-GNN) to leverage the nested NER.
It obtains the complementary annotation embeddings through entity trigger encoding and semantic matching.
It helps the model to learn and generalize more efficiently and cost-effectively.
arXiv Detail & Related papers (2022-04-12T04:15:39Z) - Focusing on Potential Named Entities During Active Label Acquisition [0.0]
Named entity recognition (NER) aims to identify mentions of named entities in an unstructured text.
Many domain-specific NER applications still call for a substantial amount of labeled data.
We propose a better data-driven normalization approach to penalize sentences that are too long or too short.
arXiv Detail & Related papers (2021-11-06T09:04:16Z) - Low-Resource Named Entity Recognition Based on Multi-hop Dependency
Trigger [0.0]
This paper presents a simple and effective approach in low-resource named entity recognition (NER) based on multi-hop dependency trigger.
Our main observation is that there often exists a trigger that plays an important role in recognizing the location and type of an entity in a sentence.
arXiv Detail & Related papers (2021-09-15T07:00:40Z) - AutoTriggER: Label-Efficient and Robust Named Entity Recognition with
Auxiliary Trigger Extraction [54.20039200180071]
We present a novel framework to improve NER performance by automatically generating and leveraging "entity triggers".
Our framework leverages post-hoc explanation to generate rationales and strengthens a model's prior knowledge using an embedding technique.
AutoTriggER shows strong label-efficiency, is capable of generalizing to unseen entities, and outperforms the RoBERTa-CRF baseline by nearly 0.5 F1 points on average.
arXiv Detail & Related papers (2021-09-10T08:11:56Z) - A Sequence-to-Set Network for Nested Named Entity Recognition [38.05786148160635]
We propose a novel sequence-to-set neural network for nested NER.
We use a non-autoregressive decoder to predict the final set of entities in one pass.
Experimental results show that our proposed model achieves state-of-the-art on three nested NER corpora.
arXiv Detail & Related papers (2021-05-19T03:10:04Z) - R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic
Matching [58.72111690643359]
We propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task.
arXiv Detail & Related papers (2020-12-16T13:11:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.