Empirical Analysis of Unlabeled Entity Problem in Named Entity Recognition
- URL: http://arxiv.org/abs/2012.05426v5
- Date: Thu, 18 Mar 2021 06:38:57 GMT
- Title: Empirical Analysis of Unlabeled Entity Problem in Named Entity Recognition
- Authors: Yangming Li, Lemao Liu, Shuming Shi
- Abstract summary: In many scenarios, named entity recognition models severely suffer from the unlabeled entity problem.
We propose a general approach that can almost eliminate the misguidance brought by unlabeled entities.
Our model is robust to the unlabeled entity problem and surpasses prior baselines.
- Score: 47.273602658066196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many scenarios, named entity recognition (NER) models severely suffer from
the unlabeled entity problem, where the entities of a sentence may not be fully
annotated. Through empirical studies performed on synthetic datasets, we find
two causes of performance degradation. One is the reduction of annotated
entities and the other is treating unlabeled entities as negative instances.
The first cause has less impact than the second and can be mitigated by
adopting pretrained language models. The second cause seriously misguides a
model during training and greatly degrades its performance. Based on the above
observations, we propose a general approach that can almost eliminate the
misguidance brought by unlabeled entities. The key idea is to use negative
sampling, which to a large extent avoids training NER models with unlabeled
entities. Experiments on synthetic and real-world datasets show that our model
is robust to the unlabeled entity problem and surpasses prior baselines. On
well-annotated datasets, our model is competitive with the state-of-the-art
method.
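The negative-sampling idea from the abstract can be sketched as follows: enumerate all candidate spans of a sentence, exclude the annotated entity spans, and draw only a small random subset of the rest as negative instances, so unlabeled entities hiding among the unannotated spans are rarely used in training. The function name, span representation, and the 0.35 sampling ratio below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def sample_negative_spans(tokens, gold_spans, ratio=0.35):
    """Sample a subset of non-entity spans to serve as negative
    instances, instead of treating every unannotated span as a
    negative. Spans are (start, end) pairs with inclusive ends."""
    n = len(tokens)
    gold = set(gold_spans)
    # All O(n^2) candidate spans, minus the annotated entities.
    candidates = [(i, j) for i in range(n) for j in range(i, n)
                  if (i, j) not in gold]
    # Sample size grows only linearly in sentence length, so an
    # unlabeled entity among the candidates is unlikely to be drawn.
    k = min(math.ceil(ratio * n), len(candidates))
    return random.sample(candidates, k)
```

A training step would then compute the span-classification loss over the gold spans (positives) plus only the sampled spans (negatives), rather than over all unannotated spans.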
Related papers
- Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains [51.02035914828596]
We study the task of seed-guided fine-grained entity typing in science and engineering domains.
We propose SEType which first enriches the weak supervision by finding more entities for each seen type from an unlabeled corpus.
It then matches the enriched entities to unlabeled text to get pseudo-labeled samples and trains a textual entailment model that can make inferences for both seen and unseen types.
arXiv Detail & Related papers (2024-01-23T22:36:03Z)
- Continual Named Entity Recognition without Catastrophic Forgetting [37.316700599440935]
We introduce a pooled feature distillation loss that skillfully navigates the trade-off between retaining knowledge of old entity types and acquiring new ones.
We develop a confidence-based pseudo-labeling for the non-entity type.
We suggest an adaptive re-weighting type-balanced learning strategy to handle the issue of biased type distribution.
arXiv Detail & Related papers (2023-10-23T03:45:30Z)
- Enhancing Low-resource Fine-grained Named Entity Recognition by Leveraging Coarse-grained Datasets [1.5500145658862499]
$K$-shot learning techniques can be applied, but their performance tends to saturate when the number of annotations exceeds several tens of labels.
We propose a fine-grained NER model with a Fine-to-Coarse (F2C) mapping matrix to leverage the hierarchical structure explicitly.
Our method outperforms both $K$-shot learning and supervised learning methods when dealing with a small number of fine-grained annotations.
arXiv Detail & Related papers (2023-10-18T05:13:34Z)
- Ground Truth Inference for Weakly Supervised Entity Matching [76.6732856489872]
We propose a simple but powerful labeling model for weak supervision tasks.
We then tailor the labeling model specifically to the task of entity matching.
We show that our labeling model results in a 9% higher F1 score on average than the best existing method.
arXiv Detail & Related papers (2022-11-13T17:57:07Z)
- Recognizing Nested Entities from Flat Supervision: A New NER Subtask, Feasibility and Challenges [3.614392310669357]
This study proposes a new subtask, nested-from-flat NER, which corresponds to a realistic application scenario.
We train span-based models and deliberately ignore the spans nested inside labeled entities, since these spans are possibly unlabeled entities.
With nested entities removed from the training data, our model achieves 54.8%, 54.2% and 41.1% F1 scores on the subset of spans within entities on ACE 2004, ACE 2005 and GENIA, respectively.
arXiv Detail & Related papers (2022-11-01T06:41:42Z)
- A Noise-Robust Loss for Unlabeled Entity Problem in Named Entity Recognition [9.321777368120658]
We propose a new loss function called NRCES to cope with unlabeled data.
Experiments on synthetic and real-world datasets demonstrate that our approach shows strong robustness in the case of a severe unlabeled entity problem.
arXiv Detail & Related papers (2022-08-05T00:02:13Z)
- Adversarial Dual-Student with Differentiable Spatial Warping for Semi-Supervised Semantic Segmentation [70.2166826794421]
We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve the Mean-Teacher.
Our solution significantly improves performance, achieving state-of-the-art results on both datasets.
arXiv Detail & Related papers (2022-03-05T17:36:17Z)
- Rethinking Negative Sampling for Unlabeled Entity Problem in Named Entity Recognition [47.273602658066196]
Unlabeled entities seriously degrade the performance of named entity recognition models.
We analyze why negative sampling succeeds both theoretically and empirically.
We propose a weighted and adaptive sampling distribution for negative sampling.
arXiv Detail & Related papers (2021-08-26T07:02:57Z)
- Autoregressive Entity Retrieval [55.38027440347138]
Entities are at the center of how we represent and aggregate knowledge.
The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering.
We propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion.
arXiv Detail & Related papers (2020-10-02T10:13:31Z)
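GENRE's generate-the-name retrieval can be illustrated with a toy constrained decoder: candidate entity names are stored in a prefix trie, and at each step only tokens that extend some valid name are allowed, with a scoring function (a stand-in for the next-token probabilities GENRE obtains from a seq2seq transformer) choosing among them. The whitespace tokenization, the "$" end marker, and the greedy search below are illustrative assumptions, not GENRE's implementation.

```python
def build_entity_trie(names):
    """Prefix trie over whitespace-tokenized entity names;
    the token '$' marks the end of a complete name."""
    trie = {}
    for name in names:
        node = trie
        for tok in name.split() + ["$"]:
            node = node.setdefault(tok, {})
    return trie

def generate_entity(trie, score):
    """Greedy left-to-right decoding: at every step only tokens that
    extend some valid entity name are candidates, and `score` (a
    stand-in for a model's next-token log-probability) picks one."""
    out, node = [], trie
    while True:
        tok = max(node, key=score)  # decoding restricted to the trie
        if tok == "$":
            return " ".join(out)
        out.append(tok)
        node = node[tok]
```

Because every generated sequence is forced to follow the trie, the decoder can only ever emit an exact entity name from the catalog, which is what makes generation usable as retrieval.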
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.