Empirical Study of Named Entity Recognition Performance Using
Distribution-aware Word Embedding
- URL: http://arxiv.org/abs/2109.01636v4
- Date: Mon, 22 Jan 2024 01:23:23 GMT
- Title: Empirical Study of Named Entity Recognition Performance Using
Distribution-aware Word Embedding
- Authors: Xin Chen, Qi Zhao, Xinyang Liu
- Abstract summary: We develop a distribution-aware word embedding and implement three different methods to exploit the distribution information in an NER framework.
NER performance improves when word specificity is incorporated into existing NER methods.
- Score: 15.955385058787348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid development of deep learning techniques, Named
Entity Recognition (NER) is becoming increasingly important in information
extraction. The greatest difficulty the NER task faces is maintaining
detection performance even when the entity types and documents are
unfamiliar. Observing that specificity information may capture potential
meanings of a word and yield semantics-related features for word embedding,
we develop a distribution-aware word embedding and implement three different
methods to exploit the distribution information in an NER framework. Our
results show that NER performance improves when word specificity is
incorporated into existing NER methods.
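The abstract does not spell out how word specificity is computed; a minimal sketch, under the assumption that specificity is derived from how unevenly a word is distributed across documents (a distribution-aware signal: low entropy means the word is concentrated in few documents, hence more specific) and then appended to the word embedding as an extra feature, might look like this. The function names and the entropy-based scoring are illustrative, not the paper's actual method.

```python
import math
from collections import Counter, defaultdict

def word_specificity(docs):
    """Score each word by how unevenly it is distributed across documents.

    A word concentrated in few documents (low normalized entropy of its
    per-document count distribution) gets a score near 1 (highly specific);
    a word spread evenly across all documents gets a score near 0.
    """
    counts = defaultdict(Counter)  # word -> {doc index: count}
    for i, doc in enumerate(docs):
        for tok in doc:
            counts[tok][i] += 1
    n_docs = len(docs)
    max_ent = math.log(n_docs) if n_docs > 1 else 1.0
    spec = {}
    for word, per_doc in counts.items():
        total = sum(per_doc.values())
        probs = [c / total for c in per_doc.values()]
        entropy = -sum(p * math.log(p) for p in probs)
        spec[word] = 1.0 - entropy / max_ent
    return spec

def augment_embedding(embedding, word, spec, default=0.5):
    """Append the specificity score as one extra embedding dimension."""
    return embedding + [spec.get(word, default)]
```

Concatenating the score as an extra dimension is only one of several plausible ways to inject the distribution information; the paper implements three different integration methods.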
Related papers
- Named Entity Recognition via Machine Reading Comprehension: A Multi-Task
Learning Approach [50.12455129619845]
Named Entity Recognition (NER) aims to extract and classify entity mentions in the text into pre-defined types.
We propose to incorporate the label dependencies among entity types into a multi-task learning framework for better MRC-based NER.
arXiv Detail & Related papers (2023-09-20T03:15:05Z) - IXA/Cogcomp at SemEval-2023 Task 2: Context-enriched Multilingual Named
Entity Recognition using Knowledge Bases [53.054598423181844]
We present a novel NER cascade approach comprising three steps.
We empirically demonstrate the significance of external knowledge bases in accurately classifying fine-grained and emerging entities.
Our system exhibits robust performance in the MultiCoNER2 shared task, even in the low-resource language setting.
arXiv Detail & Related papers (2023-04-20T20:30:34Z) - Dynamic Named Entity Recognition [5.9401550252715865]
We introduce a new task: Dynamic Named Entity Recognition (DNER).
DNER provides a framework to better evaluate the ability of algorithms to extract entities by exploiting the context.
We evaluate baseline models and present experiments reflecting issues and research axes related to this novel task.
arXiv Detail & Related papers (2023-02-16T15:50:02Z) - Nested Named Entity Recognition as Holistic Structure Parsing [92.8397338250383]
This work models all nested NEs in a sentence as a holistic structure, and we propose a holistic structure parsing algorithm that recognizes the entire set of NEs at once.
Experiments show that our model yields promising results on widely used benchmarks, approaching or even achieving the state of the art.
arXiv Detail & Related papers (2022-04-17T12:48:20Z) - MINER: Improving Out-of-Vocabulary Named Entity Recognition from an
Information Theoretic Perspective [57.19660234992812]
NER models have achieved promising performance on standard NER benchmarks.
Recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition.
We propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective.
arXiv Detail & Related papers (2022-04-09T05:18:20Z) - DAMO-NLP at SemEval-2022 Task 11: A Knowledge-based System for
Multilingual Named Entity Recognition [94.1865071914727]
MultiCoNER aims at detecting semantically ambiguous named entities in short and low-context settings for multiple languages.
Our team DAMO-NLP proposes a knowledge-based system, where we build a multilingual knowledge base based on Wikipedia.
Given an input sentence, our system effectively retrieves related contexts from the knowledge base.
Our system wins 10 out of 13 tracks in the MultiCoNER shared task.
arXiv Detail & Related papers (2022-03-01T15:29:35Z) - KARL-Trans-NER: Knowledge Aware Representation Learning for Named Entity
Recognition using Transformers [0.0]
We propose a Knowledge Aware Representation Learning (KARL) network for Named Entity Recognition (NER).
KARL is based on a Transformer that utilizes large knowledge bases represented as fact triplets, converts them to a context, and extracts the essential information residing inside to generate contextualized triplet representations for feature augmentation.
Experimental results show that the augmentation done using KARL can considerably boost the performance of our NER system and achieve significantly better results than existing approaches in the literature on three publicly available NER datasets, namely CoNLL 2003, CoNLL++, and OntoNotes v5.
arXiv Detail & Related papers (2021-11-30T14:29:33Z) - Improving Named Entity Recognition with Attentive Ensemble of Syntactic
Information [36.03316058182617]
Named entity recognition (NER) is highly sensitive to sentential syntactic and semantic properties.
In this paper, we improve NER by leveraging different types of syntactic information through attentive ensemble.
Experimental results on six English and Chinese benchmark datasets suggest the effectiveness of the proposed model.
arXiv Detail & Related papers (2020-10-29T10:25:17Z) - ASTRAL: Adversarial Trained LSTM-CNN for Named Entity Recognition [16.43239147870092]
We propose an Adversarial Trained LSTM-CNN (ASTRAL) system to improve current NER methods in both model structure and training process.
Our system is evaluated on three benchmarks, CoNLL-03, OntoNotes 5.0, and WNUT-17, achieving state-of-the-art results.
arXiv Detail & Related papers (2020-09-02T13:15:25Z) - Probing Linguistic Features of Sentence-Level Representations in Neural
Relation Extraction [80.38130122127882]
We introduce 14 probing tasks targeting linguistic properties relevant to neural relation extraction (RE).
We use them to study representations learned by more than 40 combinations of encoder architectures and linguistic features trained on two datasets.
We find that the biases induced by the architecture and by the inclusion of linguistic features are clearly expressed in the probing task performance.
arXiv Detail & Related papers (2020-04-17T09:17:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.