Named Entity Recognition and Relation Extraction using Enhanced Table
Filling by Contextualized Representations
- URL: http://arxiv.org/abs/2010.07522v2
- Date: Thu, 27 Jan 2022 02:36:38 GMT
- Title: Named Entity Recognition and Relation Extraction using Enhanced Table
Filling by Contextualized Representations
- Authors: Youmi Ma, Tatsuya Hiraoka, Naoaki Okazaki
- Abstract summary: The proposed method computes representations for entity mentions and long-range dependencies without complicated hand-crafted features or neural-network architectures.
We also adapt a tensor dot-product to predict relation labels all at once without resorting to history-based predictions or search strategies.
Despite its simplicity, the experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on the CoNLL04 and ACE05 English datasets.
- Score: 14.614028420899409
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, a novel method for extracting named entities and relations
from unstructured text based on the table representation is presented. By using
contextualized word embeddings, the proposed method computes representations
for entity mentions and long-range dependencies without complicated
hand-crafted features or neural-network architectures. We also adapt a tensor
dot-product to predict relation labels all at once without resorting to
history-based predictions or search strategies. These advances significantly
simplify the model and algorithm for the extraction of named entities and
relations. Despite its simplicity, the experimental results demonstrate that
the proposed method outperforms the state-of-the-art methods on the CoNLL04 and
ACE05 English datasets. We also confirm that the proposed method achieves
performance comparable to that of state-of-the-art NER models on the ACE05
datasets when multiple sentences are provided for context aggregation.
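To make the described pipeline concrete, the following is a minimal sketch, assuming a PyTorch/Transformers setup: contextualized token embeddings fill an n x n table, the diagonal is scored for entity labels, and all relation labels for every cell are scored at once with a tensor dot-product (a bilinear form per relation label). The encoder name, label counts, and module layout are illustrative assumptions, not the authors' released implementation; training losses over the gold table are omitted.

```python
# Sketch of table filling with contextualized representations (assumed setup).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TableFiller(nn.Module):
    def __init__(self, encoder_name="bert-base-cased",
                 num_entity_labels=9, num_relation_labels=6):  # label counts are assumptions
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.entity_head = nn.Linear(hidden, num_entity_labels)
        # One bilinear tensor U_r per relation label: score(i, j, r) = h_i^T U_r h_j
        self.relation_tensor = nn.Parameter(torch.empty(num_relation_labels, hidden, hidden))
        nn.init.xavier_uniform_(self.relation_tensor)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state  # (batch, n, hidden)
        # Diagonal of the table: entity label scores per token.
        entity_logits = self.entity_head(h)                                # (batch, n, |E|)
        # Every table cell: relation scores for all token pairs and all labels at once.
        # The einsum implements the tensor dot-product h_i^T U_r h_j for all (i, j, r).
        relation_logits = torch.einsum("bih,rhk,bjk->bijr", h, self.relation_tensor, h)
        return entity_logits, relation_logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
batch = tokenizer(["John Smith works for Acme Corp in Boston ."], return_tensors="pt")
model = TableFiller()
entity_logits, relation_logits = model(batch["input_ids"], batch["attention_mask"])
print(entity_logits.shape, relation_logits.shape)  # (1, n, 9) and (1, n, n, 6)
```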
Related papers
- Retrieval-Enhanced Named Entity Recognition [1.2187048691454239]
RENER is a named entity recognition technique that combines autoregressive language models with In-Context Learning and information retrieval.
Experimental results show that the proposed technique achieves state-of-the-art performance on the CrossNER collection.
arXiv Detail & Related papers (2024-10-17T01:12:48Z)
- GraphER: A Structure-aware Text-to-Graph Model for Entity and Relation Extraction [3.579132482505273]
Information extraction is an important task in Natural Language Processing (NLP).
We propose a novel approach to this task by formulating it as graph structure learning (GSL).
This formulation allows for better interaction and structure-informed decisions for entity and relation prediction.
arXiv Detail & Related papers (2024-04-18T20:09:37Z)
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings (see the sketch after this list).
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
arXiv Detail & Related papers (2023-12-18T09:58:19Z)
- Relational Extraction on Wikipedia Tables using Convolutional and Memory Networks [6.200672130699805]
Relation extraction (RE) is the task of extracting relations between entities in text.
We introduce a new model consisting of a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network to encode entities.
arXiv Detail & Related papers (2023-07-11T22:36:47Z)
- Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis [89.04041100520881]
This research proposes to retrieve textual and visual evidence based on the object, sentence, and whole image.
We develop a novel approach to synthesize the object-level, image-level, and sentence-level information for better reasoning between the same and different modalities.
arXiv Detail & Related papers (2023-05-25T15:26:13Z)
- Image Synthesis via Semantic Composition [74.68191130898805]
We present a novel approach to synthesize realistic images based on their semantic layouts.
It hypothesizes that objects with similar appearance share similar representations.
Our method establishes dependencies between regions according to their appearance correlation, yielding both spatially variant and associated representations.
arXiv Detail & Related papers (2021-09-15T02:26:07Z)
- D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate the existence of a relation, using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
arXiv Detail & Related papers (2021-09-10T22:30:48Z)
- Imposing Relation Structure in Language-Model Embeddings Using Contrastive Learning [30.00047118880045]
We propose a novel contrastive learning framework that trains sentence embeddings to encode the relations in a graph structure.
The resulting relation-aware sentence embeddings achieve state-of-the-art results on the relation extraction task.
arXiv Detail & Related papers (2021-09-02T10:58:27Z)
- Learning to Synthesize Data for Semantic Parsing [57.190817162674875]
We propose a generative model which models the composition of programs and maps a program to an utterance.
Due to the simplicity of PCFG and pre-trained BART, our generative model can be efficiently learned from existing data at hand.
We evaluate our method in both in-domain and out-of-domain settings of text-to-query parsing on the standard benchmarks of GeoQuery and Spider.
arXiv Detail & Related papers (2021-04-12T21:24:02Z)
- Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task whose tag set is composed of trigger and entity tags.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)
- Multidirectional Associative Optimization of Function-Specific Word Representations [86.87082468226387]
We present a neural framework for learning associations between interrelated groups of words.
Our model induces a joint function-specific word vector space, where vectors of e.g. plausible SVO compositions lie close together.
The model retains information about word group membership even in the joint space, and can thereby effectively be applied to a number of tasks reasoning over the SVO structure.
arXiv Detail & Related papers (2020-05-11T17:07:20Z)
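For the indirect strategy referenced in the "Entity or Relation Embeddings?" entry above, here is a minimal, hypothetical sketch: entity spans are wrapped in marker tokens, the marker embeddings produced by a fine-tuned LM serve as head and tail entity embeddings, and a linear layer predicts the relation from their concatenation. The marker tokens, encoder name, and label count are assumptions for illustration, not details taken from that paper.

```python
# Sketch of relation classification from head/tail entity embeddings (assumed setup).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EntityPairClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-cased", num_relations=5):  # assumptions
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        # [E1]/[E2] mark the head and tail entity spans in the input text.
        self.tokenizer.add_special_tokens(
            {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]})
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.encoder.resize_token_embeddings(len(self.tokenizer))
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, text):
        enc = self.tokenizer(text, return_tensors="pt")
        h = self.encoder(**enc).last_hidden_state[0]            # (n, hidden)
        ids = enc["input_ids"][0]
        # Positions of the opening markers act as the entity embeddings.
        e1 = (ids == self.tokenizer.convert_tokens_to_ids("[E1]")).nonzero()[0, 0]
        e2 = (ids == self.tokenizer.convert_tokens_to_ids("[E2]")).nonzero()[0, 0]
        pair = torch.cat([h[e1], h[e2]], dim=-1)                # head ++ tail embedding
        return self.classifier(pair)                            # relation logits

model = EntityPairClassifier()
logits = model("[E1] John Smith [/E1] works for [E2] Acme Corp [/E2] .")
print(logits.shape)  # (num_relations,)
```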
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.