Ered: Enhanced Text Representations with Entities and Descriptions
- URL: http://arxiv.org/abs/2208.08954v1
- Date: Thu, 18 Aug 2022 16:51:16 GMT
- Title: Ered: Enhanced Text Representations with Entities and Descriptions
- Authors: Qinghua Zhao, Shuai Ma, Yuxuan Lei
- Abstract summary: External knowledge, e.g., entities and entity descriptions, can help humans understand texts.
This paper aims to explicitly include both entities and entity descriptions in the fine-tuning stage.
We conducted experiments on four knowledge-oriented tasks and two common tasks, achieving new state-of-the-art results on several datasets.
- Score: 5.977668609935748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: External knowledge, e.g., entities and entity descriptions, can
help humans understand texts. Many works have explored incorporating external
knowledge into pre-trained models. These methods generally design pre-training
tasks that introduce knowledge implicitly by updating model weights, or
alternatively use the knowledge directly together with the original text.
Though effective, these approaches have limitations. On the one hand, the
injection is implicit: only the model weights receive attention, while the
pre-trained entity embeddings are ignored. On the other hand, entity
descriptions may be lengthy, and feeding them into the model together with the
original text may distract the model's attention. This paper aims to
explicitly include both entities and entity descriptions in the fine-tuning
stage. First, the pre-trained entity embeddings are fused with the original
text representation and updated by the backbone model layer by layer. Second,
descriptions are represented by a knowledge module outside the backbone model,
and each knowledge layer is selectively connected to one backbone layer for
fusion. Third, two knowledge-related auxiliary tasks, i.e., an
entity/description enhancement task and an entity enhancement/pollution task,
are designed to smooth the semantic gaps among the evolved representations.
We conducted experiments on four knowledge-oriented tasks and two common
tasks, achieving new state-of-the-art results on several datasets. In
addition, an ablation study shows that each module in our method is necessary.
The code is available at https://github.com/lshowway/Ered.
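The layer-by-layer fusion of pre-trained entity embeddings described in the abstract can be sketched in miniature. The function below is a hypothetical simplification, not Ered's exact design: the additive fusion operator, the linear projection, and all names are illustrative assumptions. It projects an entity embedding into the hidden space and adds it to the token representations at the entity's mention positions, as one backbone layer might do.

```python
def matvec(mat, vec):
    # Project the entity embedding into the hidden space: (d_h x d_e) @ (d_e,).
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

def fuse_entity_embedding(token_reps, entity_emb, mention_positions, proj):
    """Hypothetical fusion step: add the projected pre-trained entity
    embedding onto the token representations at the entity's mention
    positions, leaving other tokens unchanged."""
    projected = matvec(proj, entity_emb)
    fused = []
    for i, rep in enumerate(token_reps):
        if i in mention_positions:
            fused.append([r + p for r, p in zip(rep, projected)])
        else:
            fused.append(list(rep))
    return fused

# Toy example: 3 tokens with hidden size 2, entity embedding of size 2.
tokens = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
entity = [1.0, 2.0]
proj = [[0.1, 0.0], [0.0, 0.1]]  # illustrative down-scaling projection

fused = fuse_entity_embedding(tokens, entity, {1}, proj)
```

In the full model this step would be repeated at every backbone layer, with the backbone updating the fused representation in between; this sketch shows only a single fusion.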
Related papers
- KETM: A Knowledge-Enhanced Text Matching method [0.0]
We introduce a new model for text matching called the Knowledge Enhanced Text Matching model (KETM).
We use Wiktionary to retrieve the text word definitions as our external knowledge.
We fuse text and knowledge using a gating mechanism that learns the ratio of text to knowledge in the fusion.
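The gating mechanism above can be sketched as follows. The exact gate form in KETM is an assumption here (a per-dimension sigmoid gate computed over the concatenated text and knowledge vectors), so treat this as an illustrative fusion, not the paper's implementation.

```python
import math

def gated_fusion(text_vec, knowledge_vec, gate_weights, gate_bias):
    """Hypothetical gated fusion: the gate reads the concatenated
    [text; knowledge] vector and outputs, per dimension, how much of the
    text versus the knowledge representation to keep."""
    concat = text_vec + knowledge_vec  # list concatenation: [t; k]
    fused = []
    for d in range(len(text_vec)):
        z = sum(w * x for w, x in zip(gate_weights[d], concat)) + gate_bias[d]
        g = 1.0 / (1.0 + math.exp(-z))  # sigmoid gate in (0, 1)
        fused.append(g * text_vec[d] + (1.0 - g) * knowledge_vec[d])
    return fused

# Toy example: 2-dim vectors; a zero-weight gate gives g = 0.5 everywhere,
# so the fusion simply averages text and knowledge.
text = [1.0, 0.0]
knowledge = [0.0, 1.0]
weights = [[0.0] * 4, [0.0] * 4]
bias = [0.0, 0.0]

fused = gated_fusion(text, knowledge, weights, bias)
```

In practice the gate weights would be learned end-to-end, so the model can lean on the text where the retrieved definition is noisy and on the knowledge where the text is ambiguous.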
arXiv Detail & Related papers (2023-08-11T17:08:14Z)
- Modeling Entities as Semantic Points for Visual Information Extraction in the Wild [55.91783742370978]
We propose an alternative approach to precisely and robustly extract key information from document images.
We explicitly model entities as semantic points, i.e., the center points of entities are enriched with semantic information describing the attributes and relationships of different entities.
The proposed method can achieve significantly enhanced performance on entity labeling and linking, compared with previous state-of-the-art models.
arXiv Detail & Related papers (2023-03-23T08:21:16Z)
- Representing Knowledge by Spans: A Knowledge-Enhanced Model for Information Extraction [7.077412533545456]
We propose a new pre-trained model that learns representations of both entities and relationships simultaneously.
By encoding spans efficiently with span modules, our model can represent both entities and their relationships but requires fewer parameters than existing models.
arXiv Detail & Related papers (2022-08-20T07:32:25Z)
- Fashionformer: A Simple, Effective and Unified Baseline for Human Fashion Segmentation and Recognition [80.74495836502919]
In this work, we focus on joint human fashion segmentation and attribute recognition.
We introduce the object query for segmentation and the attribute query for attribute prediction.
For the attribute stream, we design a novel Multi-Layer Rendering module to explore more fine-grained features.
arXiv Detail & Related papers (2022-04-10T11:11:10Z)
- Syntax-Enhanced Pre-trained Model [49.1659635460369]
We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.
Existing methods utilize the syntax of text either in the pre-training stage or in the fine-tuning stage, and thus suffer from a discrepancy between the two stages.
We present a model that utilizes the syntax of text in both pre-training and fine-tuning stages.
arXiv Detail & Related papers (2020-12-28T06:48:04Z)
- KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation [100.79870384880333]
We propose a knowledge-grounded pre-training (KGPT) to generate knowledge-enriched text.
We adopt three settings, namely fully-supervised, zero-shot, and few-shot, to evaluate its effectiveness.
Under the zero-shot setting, our model achieves over 30 ROUGE-L on WebNLG, while all other baselines fail.
arXiv Detail & Related papers (2020-10-05T19:59:05Z)
- Knowledge-Aware Procedural Text Understanding with Multi-Stage Training [110.93934567725826]
We focus on the task of procedural text understanding, which aims to comprehend such documents and track entities' states and locations during a process.
Two challenges, the difficulty of commonsense reasoning and data insufficiency, still remain unsolved.
We propose a novel KnOwledge-Aware proceduraL text understAnding (KOALA) model, which effectively leverages multiple forms of external knowledge.
arXiv Detail & Related papers (2020-09-28T10:28:40Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
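An entity masking scheme of the kind mentioned in the last entry can be sketched as follows. The function and its parameters are hypothetical (the paper's exact sampling strategy is not given here); the key idea is that all tokens of a selected entity span are masked jointly, rather than masking subwords independently.

```python
import random

def entity_mask(tokens, entity_spans, mask_token="[MASK]", p=0.15, seed=0):
    """Hypothetical entity-level masking: for each entity span (start, end),
    decide once whether to mask it, then replace every token inside the
    span, so the model must recover the whole entity from context."""
    rng = random.Random(seed)
    masked = list(tokens)  # copy; the input token list is left untouched
    for start, end in entity_spans:
        if rng.random() < p:
            for i in range(start, end):
                masked[i] = mask_token
    return masked

# Toy example with p=1.0 so both entity spans are masked deterministically.
tokens = ["Barack", "Obama", "was", "born", "in", "Hawaii"]
spans = [(0, 2), (5, 6)]
masked = entity_mask(tokens, spans, p=1.0)
```

Contrast this with standard subword masking, which could leave "Obama" visible while masking "Barack", letting the model guess the entity trivially.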
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.