Modeling Fine-grained Information via Knowledge-aware Hierarchical Graph
for Zero-shot Entity Retrieval
- URL: http://arxiv.org/abs/2211.10991v1
- Date: Sun, 20 Nov 2022 14:37:53 GMT
- Title: Modeling Fine-grained Information via Knowledge-aware Hierarchical Graph
for Zero-shot Entity Retrieval
- Authors: Taiqiang Wu, Xingyu Bai, Weigang Guo, Weijie Liu, Siheng Li, Yujiu
Yang
- Abstract summary: We propose GER to capture more fine-grained information as complementary to sentence embeddings.
We learn the fine-grained information about mention/entity by aggregating information from these knowledge units.
Experimental results on popular benchmarks demonstrate that our proposed GER framework performs better than previous state-of-the-art models.
- Score: 11.533614615010643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Zero-shot entity retrieval, aiming to link mentions to candidate entities
under the zero-shot setting, is vital for many tasks in Natural Language
Processing. Most existing methods represent mentions/entities via the sentence
embeddings of corresponding context from the Pre-trained Language Model.
However, we argue that such coarse-grained sentence embeddings can not fully
model the mentions/entities, especially when the attention scores towards
mentions/entities are relatively low. In this work, we propose GER, a
\textbf{G}raph enhanced \textbf{E}ntity \textbf{R}etrieval framework, to
capture more fine-grained information as complementary to sentence embeddings.
We extract the knowledge units from the corresponding context and then
construct a mention/entity centralized graph. Hence, we can learn the
fine-grained information about mention/entity by aggregating information from
these knowledge units. To avoid the graph information bottleneck for the
central mention/entity node, we construct a hierarchical graph and design a
novel Hierarchical Graph Attention Network (HGAN). Experimental results on
popular benchmarks demonstrate that our proposed GER framework performs better
than previous state-of-the-art models. The code is available at
https://github.com/wutaiqiang/GER-WSDM2023.
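To make the pipeline sketched in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch of the two ideas it names: aggregating knowledge-unit embeddings around a central mention/entity node, and doing that aggregation hierarchically so the central node does not become an information bottleneck. The module names, tensor shapes, the grouping of units into triples, and the scaled dot-product attention are illustrative assumptions, not the authors' implementation; see the repository above for the actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeUnitAttention(nn.Module):
    """One attention step: a central node attends over its neighbour nodes
    and aggregates their features (scaled dot-product attention)."""
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, center: torch.Tensor, neighbours: torch.Tensor) -> torch.Tensor:
        # center: (batch, dim); neighbours: (batch, n_nodes, dim)
        q = self.query(center).unsqueeze(1)                  # (batch, 1, dim)
        k, v = self.key(neighbours), self.value(neighbours)  # (batch, n_nodes, dim)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5         # (batch, n_nodes)
        weights = F.softmax(scores, dim=-1).unsqueeze(-1)    # (batch, n_nodes, 1)
        return (weights * v).sum(dim=1)                      # (batch, dim)

class HierarchicalGraphEncoder(nn.Module):
    """Two-level aggregation: knowledge units are first pooled into
    intermediate group nodes (e.g. one per extracted triple), and the central
    mention/entity node then attends over those group nodes, so the centre
    never has to absorb every raw unit in a single step."""
    def __init__(self, dim: int):
        super().__init__()
        self.low = KnowledgeUnitAttention(dim)
        self.high = KnowledgeUnitAttention(dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, sent_emb: torch.Tensor, unit_embs: torch.Tensor) -> torch.Tensor:
        # sent_emb:  (batch, dim) coarse sentence embedding from the PLM
        # unit_embs: (batch, groups, units, dim) embeddings of knowledge units
        b, g, u, d = unit_embs.shape
        flat = unit_embs.reshape(b * g, u, d)
        group_seed = flat.mean(dim=1)                         # seed query per group
        group_nodes = self.low(group_seed, flat).reshape(b, g, d)
        fine = self.high(sent_emb, group_nodes)               # fine-grained view
        return self.fuse(torch.cat([sent_emb, fine], dim=-1))

enc = HierarchicalGraphEncoder(dim=768)
sent = torch.randn(2, 768)          # e.g. [CLS] embeddings for two mentions
units = torch.randn(2, 4, 3, 768)   # 4 extracted triples x 3 unit nodes each
print(enc(sent, units).shape)       # torch.Size([2, 768])

In a retrieval setup like the one described, the fused mention and entity representations would typically be compared with a dot product to rank candidate entities; that scoring detail is again a generic assumption rather than something taken from the paper.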
Related papers
- TIGER: Temporally Improved Graph Entity Linker [6.111040278075022]
We introduce TIGER: a Temporally Improved Graph Entity Linker.
We enhance the learned representation, making entities
arXiv Detail & Related papers (2024-10-11T09:44:33Z) - Relation Rectification in Diffusion Model [64.84686527988809]
We introduce a novel task termed Relation Rectification, aiming to refine the model to accurately represent a given relationship it initially fails to generate.
We propose an innovative solution utilizing a Heterogeneous Graph Convolutional Network (HGCN).
The lightweight HGCN adjusts the text embeddings generated by the text encoder, ensuring the accurate reflection of the textual relation in the embedding space.
arXiv Detail & Related papers (2024-03-29T15:54:36Z) - Coreference Graph Guidance for Mind-Map Generation [5.289044688419791]
Recently, a state-of-the-art method encodes the sentences of a document sequentially and converts them into a relation graph via a sequence-to-graph method.
We propose a coreference-guided mind-map generation network (CMGN) to incorporate external structure knowledge.
arXiv Detail & Related papers (2023-12-19T09:39:27Z) - Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general purpose knowledge graphs (KGs) with millions of entities and thousands of relation types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z) - Document-level Relation Extraction with Cross-sentence Reasoning Graph [14.106582119686635]
Relation extraction (RE) has recently moved from the sentence-level to document-level.
We propose a novel document-level RE model with a GRaph information Aggregation and Cross-sentence Reasoning network (GRACR).
Experimental results show GRACR achieves excellent performance on two public datasets of document-level RE.
arXiv Detail & Related papers (2023-03-07T14:14:12Z) - Scientific Paper Extractive Summarization Enhanced by Citation Graphs [50.19266650000948]
We focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings.
Preliminary results demonstrate that the citation graph is helpful even in a simple unsupervised framework.
Motivated by this, we propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available.
arXiv Detail & Related papers (2022-12-08T11:53:12Z) - Entity Type Prediction Leveraging Graph Walks and Entity Descriptions [4.147346416230273]
GRAND is a novel approach for entity typing leveraging different graph walk strategies in RDF2vec together with textual entity descriptions.
The proposed approach outperforms the baseline approaches on the benchmark datasets DBpedia and FIGER for entity typing in KGs for both fine-grained and coarse-grained classes.
arXiv Detail & Related papers (2022-07-28T13:56:55Z) - Coarse-to-Fine Entity Representations for Document-level Relation
Extraction [28.39444850200523]
Document-level Relation Extraction (RE) requires extracting relations expressed within and across sentences.
Recent works show that graph-based methods, usually constructing a document-level graph that captures document-aware interactions, can obtain useful entity representations.
We propose the Coarse-to-Fine Entity Representation model (CFER) that adopts a coarse-to-fine strategy.
arXiv Detail & Related papers (2020-12-04T10:18:59Z) - Dual ResGCN for Balanced Scene Graph Generation [106.7828712878278]
We propose a novel model, dubbed dual ResGCN, which consists of an object residual graph convolutional network and a relation residual graph convolutional network.
The two networks are complementary to each other. The former captures object-level context information, i.e., the connections among objects.
The latter is carefully designed to explicitly capture relation-level context information, i.e., the connections among relations.
arXiv Detail & Related papers (2020-11-09T07:44:17Z) - Autoregressive Entity Retrieval [55.38027440347138]
Entities are at the center of how we represent and aggregate knowledge.
The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering.
We propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion.
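The entry above only names the mechanism, so the following is a toy, generic illustration (not GENRE's actual code) of left-to-right, token-by-token generation of entity names under a prefix constraint: a trie built over candidate names restricts each step to tokens that still lead to a valid name, and the scoring function is a hypothetical stand-in for a real autoregressive model conditioned on the mention context.

from typing import Dict, List, Sequence

def build_trie(names: Sequence[Sequence[str]]) -> Dict:
    """Build a token-level prefix trie over candidate entity names."""
    trie: Dict = {}
    for tokens in names:
        node = trie
        for tok in tokens:
            node = node.setdefault(tok, {})
        node["<eos>"] = {}  # terminal marker for a complete name
    return trie

def toy_lm_score(prefix: List[str], token: str) -> float:
    # Hypothetical stand-in for an autoregressive LM's next-token score;
    # a real system would condition on the mention and its context.
    return -len(token) + 0.1 * len(prefix)

def constrained_decode(trie: Dict) -> List[str]:
    # Greedy left-to-right decoding restricted to tokens that extend some
    # candidate entity name, so the output is always a valid name.
    prefix: List[str] = []
    node = trie
    while True:
        best = max(node, key=lambda t: toy_lm_score(prefix, t))
        if best == "<eos>":
            return prefix
        prefix.append(best)
        node = node[best]

candidates = [["New", "York", "City"], ["New", "Zealand"], ["York"]]
print(constrained_decode(build_trie(candidates)))   # ['New', 'York', 'City']

Because every decoding step is masked to the trie, the generated string is guaranteed to be one of the candidate entity names, which is the property this retrieval-by-generation approach relies on.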
arXiv Detail & Related papers (2020-10-02T10:13:31Z) - Iterative Context-Aware Graph Inference for Visual Dialog [126.016187323249]
We propose a novel Context-Aware Graph (CAG) neural network.
Each node in the graph corresponds to a joint semantic feature, including both object-based (visual) and history-related (textual) context representations.
arXiv Detail & Related papers (2020-04-05T13:09:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.