KGSynNet: A Novel Entity Synonyms Discovery Framework with Knowledge
Graph
- URL: http://arxiv.org/abs/2103.08893v1
- Date: Tue, 16 Mar 2021 07:32:33 GMT
- Title: KGSynNet: A Novel Entity Synonyms Discovery Framework with Knowledge
Graph
- Authors: Yiying Yang, Xi Yin, Haiqin Yang, Xingjian Fei, Hao Peng, Kaijie Zhou,
Kunfeng Lai, and Jianping Shen
- Abstract summary: We propose a novel entity synonyms discovery framework, named \emph{KGSynNet}.
Specifically, we pre-train subword embeddings for mentions and entities using a large-scale domain-specific corpus.
We employ a specifically designed \emph{fusion gate} to adaptively absorb the entities' knowledge information into their semantic features.
- Score: 23.053995137917994
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Entity synonyms discovery is crucial for entity-leveraging applications.
However, existing studies suffer from several critical issues: (1) the input
mentions may be out-of-vocabulary (OOV) and may come from a different semantic
space of the entities; (2) the connection between mentions and entities may be
hidden and cannot be established by surface matching; and (3) some entities
rarely appear due to the long-tail effect. To tackle these challenges, we
leverage knowledge graphs and propose a novel entity synonyms discovery
framework, named \emph{KGSynNet}. Specifically, we pre-train subword embeddings
for mentions and entities using a large-scale domain-specific corpus while
learning the knowledge embeddings of entities via a joint TransC-TransE model.
More importantly, to obtain a comprehensive representation of entities, we
employ a specifically designed \emph{fusion gate} to adaptively absorb the
entities' knowledge information into their semantic features. We conduct
extensive experiments to demonstrate the effectiveness of our \emph{KGSynNet}
in leveraging the knowledge graph. The experimental results show that the
\emph{KGSynNet} improves the state-of-the-art methods by 14.7\% in terms of
hits@3 in the offline evaluation and outperforms the BERT model by 8.3\% in the
positive feedback rate of an online A/B test on the entity linking module of a
question answering system.
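The abstract only names the \emph{fusion gate}; below is a minimal PyTorch sketch of one plausible gated fusion of an entity's semantic (subword) features with its knowledge-graph embedding. Dimensions, projections, and the exact gating form are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Gated fusion of semantic and knowledge features (illustrative sketch)."""
    def __init__(self, sem_dim: int, kg_dim: int, out_dim: int):
        super().__init__()
        self.sem_proj = nn.Linear(sem_dim, out_dim)   # project subword-based semantic features
        self.kg_proj = nn.Linear(kg_dim, out_dim)     # project TransC/TransE knowledge embedding
        self.gate = nn.Linear(2 * out_dim, out_dim)   # per-dimension gate

    def forward(self, sem_feat: torch.Tensor, kg_emb: torch.Tensor) -> torch.Tensor:
        s = self.sem_proj(sem_feat)
        k = self.kg_proj(kg_emb)
        g = torch.sigmoid(self.gate(torch.cat([s, k], dim=-1)))  # how much knowledge to absorb
        return g * k + (1.0 - g) * s                              # adaptively fused entity representation

# toy usage: batch of 4 entities, 300-d semantic features, 100-d KG embeddings
fused = FusionGate(300, 100, 256)(torch.randn(4, 300), torch.randn(4, 100))
print(fused.shape)  # torch.Size([4, 256])
```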
Related papers
- OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting [49.655711022673046]
OneNet is an innovative framework that utilizes the few-shot learning capabilities of Large Language Models (LLMs) without the need for fine-tuning.
OneNet is structured around three key components prompted by LLMs: (1) an entity reduction processor that simplifies inputs by summarizing and filtering out irrelevant entities, (2) a dual-perspective entity linker that combines contextual cues and prior knowledge for precise entity linking, and (3) an entity consensus judger that employs a unique consistency algorithm to alleviate the hallucination in the entity linking reasoning.
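A rough sketch of how OneNet's three prompted components could be chained. The `call_llm` helper and all prompt strings are placeholders invented for illustration; only the three-stage structure (reduce, link from two perspectives, judge consensus) comes from the abstract.

```python
from typing import Callable, List

def onenet_link(mention: str, context: str, candidates: List[str],
                call_llm: Callable[[str], str]) -> str:
    """Illustrative three-stage pipeline; call_llm wraps any chat/completion API."""
    # (1) entity reduction processor: summarize the input and filter irrelevant candidates
    kept = call_llm(
        f"Summarize the context and keep only candidates relevant to '{mention}'.\n"
        f"Context: {context}\nCandidates: {candidates}")
    # (2) dual-perspective entity linker: contextual cues vs. prior knowledge
    ctx_choice = call_llm(f"Using only the context, which candidate does '{mention}' refer to?\n{kept}")
    prior_choice = call_llm(f"Using prior knowledge of the entities, which candidate fits '{mention}'?\n{kept}")
    # (3) entity consensus judger: resolve disagreement to curb hallucination
    return call_llm(
        f"Two answers were produced: '{ctx_choice}' and '{prior_choice}'. "
        f"Return the single most consistent entity for '{mention}'.")
```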
arXiv Detail & Related papers (2024-10-10T02:45:23Z) - Dual Encoder: Exploiting the Potential of Syntactic and Semantic for
Aspect Sentiment Triplet Extraction [19.375196127313348]
Aspect Sentiment Triplet Extraction (ASTE) is an emerging task in fine-grained sentiment analysis.
We propose a dual-channel encoder with a BERT channel to capture semantic information, and an enhanced LSTM channel for comprehensive syntactic information capture.
We leverage the synergy of these modules to harness the significant potential of syntactic and semantic information in ASTE tasks.
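A minimal PyTorch sketch of a dual-channel encoder in the spirit described above: a BERT channel for semantics plus a BiLSTM channel as a stand-in for the syntax-oriented channel, fused per token by concatenation. The model name, dimensions, and fusion choice are assumptions for illustration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DualChannelEncoder(nn.Module):
    def __init__(self, bert_name: str = "bert-base-uncased", lstm_dim: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)        # semantic channel
        hidden = self.bert.config.hidden_size
        self.lstm = nn.LSTM(hidden, lstm_dim, batch_first=True,
                            bidirectional=True)                 # stand-in for the enhanced LSTM channel

    def forward(self, input_ids, attention_mask):
        sem = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        syn, _ = self.lstm(sem)            # here the LSTM reads BERT states; the paper's channel
                                           # may instead consume explicit syntactic inputs
        return torch.cat([sem, syn], dim=-1)   # per-token fused representation

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["The battery life is great but the screen is dim."], return_tensors="pt")
feats = DualChannelEncoder()(batch["input_ids"], batch["attention_mask"])
print(feats.shape)  # (1, seq_len, 768 + 512)
```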
arXiv Detail & Related papers (2024-02-23T15:07:13Z) - EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph
Completion [54.12709176438264]
Commonsense knowledge graphs (CSKGs) utilize free-form text to represent named entities, short phrases, and events as their nodes.
Current methods leverage semantic similarities to increase the graph density, but the semantic plausibility of the nodes and their relations is under-explored.
We propose to adopt textual entailment to find implicit entailment relations between CSKG nodes, to effectively densify the subgraph connecting nodes within the same conceptual class.
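A sketch of entailment-based densification using an off-the-shelf NLI model: score node-text pairs and add an edge when the entailment probability clears a threshold. The model choice and threshold are illustrative; EntailE's actual scoring and subgraph selection may differ.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"                      # any NLI model works; illustrative choice
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name).eval()
ENT = [i for i, l in nli.config.id2label.items() if l.upper().startswith("ENTAIL")][0]

def entailment_prob(premise: str, hypothesis: str) -> float:
    enc = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**enc).logits.softmax(dim=-1)[0]
    return probs[ENT].item()

def densify(node_texts, threshold=0.9):
    """Add directed edges between CSKG node texts whose entailment score is high."""
    edges = []
    for p in node_texts:
        for h in node_texts:
            if p != h and entailment_prob(p, h) >= threshold:
                edges.append((p, "entails", h))
    return edges

print(densify(["a person is jogging in the park", "someone is exercising outdoors"]))
```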
arXiv Detail & Related papers (2024-02-15T02:27:23Z) - KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot
Node Classification [75.95647590619929]
Zero-Shot Node Classification (ZNC) is an emerging and crucial task in graph data analysis.
We propose a Knowledge-Aware Multi-Faceted framework (KMF) that enhances the richness of label semantics.
A novel geometric constraint is developed to alleviate the problem of prototype drift caused by node information aggregation.
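The geometric constraint against prototype drift is only named above; one plausible reading is a regularizer that keeps each class prototype, aggregated from node embeddings, close to its knowledge-aware label embedding. The sketch below encodes that reading and is not KMF's exact loss.

```python
import torch

def prototype_drift_penalty(node_emb, node_labels, label_emb):
    """node_emb: (N, d) aggregated node embeddings; node_labels: (N,) class ids;
    label_emb: (C, d) knowledge-aware label-semantic anchors (assumed given)."""
    protos = torch.stack([node_emb[node_labels == c].mean(dim=0)
                          for c in range(label_emb.size(0))])     # per-class prototypes
    return ((protos - label_emb) ** 2).sum(dim=-1).mean()         # keep prototypes near their anchors

# toy usage: 8 nodes, 3 seen classes, 16-d embeddings
labels = torch.tensor([0, 0, 1, 1, 1, 2, 2, 0])
penalty = prototype_drift_penalty(torch.randn(8, 16), labels, torch.randn(3, 16))
```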
arXiv Detail & Related papers (2023-08-15T02:38:08Z) - GE-Blender: Graph-Based Knowledge Enhancement for Blender [3.8841367260456487]
Unseen entities can have a large impact on the dialogue generation task.
We construct a graph by extracting entity nodes from the dialogue context, enhancing the representation of the context.
We add a named entity tag prediction task to address the problem that unseen entities do not exist in the graph.
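A toy sketch of the two ideas above: build a small graph from entities found in the dialogue context, and attach a token-level named-entity tag prediction head as an auxiliary task so the model still reacts to entities absent from the graph. The tag set, graph construction rule, and multi-task weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EntityTagHead(nn.Module):
    """Auxiliary named-entity tag prediction over encoder states (sketch)."""
    def __init__(self, hidden: int, num_tags: int = 3):   # e.g. O / B-ENT / I-ENT
        super().__init__()
        self.classifier = nn.Linear(hidden, num_tags)

    def forward(self, encoder_states, tag_labels):
        logits = self.classifier(encoder_states)            # (B, T, num_tags)
        return nn.functional.cross_entropy(logits.flatten(0, 1), tag_labels.flatten())

def build_entity_graph(context_entities, knowledge_triples):
    """Keep only triples whose head entity appears in the dialogue context."""
    nodes, edges = set(context_entities), []
    for h, r, t in knowledge_triples:
        if h in nodes:
            nodes.add(t)
            edges.append((h, r, t))
    return nodes, edges

# total loss would be: generation loss + lambda * auxiliary tag loss
aux_loss = EntityTagHead(hidden=512)(torch.randn(2, 10, 512), torch.randint(0, 3, (2, 10)))
```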
arXiv Detail & Related papers (2023-01-30T13:00:20Z) - Knowledge Graph Augmented Network Towards Multiview Representation
Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge together with explicit syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
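A minimal sketch of fusing the three views named above (context-, syntax-, and knowledge-based features) with a learned attention over views. KGAN's actual branch encoders and fusion are more elaborate; this only illustrates the multi-view idea.

```python
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    """Attention-weighted pooling over context / syntax / knowledge views (sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, context_feat, syntax_feat, knowledge_feat):
        views = torch.stack([context_feat, syntax_feat, knowledge_feat], dim=1)  # (B, 3, d)
        weights = self.score(views).softmax(dim=1)                               # importance of each view
        return (weights * views).sum(dim=1)                                      # (B, d) fused sentiment feature

fused = MultiViewFusion(128)(torch.randn(2, 128), torch.randn(2, 128), torch.randn(2, 128))
print(fused.shape)  # torch.Size([2, 128])
```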
arXiv Detail & Related papers (2022-01-13T08:25:53Z) - Knowledge-Rich Self-Supervised Entity Linking [58.838404666183656]
Knowledge-RIch Self-Supervision (KRISSBERT) is a universal entity linker for four million UMLS entities, trained without using any labeled information.
Our approach subsumes zero-shot and few-shot methods, and can easily incorporate entity descriptions and gold mention labels if available.
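A compressed sketch of the kind of self-supervision described above: mention contexts that share the same entity name are treated as positives for a contrastive loss, with in-batch negatives, so no labeled mentions are needed. The encoder choice and loss details are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_entity_loss(anchor_emb, positive_emb, temperature=0.07):
    """anchor_emb / positive_emb: (B, d) encodings of two mention contexts of the
    same entity name; every other row in the batch serves as a negative (InfoNCE)."""
    a = F.normalize(anchor_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    logits = a @ p.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(a.size(0))         # the matching row is the positive
    return F.cross_entropy(logits, targets)

loss = contrastive_entity_loss(torch.randn(8, 768), torch.randn(8, 768))
```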
arXiv Detail & Related papers (2021-12-15T05:05:12Z) - EchoEA: Echo Information between Entities and Relations for Entity
Alignment [1.1470070927586016]
We propose a novel framework, Echo Entity Alignment (EchoEA), which leverages a self-attention mechanism to spread entity information to relations and echo it back to entities.
Experimental results on three real-world cross-lingual datasets are stable at around 96% hits@1 on average.
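A toy PyTorch sketch of the "echo" idea: entity features are spread to relations with one attention step and echoed back to entities with a second. The attention modules and dimensions are illustrative, not EchoEA's exact layers.

```python
import torch
import torch.nn as nn

class EchoLayer(nn.Module):
    """Spread entity info to relations, then echo it back to entities (sketch)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.to_rel = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_ent = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, entity_feat, relation_feat):
        rel, _ = self.to_rel(relation_feat, entity_feat, entity_feat)  # relations attend over entities (spread)
        ent, _ = self.to_ent(entity_feat, rel, rel)                    # entities attend over updated relations (echo)
        return ent, rel

ent, rel = EchoLayer(64)(torch.randn(1, 100, 64), torch.randn(1, 20, 64))
print(ent.shape, rel.shape)  # (1, 100, 64) (1, 20, 64)
```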
arXiv Detail & Related papers (2021-07-07T07:34:21Z) - Improving Entity Linking through Semantic Reinforced Entity Embeddings [16.868791358905916]
We propose a method to inject fine-grained semantic information into entity embeddings to reduce the distinctiveness and facilitate the learning of contextual commonality.
Based on our entity embeddings, we achieved new state-of-the-art performance on entity linking.
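A small sketch of one way to inject fine-grained semantic information as described above: average the embeddings of an entity's semantic-type words and interpolate them into the pretrained entity embedding. The interpolation weight and the source of the type words are assumptions.

```python
import numpy as np

def semantic_reinforced_embedding(entity_vec, type_word_vecs, alpha=0.3):
    """entity_vec: (d,) pretrained entity embedding;
    type_word_vecs: (k, d) word vectors of fine-grained semantic words
    (e.g. 'physicist', 'laureate'); alpha controls how much semantic signal is injected."""
    semantic_vec = np.mean(type_word_vecs, axis=0)
    return (1 - alpha) * entity_vec + alpha * semantic_vec

emb = semantic_reinforced_embedding(np.random.randn(300), np.random.randn(4, 300))
print(emb.shape)  # (300,)
```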
arXiv Detail & Related papers (2021-06-16T00:27:56Z) - KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization
for Relation Extraction [111.74812895391672]
We propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).
We inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words.
arXiv Detail & Related papers (2021-04-15T17:57:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.