Contrastive Knowledge Graph Error Detection
- URL: http://arxiv.org/abs/2211.10030v1
- Date: Fri, 18 Nov 2022 05:01:19 GMT
- Title: Contrastive Knowledge Graph Error Detection
- Authors: Qinggang Zhang, Junnan Dong, Keyu Duan, Xiao Huang, Yezi Liu, Linchuan Xu
- Abstract summary: We propose a novel framework - ContrAstive knowledge Graph Error Detection (CAGED)
CAGED introduces contrastive learning into KG learning and provides a novel way of modeling KG.
It outperforms state-of-the-art methods in KG error detection.
- Score: 11.637359888052014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph (KG) errors introduce non-negligible noise, severely
affecting KG-related downstream tasks. Detecting errors in KGs is challenging
since the patterns of errors are unknown and diverse, while ground-truth labels
are rare or even unavailable. A traditional solution is to construct logical
rules to verify triples, but it is not generalizable since different KGs have
distinct rules with domain knowledge involved. Recent studies focus on
designing tailored detectors or ranking triples based on KG embedding loss.
However, they all rely on negative samples for training, which are generated by
randomly replacing the head or tail entity of existing triples. Such a negative
sampling strategy is not enough for prototyping practical KG errors, e.g.,
(Bruce_Lee, place_of_birth, China), in which the three elements are often
relevant, although mismatched. We desire a more effective unsupervised learning
mechanism tailored for KG error detection. To this end, we propose a novel
framework - ContrAstive knowledge Graph Error Detection (CAGED). It introduces
contrastive learning into KG learning and provides a novel way of modeling KG.
Instead of following the traditional setting, i.e., considering entities as
nodes and relations as semantic edges, CAGED augments a KG into different
hyper-views, by regarding each relational triple as a node. After joint
training with KG embedding and contrastive learning loss, CAGED assesses the
trustworthiness of each triple based on two learning signals, i.e., the
consistency of triple representations across multi-views and the
self-consistency within the triple. Extensive experiments on three real-world
KGs show that CAGED outperforms state-of-the-art methods in KG error detection.
Our code and datasets are available at https://github.com/Qing145/CAGED.git.
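The abstract's two learning signals map naturally onto code. Below is a minimal PyTorch sketch, not the authors' released implementation (see the repository above): the triple-as-node encoder, the joint contrastive-plus-translational objective, and the two-part trustworthiness score are all simplified, and the two hyper-views are approximated by two stochastic (dropout) forward passes rather than explicit graph augmentations. All names here (`TripleEncoder`, `trustworthiness`, etc.) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class TripleEncoder(torch.nn.Module):
    """Embeds each triple (h, r, t) as a single vector, mirroring the
    'triple as a node' hyper-view described in the abstract."""
    def __init__(self, n_entities, n_relations, dim=128):
        super().__init__()
        self.ent = torch.nn.Embedding(n_entities, dim)
        self.rel = torch.nn.Embedding(n_relations, dim)
        self.proj = torch.nn.Sequential(
            torch.nn.Dropout(0.2),  # dropout noise stands in for view augmentation
            torch.nn.Linear(3 * dim, dim),
        )

    def forward(self, h, r, t):
        z = torch.cat([self.ent(h), self.rel(r), self.ent(t)], dim=-1)
        return F.normalize(self.proj(z), dim=-1)

def contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE across the two views: the same triple's two representations
    are a positive pair; other triples in the batch serve as negatives."""
    logits = (z1 @ z2.T) / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

def embedding_loss(enc, h, r, t, margin=1.0):
    """TransE-style translational margin loss with randomly corrupted tails."""
    pos = (enc.ent(h) + enc.rel(r) - enc.ent(t)).norm(dim=-1)
    t_neg = torch.randint(0, enc.ent.num_embeddings, t.shape)
    neg = (enc.ent(h) + enc.rel(r) - enc.ent(t_neg)).norm(dim=-1)
    return F.relu(margin + pos - neg).mean()

def trustworthiness(enc, z1, z2, h, r, t):
    """Higher = more trustworthy: agreement of the triple's representations
    across the two views, plus its internal translational self-consistency."""
    cross_view = (z1 * z2).sum(dim=-1)
    self_cons = -(enc.ent(h) + enc.rel(r) - enc.ent(t)).norm(dim=-1)
    return cross_view + self_cons

# joint training step on a toy batch
enc = TripleEncoder(n_entities=1000, n_relations=50)
h = torch.randint(0, 1000, (32,))
r = torch.randint(0, 50, (32,))
t = torch.randint(0, 1000, (32,))
z1, z2 = enc(h, r, t), enc(h, r, t)  # two stochastic passes = two views
loss = contrastive_loss(z1, z2) + embedding_loss(enc, h, r, t)
loss.backward()
scores = trustworthiness(enc, z1, z2, h, r, t)  # rank ascending to flag errors
```

Under this sketch, the lowest-scoring triples would be flagged as candidate errors, the hope being that a mismatched-but-relevant triple like (Bruce_Lee, place_of_birth, China) ends up with a low score even though random head/tail corruption would rarely generate it as a training negative.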
Related papers
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats LLM as both Agent and KG in IKGQA.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- On the Sweet Spot of Contrastive Views for Knowledge-enhanced Recommendation [49.18304766331156]
We propose a new contrastive learning framework for KG-enhanced recommendation.
We construct two separate contrastive views for KG and IG, and maximize their mutual information.
Extensive experimental results on three real-world datasets demonstrate the effectiveness and efficiency of our method.
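"Maximizing mutual information" between two views is typically operationalized with an InfoNCE lower bound; the sketch below is a generic stand-in, not the paper's exact objective. `kg_emb` and `ig_emb` are assumed to be batch-aligned item embeddings from the knowledge-graph view and the (presumably user-item) interaction-graph view.

```python
import torch
import torch.nn.functional as F

def info_nce(kg_emb, ig_emb, tau=0.2):
    """Symmetric InfoNCE: a lower bound on the mutual information between
    the KG view and the interaction-graph (IG) view of the same items.
    Row i of each tensor is assumed to describe the same item; matching
    rows are positives, every other pair in the batch is a negative."""
    z1 = F.normalize(kg_emb, dim=-1)
    z2 = F.normalize(ig_emb, dim=-1)
    logits = (z1 @ z2.T) / tau
    labels = torch.arange(z1.size(0))
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))
```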
arXiv Detail & Related papers (2023-09-23T14:05:55Z)
- Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph Completion [69.55700751102376]
Few-shot knowledge graph completion (FKGC) aims to predict missing facts for unseen relations with few-shot associated facts.
Existing FKGC methods are based on metric learning or meta-learning, which often suffer from the out-of-distribution and overfitting problems.
In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC).
arXiv Detail & Related papers (2023-04-17T11:42:28Z)
- Mitigating Relational Bias on Knowledge Graphs [51.346018842327865]
We propose Fair-KGNN, a framework that simultaneously alleviates multi-hop bias and preserves the proximity information of entity-to-relation in knowledge graphs.
We develop two instances of Fair-KGNN that incorporate two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias.
arXiv Detail & Related papers (2022-11-26T05:55:34Z)
- A Review of Knowledge Graph Completion [0.0]
Information extraction methods have proved effective at extracting triples from structured or unstructured data.
Most of the current knowledge graphs are incomplete.
In order to use KGs in downstream tasks, it is desirable to predict missing links in KGs.
arXiv Detail & Related papers (2022-08-24T16:42:59Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, named HoGRN.
HoGRN not only improves generalization to mitigate the information-insufficiency issue, but also provides interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Link-Intensive Alignment for Incomplete Knowledge Graphs [28.213397255810936]
In this work, we address the problem of aligning incomplete KGs with representation learning.
Our framework exploits two feature channels: transitivity-based and proximity-based.
The two feature channels are jointly learned to exchange important features between the input KGs.
Also, we develop a missing links detector that discovers and recovers the missing links during the training process.
arXiv Detail & Related papers (2021-12-17T00:41:28Z)
- FedE: Embedding Knowledge Graphs in Federated Setting [21.022513922373207]
Multi-source KGs are common in real-world Knowledge Graph applications.
Because of data privacy and sensitivity, a set of relevant knowledge graphs cannot complement each other's completion (KGC) simply by pooling data from the different graphs.
We propose a Federated Knowledge Graph Embedding framework FedE, focusing on learning knowledge graph embeddings by aggregating locally-computed updates.
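A hedged sketch of the aggregation step described above: each client trains entity embeddings on its private KG, and a server averages them over a shared entity index, counting only the clients that actually contain each entity. The mask-based bookkeeping is an illustrative assumption, not FedE's exact protocol.

```python
import torch

def aggregate_entity_embeddings(client_embs, client_masks):
    """FedAvg-style aggregation sketch: average each entity's embedding
    over only the clients whose local KG contains that entity.
    client_embs: list of (n_entities, dim) tensors in a shared index space.
    client_masks: list of (n_entities,) 0/1 float tensors marking presence."""
    num = torch.zeros_like(client_embs[0])
    den = torch.zeros(client_embs[0].size(0), 1)
    for emb, mask in zip(client_embs, client_masks):
        num += mask.unsqueeze(1) * emb
        den += mask.unsqueeze(1)
    return num / den.clamp(min=1)  # entities seen nowhere stay zero

# two clients sharing a 4-entity index space
e1, e2 = torch.randn(4, 8), torch.randn(4, 8)
m1 = torch.tensor([1., 1., 0., 1.])
m2 = torch.tensor([0., 1., 1., 1.])
global_emb = aggregate_entity_embeddings([e1, e2], [m1, m2])
```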
arXiv Detail & Related papers (2020-10-24T11:52:05Z)
- Efficient Knowledge Graph Validation via Cross-Graph Representation Learning [40.570585195713704]
Noisy facts, often introduced by automatic extraction, are unavoidable in Knowledge Graphs.
We propose a cross-graph representation learning framework, i.e., CrossVal, which can leverage an external KG to validate the facts in the target KG efficiently.
arXiv Detail & Related papers (2020-08-16T20:51:17Z)
- What is Normal, What is Strange, and What is Missing in a Knowledge Graph: Unified Characterization via Inductive Summarization [34.3446695203147]
We introduce a unified solution to KG characterization by formulating the problem as unsupervised KG summarization.
KGist learns a summary of inductive rules that best compress the KG according to the Minimum Description Length principle.
We show that KGist outperforms task-specific, supervised and unsupervised baselines in error detection and incompleteness identification.
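The Minimum Description Length principle behind KGist can be illustrated with a toy two-part code: a rule is worth keeping only if the bits it saves on the edges it explains outweigh the bits needed to state the rule and its exceptions. The encoding below is deliberately naive and purely illustrative, not the paper's actual encoding scheme.

```python
import math

def description_length(n_edges_total, rule_cost_bits, matched, exceptions):
    """Toy two-part MDL score: L(model) + L(data | model).
    A rule pays its own encoding cost plus one exception record per edge
    it predicts wrongly, and earns back the cost of each matched edge
    that no longer needs to be listed explicitly."""
    bits_per_edge = math.log2(max(n_edges_total, 2))  # naive edge encoding
    l_model = rule_cost_bits
    l_data = (n_edges_total - matched + exceptions) * bits_per_edge
    return l_model + l_data

# keep a rule only if it lowers the total description length
baseline = description_length(10_000, 0, 0, 0)
with_rule = description_length(10_000, rule_cost_bits=64, matched=300, exceptions=12)
print(with_rule < baseline)  # True -> the rule compresses the KG
```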
arXiv Detail & Related papers (2020-03-23T17:38:31Z)
- On the Role of Conceptualization in Commonsense Knowledge Graph Construction [59.39512925793171]
Commonsense knowledge graphs (CKGs) like Atomic and ASER are substantially different from conventional KGs.
We introduce conceptualization into CKG construction methods, viewing entities mentioned in text as instances of specific concepts, or vice versa.
Our methods can effectively identify plausible triples and expand the KG by triples of both new nodes and edges of high diversity and novelty.
arXiv Detail & Related papers (2020-03-06T14:35:20Z)