Commonsense Knowledge Graph Completion Via Contrastive Pretraining and Node Clustering
- URL: http://arxiv.org/abs/2305.17019v1
- Date: Fri, 26 May 2023 15:24:32 GMT
- Title: Commonsense Knowledge Graph Completion Via Contrastive Pretraining and Node Clustering
- Authors: Siwei Wu, Xiangqing Shen, Rui Xia
- Abstract summary: We propose a new CSKG completion framework based on Contrastive Pretraining and Node Clustering (CPNC).
We evaluate our CPNC approach on two CSKG completion benchmarks (CN-100K and ATOMIC), where CPNC outperforms the state-of-the-art methods.
- Score: 14.854971279160933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The nodes in a commonsense knowledge graph (CSKG) are normally represented
by free-form short text (e.g., a word or phrase), and different nodes may represent
the same concept. This leads to the problems of edge sparsity and node
redundancy, which challenge CSKG representation and completion. On the one
hand, edge sparsity limits the performance of graph representation learning; on
the other hand, node redundancy causes different nodes that correspond to the same
concept to have inconsistent relations with other nodes. To address these two
problems, we propose a new CSKG completion framework based on Contrastive
Pretraining and Node Clustering (CPNC). Contrastive Pretraining constructs
positive and negative head-tail node pairs on the CSKG and uses contrastive
learning to obtain better semantic node representations. Node Clustering
aggregates nodes with the same concept into a latent concept, assisting the
task of CSKG completion. We evaluate our CPNC approach on two CSKG completion
benchmarks (CN-100K and ATOMIC), where CPNC outperforms the state-of-the-art
methods. Extensive experiments demonstrate that both Contrastive Pretraining
and Node Clustering significantly improve the performance of CSKG
completion. The source code of CPNC is publicly available at
https://github.com/NUSTM/CPNC.
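As a rough illustration of the framework's first component, here is a minimal sketch of contrastive pretraining over head-tail node pairs, using an InfoNCE-style loss with in-batch negatives. The encoder, batching, and temperature are illustrative assumptions, not necessarily the authors' exact setup.

```python
# Minimal sketch: InfoNCE-style contrastive loss over head-tail node pairs.
# Assumes head_emb and tail_emb are (B, D) embeddings of the head/tail texts
# of the same B triples; every other in-batch tail serves as a negative.
import torch
import torch.nn.functional as F

def info_nce_loss(head_emb: torch.Tensor, tail_emb: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    head_emb = F.normalize(head_emb, dim=-1)
    tail_emb = F.normalize(tail_emb, dim=-1)
    logits = head_emb @ tail_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(head_emb.size(0), device=head_emb.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)
```

The second component, node clustering, can likewise be sketched as grouping the pretrained node embeddings into latent concepts. K-means and the concept count below are stand-ins for whatever clustering procedure the paper actually uses.

```python
# Minimal sketch: aggregate semantically equivalent nodes into latent
# concepts by clustering their pretrained embeddings (k-means is an assumption).
import numpy as np
from sklearn.cluster import KMeans

def latent_concepts(node_emb: np.ndarray, n_concepts: int = 1000):
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(node_emb)
    return km.labels_, km.cluster_centers_  # per-node concept id, concept centroids
```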
Related papers
- EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph Completion [54.12709176438264] (2024-02-15)
Commonsense knowledge graphs (CSKGs) utilize free-form text to represent named entities, short phrases, and events as their nodes.
Current methods leverage semantic similarities to increase graph density, but the semantic plausibility of the nodes and their relations remains under-explored.
We propose adopting textual entailment to find implicit entailment relations between CSKG nodes, effectively densifying the subgraph connecting nodes within the same conceptual class, as sketched below.
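The densification idea can be illustrated with an off-the-shelf NLI model: test whether one node's text entails another's and add an edge when the entailment score clears a threshold. The model name and threshold below are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative only: propose entailment edges between CSKG nodes with a
# generic NLI classifier. Model choice and threshold are assumptions.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def entails(premise: str, hypothesis: str, threshold: float = 0.9) -> bool:
    scores = {d["label"]: d["score"]
              for d in nli({"text": premise, "text_pair": hypothesis}, top_k=None)}
    return scores.get("ENTAILMENT", 0.0) >= threshold
```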
- A Topological Perspective on Demystifying GNN-Based Link Prediction Performance [72.06314265776683] (2023-10-06)
Topological Concentration (TC) is based on the intersection of each node's local subgraph with those of its neighbors.
We show that TC has a higher correlation with link prediction (LP) performance than other node-level topological metrics such as degree and subgraph density.
We propose Approximated Topological Concentration (ATC) and theoretically/empirically justify its efficacy in approximating TC while reducing the complexity; a simplified sketch follows below.
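To make the intuition concrete, a simplified one-hop proxy for TC is the average overlap between a node's neighbor set and those of its neighbors. The paper's actual definition aggregates higher-order subgraph intersections, so this is only a toy approximation.

```python
# Toy one-hop proxy for Topological Concentration: mean Jaccard overlap
# between a node's neighborhood and its neighbors' neighborhoods.
import networkx as nx

def one_hop_concentration(g: nx.Graph, v) -> float:
    nv = set(g.neighbors(v))
    if not nv:
        return 0.0
    overlaps = []
    for u in nv:
        nu = set(g.neighbors(u))
        union = nv | nu
        overlaps.append(len(nv & nu) / len(union))
    return sum(overlaps) / len(overlaps)
```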
- Provable Training for Graph Contrastive Learning [58.8128675529977] (2023-09-25)
Graph Contrastive Learning (GCL) has emerged as a popular training approach for learning node embeddings from augmented graphs without labels.
We show that GCL training is in fact imbalanced across nodes.
We propose the metric "node compactness", which lower-bounds how well a node follows the GCL principle.
- Clarify Confused Nodes via Separated Learning [4.282496716373314] (2023-06-04)
Graph neural networks (GNNs) have achieved remarkable advances in graph-oriented tasks.
Real-world graphs invariably contain a certain proportion of heterophilous nodes, challenging the homophily assumption of traditional GNNs.
We propose a new metric, termed Neighborhood Confusion (NC), to facilitate a more reliable separation of nodes; a toy analogue is sketched below.
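As a toy analogue of what such a metric captures, one can measure label disagreement within a node's one-hop neighborhood, e.g., via entropy. The paper defines NC differently, so treat this purely as an illustration of measuring confusion in a neighborhood.

```python
# Toy illustration of neighborhood "confusion": entropy of the label
# distribution in a node's closed one-hop neighborhood. Not the paper's NC.
import math
from collections import Counter
import networkx as nx

def neighborhood_label_entropy(g: nx.Graph, labels: dict, v) -> float:
    neigh = list(g.neighbors(v)) + [v]
    counts = Counter(labels[u] for u in neigh)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```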
- Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods [69.13371028670153] (2021-11-08)
We introduce feature-contribution ratio (FCR) to study the viability of using inductive GNNs to solve the Strict Cold Start (SCS) problem.
We experimentally show that FCR disentangles the contributions of various components of graph datasets and demonstrate the superior performance of Cold Brew.
- Self-supervised Graph Learning for Recommendation [69.98671289138694] (2020-10-21)
We explore self-supervised learning on the user-item graph for recommendation.
An auxiliary self-supervised task reinforces node representation learning via self-discrimination.
Empirical studies on three benchmark datasets demonstrate the effectiveness of the proposed Self-supervised Graph Learning (SGL) method.
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507] (2020-02-17)
We study the relationship between label propagation (LPA) and graph convolutional networks (GCN) in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models; a minimal LPA sketch follows below.
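For context, here is a minimal sketch of the classical label propagation algorithm (LPA) that the paper unifies with GCNs: known labels are repeatedly averaged over neighborhoods and then clamped back to their observed values.

```python
# Minimal sketch of label propagation (LPA): diffuse known labels to
# neighbors by neighborhood averaging, clamping observed labels each step.
import numpy as np

def propagate_labels(adj: np.ndarray, labels: np.ndarray,
                     known: np.ndarray, iters: int = 10) -> np.ndarray:
    """adj: (N, N) adjacency; labels: (N, C) one-hot rows for known nodes;
    known: (N,) boolean mask of nodes with observed labels."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    y = labels.astype(float).copy()
    for _ in range(iters):
        y = adj @ y / deg          # average neighbor label distributions
        y[known] = labels[known]   # clamp observed labels
    return y.argmax(axis=1)        # predicted class per node
```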
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.