IRT2: Inductive Linking and Ranking in Knowledge Graphs of Varying Scale
- URL: http://arxiv.org/abs/2301.00716v1
- Date: Mon, 2 Jan 2023 15:19:21 GMT
- Title: IRT2: Inductive Linking and Ranking in Knowledge Graphs of Varying Scale
- Authors: Felix Hamann, Adrian Ulges, Maurice Falk
- Abstract summary: We address the challenge of building domain-specific knowledge models for industrial use cases.
Our focus is on inductive link prediction models as a basis for practical tools.
- Score: 1.3621712165154805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the challenge of building domain-specific knowledge models for
industrial use cases, where labelled data and taxonomic information are
initially scarce. Our focus is on inductive link prediction models as a basis
for practical tools that support knowledge engineers with exploring text
collections and discovering and linking new (so-called open-world) entities to
the knowledge graph. We argue that, although neural approaches to text mining
have yielded impressive results in recent years, current benchmarks do not
properly reflect the typical challenges encountered in industrial practice.
Therefore, our first contribution is an open benchmark coined IRT2 (inductive
reasoning with text) that (1) covers knowledge graphs of varying sizes
(including very small ones), (2) comes with incidental, low-quality text
mentions, and (3) includes not only triple completion but also ranking, which
is relevant for supporting experts with discovery tasks.
We investigate two neural models for inductive link prediction, one based on
end-to-end learning and one that learns from the knowledge graph and text data
in separate steps. These models compete with a strong bag-of-words baseline.
The results show a clear performance advantage for the neural approaches on
linking as the amount of available graph data decreases. For ranking, the
results are promising, and the neural approaches outperform the sparse
retriever by a wide margin.
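The sparse baseline mentioned above can be illustrated with a minimal sketch: score each known entity's text against an open-world mention's context using bag-of-words term frequencies and cosine similarity, then rank. The entity names and texts below are illustrative toy data, not drawn from IRT2 itself.

```python
from collections import Counter
import math

def bow(text):
    """Lowercased term-frequency vector (bag of words)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_entities(mention_context, entity_texts):
    """Rank known KG entities by textual overlap with an open-world mention.

    entity_texts: dict mapping entity id -> concatenated text mentions.
    """
    q = bow(mention_context)
    scores = {e: cosine(q, bow(t)) for e, t in entity_texts.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative toy entity texts (hypothetical ids, not IRT2 data)
entities = {
    "Q_guitar": "guitar is a stringed instrument played by plucking strings",
    "Q_piano": "piano keyboard instrument with hammers striking strings",
    "Q_laser": "laser device emitting coherent light beam",
}
ranking = rank_entities("a plucked stringed instrument similar to the guitar",
                        entities)
print(ranking[0][0])  # → Q_guitar
```

A real baseline would typically use TF-IDF or BM25 weighting rather than raw term counts, but the ranking structure is the same.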
Related papers
- Graphusion: Leveraging Large Language Models for Scientific Knowledge Graph Fusion and Construction in NLP Education [14.368011453534596]
We introduce Graphusion, a zero-shot knowledge graph framework from free text.
The core fusion module provides a global view of triplets, incorporating entity merging, conflict resolution, and novel triplet discovery.
Our evaluation demonstrates that Graphusion surpasses supervised baselines by up to 10% in accuracy on link prediction.
arXiv Detail & Related papers (2024-07-15T15:13:49Z) - Rethinking the Effectiveness of Graph Classification Datasets in Benchmarks for Assessing GNNs [7.407592553310068]
We propose an empirical protocol based on a fair benchmarking framework to investigate the performance discrepancy between simple methods and GNNs.
We also propose a novel metric to quantify the dataset effectiveness by considering both dataset complexity and model performance.
Our findings shed light on the current understanding of benchmark datasets, and our new platform could fuel the future evolution of graph classification benchmarks.
arXiv Detail & Related papers (2024-07-06T08:33:23Z) - G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning [8.02547453169677]
We propose a novel Graph-based Structure-Aware Prompt Learning Model for commonsense reasoning, named G-SAP.
In particular, an evidence graph is constructed by integrating multiple knowledge sources, i.e., ConceptNet, Wikipedia, and Cambridge Dictionary.
The results reveal a significant advancement over existing models, notably a 6.12% improvement over the SoTA LM+GNNs model on the OpenBookQA dataset.
arXiv Detail & Related papers (2024-05-09T08:28:12Z) - Exploring Large Language Models for Knowledge Graph Completion [17.139056629060626]
We consider triples in knowledge graphs as text sequences and introduce an innovative framework called Knowledge Graph LLM.
Our technique employs entity and relation descriptions of a triple as prompts and utilizes the response for predictions.
Experiments on various benchmark knowledge graphs demonstrate that our method attains state-of-the-art performance in tasks such as triple classification and relation prediction.
arXiv Detail & Related papers (2023-08-26T16:51:17Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Benchmarking Node Outlier Detection on Graphs [90.29966986023403]
Graph outlier detection is an emerging but crucial machine learning task with numerous applications.
We present the first comprehensive unsupervised node outlier detection benchmark for graphs called UNOD.
arXiv Detail & Related papers (2022-06-21T01:46:38Z) - A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information to address the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.