Duplicate Detection as a Service
- URL: http://arxiv.org/abs/2207.09672v1
- Date: Wed, 20 Jul 2022 06:02:11 GMT
- Title: Duplicate Detection as a Service
- Authors: Juliette Opdenplatz and Umutcan Şimşek and Dieter Fensel
- Abstract summary: Duplicate detection aims to find identity links between instances of knowledge graphs.
Current solutions to the problem require expert knowledge of the tool and the knowledge graph they are applied to.
We present our service-based approach to the duplicate detection task that provides an easy-to-use no-code solution.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Completeness of a knowledge graph is an important quality dimension and a
factor in how well an application that makes use of it performs. Completeness
can be improved by performing knowledge enrichment. Duplicate detection aims to
find identity links between the instances of knowledge graphs and is a
fundamental subtask of knowledge enrichment. Current solutions to the problem
require expert knowledge of the tool and the knowledge graph they are applied
to. Users might not have this expert knowledge. We present our service-based
approach to the duplicate detection task that provides an easy-to-use no-code
solution that is still competitive with the state-of-the-art and has recently
been adopted in an industrial context. The evaluation will be based on several
frequently used test scenarios.
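The abstract frames duplicate detection as finding identity links between the instances of two knowledge graphs. As a rough illustration of that task only (not the paper's actual service or configuration), the sketch below compares instances on a few literal properties with a simple string similarity and emits owl:sameAs candidate links; the property names, the averaging scheme, and the threshold are assumptions made for this example.

```python
# Minimal illustration of duplicate detection between two knowledge graphs:
# instances are compared on a few literal properties and, above a similarity
# threshold, linked with owl:sameAs. Property names, weighting, and the
# threshold are assumptions for this sketch, not the paper's configuration.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1] (difflib ratio as a stand-in for the
    paper's unspecified similarity measures)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def detect_duplicates(graph_a, graph_b, keys=("name", "address"), threshold=0.9):
    """Return identity-link candidates (owl:sameAs) between instances of two
    knowledge graphs, each given as {instance_iri: {property: literal}}."""
    links = []
    for iri_a, props_a in graph_a.items():
        for iri_b, props_b in graph_b.items():
            scores = [
                similarity(props_a[k], props_b[k])
                for k in keys
                if k in props_a and k in props_b
            ]
            if scores and sum(scores) / len(scores) >= threshold:
                links.append((iri_a, "owl:sameAs", iri_b))
    return links


# Example with hypothetical data: two small graphs describing the same hotel.
graph_a = {"ex:hotel1": {"name": "Hotel Alpenhof", "address": "Dorfstrasse 1"}}
graph_b = {"kg:h42": {"name": "Hotel Alpenhof", "address": "Dorfstrasse 1"}}
print(detect_duplicates(graph_a, graph_b))
# [('ex:hotel1', 'owl:sameAs', 'kg:h42')]
```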
Related papers
- Knowledge Graph Extension by Entity Type Recognition [2.8231106019727195]
We propose a novel knowledge graph extension framework based on entity type recognition.
The framework aims to achieve high-quality knowledge extraction by aligning the schemas and entities across different knowledge graphs.
arXiv Detail & Related papers (2024-05-03T19:55:03Z)
- Collaborative Knowledge Infusion for Low-resource Stance Detection [83.88515573352795]
Target-related knowledge is often needed to assist stance detection models.
We propose a collaborative knowledge infusion approach for low-resource stance detection tasks.
arXiv Detail & Related papers (2024-03-28T08:32:14Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- KGrEaT: A Framework to Evaluate Knowledge Graphs via Downstream Tasks [1.8722948221596285]
KGrEaT is a framework to estimate the quality of knowledge graphs via actual downstream tasks like classification, clustering, or recommendation.
The framework takes a knowledge graph as input, automatically maps it to the datasets to be evaluated on, and computes performance metrics for the defined tasks.
arXiv Detail & Related papers (2023-08-21T07:43:10Z)
- Leveraging Skill-to-Skill Supervision for Knowledge Tracing [13.753990664747265]
Knowledge tracing plays a pivotal role in intelligent tutoring systems.
Recent advances in knowledge tracing models have enabled better exploitation of problem solving history.
Knowledge tracing algorithms that incorporate knowledge directly are important to settings with limited data or cold starts.
arXiv Detail & Related papers (2023-06-12T03:23:22Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge captured by vision-language pre-training models to mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking [32.22697200984185]
We propose a novel knowledge graph distillation method and obtain a knowledge meta graph as the bridge between query and passage.
To align both kinds of embeddings in the latent space, we employ a PLM as the text encoder and a graph neural network over the knowledge meta graph as the knowledge encoder.
arXiv Detail & Related papers (2022-04-25T14:07:28Z)
- Conditional Attention Networks for Distilling Knowledge Graphs in Recommendation [74.14009444678031]
We propose Knowledge-aware Conditional Attention Networks (KCAN) to incorporate a knowledge graph into a recommender system.
We first use knowledge-aware attention propagation to obtain node representations, which capture the global semantic similarity on the user-item network and the knowledge graph.
Then, by applying a conditional attention aggregation on the subgraph, we refine the knowledge graph to obtain target-specific node representations.
arXiv Detail & Related papers (2021-11-03T09:40:43Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting when the knowledge required to answer a question is not given/annotated, neither at training nor test time.
We tap into two types of knowledge representations and reasoning: first, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
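The KRISP summary above distinguishes implicit knowledge, absorbed during language pre-training, from symbolic knowledge. As a generic illustration of the former only (not KRISP's architecture), the sketch below probes a pre-trained masked language model with the Hugging Face transformers fill-mask pipeline; the model name and prompt are arbitrary choices for this example.

```python
# Illustration of "implicit knowledge" in a pre-trained transformer: a
# fill-mask probe recovers factual completions learned during pre-training.
# This is a generic probe with Hugging Face transformers, not the KRISP model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The capital of France is [MASK].", top_k=3):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")

# The top completion ("paris") is knowledge stored implicitly in the model's
# weights; no external knowledge base is consulted.
```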