Mapping Patterns for Virtual Knowledge Graphs
- URL: http://arxiv.org/abs/2012.01917v2
- Date: Fri, 11 Aug 2023 09:45:49 GMT
- Title: Mapping Patterns for Virtual Knowledge Graphs
- Authors: Diego Calvanese, Avigdor Gal, Davide Lanti, Marco Montali, Alessandro Mosca, Roee Shraga
- Abstract summary: Virtual Knowledge Graphs (VKG) constitute one of the most promising paradigms for integrating and accessing legacy data sources.
We build on well-established methodologies and patterns studied in data management, data analysis, and conceptual modeling.
We validate our catalog on the considered VKG scenarios, showing it covers the vast majority of patterns present therein.
- Score: 71.61234136161742
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Virtual Knowledge Graphs (VKG) constitute one of the most promising paradigms
for integrating and accessing legacy data sources. A critical bottleneck in the
integration process involves the definition, validation, and maintenance of
mappings that link data sources to a domain ontology. To support the management
of mappings throughout their entire lifecycle, we propose a comprehensive
catalog of sophisticated mapping patterns that emerge when linking databases to
ontologies. To do so, we build on well-established methodologies and patterns
studied in data management, data analysis, and conceptual modeling. These are
extended and refined through the analysis of concrete VKG benchmarks and
real-world use cases, and considering the inherent impedance mismatch between
data sources and ontologies. We validate our catalog on the considered VKG
scenarios, showing that it covers the vast majority of patterns present
therein.
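To make the notion of a mapping concrete, the following is a minimal sketch of the simplest kind of pattern the abstract alludes to: each row of a relational table is mapped to an individual of an ontology class, with a column populating a data property. The table, namespaces, and property names are illustrative assumptions, not taken from the paper's catalog, and the snippet materializes triples only for demonstration.

```python
# Illustrative sketch of a basic "table row -> class instance" mapping pattern.
# All identifiers (table 'person', namespaces, property names) are assumptions.
import sqlite3

EX = "http://example.org/ontology#"        # assumed ontology namespace
DATA = "http://example.org/data/person/"   # assumed IRI template prefix
RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"

def map_person_rows(conn):
    """Yield (subject, predicate, object) triples for each row of the 'person' table."""
    for pid, name in conn.execute("SELECT id, full_name FROM person"):
        subject = f"<{DATA}{pid}>"                       # subject IRI built from the primary key
        yield (subject, RDF_TYPE, f"<{EX}Person>")       # class assertion (entity-to-class pattern)
        yield (subject, f"<{EX}fullName>", f'"{name}"')  # data property filled from a column

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, full_name TEXT)")
    conn.executemany("INSERT INTO person VALUES (?, ?)", [(1, "Ada Lovelace"), (2, "Alan Turing")])
    for triple in map_person_rows(conn):
        print(" ".join(triple) + " .")
```

In an actual VKG setting the triples would typically remain virtual: queries posed over the ontology are rewritten into SQL over the sources according to such mappings, rather than the graph being materialized.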
Related papers
- Integrating Large Language Models and Knowledge Graphs for Extraction and Validation of Textual Test Data [3.114910206366326]
Aerospace manufacturing companies, such as Thales Alenia Space, design, develop, integrate, verify, and validate products.
We propose a hybrid methodology that leverages Knowledge Graphs (KGs) in conjunction with Large Language Models (LLMs) to extract and validate data.
arXiv Detail & Related papers (2024-08-03T07:42:53Z)
- Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective [60.64922606733441]
We introduce a mathematical model that formalizes relational learning as hypergraph recovery to study pre-training of Foundation Models (FMs).
In our framework, the world is represented as a hypergraph, with data abstracted as random samples from hyperedges. We theoretically examine the feasibility of a Pre-Trained Model (PTM) to recover this hypergraph and analyze the data efficiency in a minimax near-optimal style.
arXiv Detail & Related papers (2024-06-17T06:20:39Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Towards a Gateway for Knowledge Graph Schemas Collection, Analysis, and Embedding [10.19939896927137]
This paper describes the Live Semantic Web initiative: a first version of a gateway aimed at leveraging the wealth of relational data collected by many existing knowledge graphs.
arXiv Detail & Related papers (2023-11-21T09:22:02Z)
- Variational Interpretable Learning from Multi-view Data [2.687817337319978]
DICCA is designed to disentangle both the shared and view-specific variations for multi-view data.
Empirical results on real-world datasets show that our methods are competitive across domains.
arXiv Detail & Related papers (2022-02-28T01:56:44Z)
- End-to-End Hierarchical Relation Extraction for Generic Form Understanding [0.6299766708197884]
We present a novel deep neural network to jointly perform both entity detection and link prediction.
Our model extends the Multi-stage Attentional U-Net architecture with the Part-Intensity Fields and Part-Association Fields for link prediction.
We demonstrate the effectiveness of the model on the Form Understanding in Noisy Scanned Documents dataset.
arXiv Detail & Related papers (2021-06-02T06:51:35Z)
- A Variational Information Bottleneck Approach to Multi-Omics Data Integration [98.6475134630792]
We propose a deep variational information bottleneck (IB) approach for incomplete multi-view observations.
Our method applies the IB framework on marginal and joint representations of the observed views to focus on intra-view and inter-view interactions that are relevant for the target.
Experiments on real-world datasets show that our method consistently achieves gains from data integration and outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2021-02-05T06:05:39Z)
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose Semantic Graph Convolutional Networks (SGCN), which explore the implicit semantics by learning latent semantic paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z)
- PPKE: Knowledge Representation Learning by Path-based Pre-training [43.41597219004598]
We propose a Path-based Pre-training model to learn Knowledge Embeddings, called PPKE.
Our model achieves state-of-the-art results on several benchmark datasets for link prediction and relation prediction tasks.
arXiv Detail & Related papers (2020-12-07T10:29:30Z)
- Generative Partial Multi-View Clustering [133.36721417531734]
We propose a generative partial multi-view clustering model, named GP-MVC, to address the incomplete multi-view problem.
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views.
Second, view-specific generative adversarial networks are developed to generate the missing data of one view conditioning on the shared representation given by other views.
arXiv Detail & Related papers (2020-03-29T17:48:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.