Automatic Semantic Modeling for Structural Data Source with the Prior
Knowledge from Knowledge Base
- URL: http://arxiv.org/abs/2212.10915v1
- Date: Wed, 21 Dec 2022 10:54:59 GMT
- Title: Automatic Semantic Modeling for Structural Data Source with the Prior
Knowledge from Knowledge Base
- Authors: Jiakang Xu, Wolfgang Mayer, HongYu Zhang, Keqing He, Zaiwen Feng
- Abstract summary: We propose a novel method for semantically annotating structured data sources using machine learning, graph matching and modified frequent subgraph mining.
Our approach outperforms two state-of-the-art solutions in tricky cases where only a few models are known.
- Score: 15.075047172918547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A critical step in sharing semantic content online is to map the structural
data source to a public domain ontology. This problem is denoted as the
Relational-To-Ontology Mapping Problem (Rel2Onto). Considerable effort and
expertise are required to manually model the semantics of data. Therefore, an
automatic approach for learning the semantics of a data source is desirable.
Most of the existing work studies the semantic annotation of source attributes.
However, although critical, the research for automatically inferring the
relationships between attributes is very limited. In this paper, we propose a
novel method for semantically annotating structured data sources using machine
learning, graph matching and modified frequent subgraph mining to amend the
candidate model. In our work, a knowledge graph is used as prior knowledge. Our
evaluation shows that our approach outperforms two state-of-the-art solutions
in tricky cases where only a few semantic models are known.
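The abstract describes annotating source attributes with ontology classes and then inferring the relationships between them, using a knowledge graph as prior knowledge. A minimal sketch of that idea follows; all class names, relation names, and frequencies below are hypothetical illustrations, not the paper's actual data or algorithm:

```python
# Hypothetical sketch of the Rel2Onto idea from the abstract: attributes are
# already annotated with ontology classes, and candidate relationships between
# them are ranked by pattern frequencies mined from a prior knowledge graph.
from collections import Counter

# Illustrative prior knowledge: (subject_class, relation, object_class)
# patterns with frequencies mined from a knowledge graph.
pattern_counts = Counter({
    ("Person", "worksFor", "Organization"): 120,
    ("Person", "bornIn", "City"): 45,
    ("Organization", "locatedIn", "City"): 60,
})

def rank_relations(subject_class, object_class):
    """Rank candidate relations between two annotated attributes by how
    often the pattern occurs in the prior knowledge graph."""
    candidates = [
        (rel, count)
        for (s, rel, o), count in pattern_counts.items()
        if s == subject_class and o == object_class
    ]
    return sorted(candidates, key=lambda pair: -pair[1])

# Annotating the attributes is the first step; inferring the relation
# between them is the harder problem the paper focuses on.
print(rank_relations("Person", "Organization"))  # → [('worksFor', 120)]
```

This only illustrates the role of the knowledge graph as a frequency prior; the paper's actual method additionally uses machine learning, graph matching, and modified frequent subgraph mining to amend the candidate model.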
Related papers
- End-to-End Ontology Learning with Large Language Models [11.755755139228219]
Large language models (LLMs) have been applied to solve various subtasks of ontology learning.
We address this gap with OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch.
In contrast to standard metrics, our metrics use deep learning techniques to define more robust structural distance measures between graphs.
Our model can be effectively adapted to new domains, like arXiv, needing only a small number of training examples.
arXiv Detail & Related papers (2024-10-31T02:52:39Z)
- A topic-aware graph neural network model for knowledge base updating [0.6875312133832077]
A key challenge is to maintain an up-to-date knowledge base.
Current knowledge base updating methods determine whether entities need to be updated.
We construct a topic-aware graph network for knowledge updating based on the user query log.
arXiv Detail & Related papers (2021-09-09T09:02:23Z)
- MapRE: An Effective Semantic Mapping Approach for Low-resource Relation
Extraction [11.821464352959454]
We propose a framework considering both label-agnostic and label-aware semantic mapping information for low-resource relation extraction.
We show that incorporating the above two types of mapping information in both pretraining and fine-tuning can significantly improve the model performance.
arXiv Detail & Related papers (2021-09-09T09:02:23Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic
Parsing [85.35582118010608]
Task-oriented semantic parsing is a critical component of virtual assistants.
Recent advances in deep learning have enabled several approaches to successfully parse more complex queries.
We propose a novel method that outperforms a supervised neural model at a 10-fold data reduction.
arXiv Detail & Related papers (2020-10-07T17:47:53Z)
- Extracting Semantic Concepts and Relations from Scientific Publications
by Using Deep Learning [0.0]
The aim of this paper is to introduce a proposal of automatically extracting semantic concepts and relations from scientific publications.
This paper suggests new types of semantic relations and highlights the use of deep learning (DL) models for semantic relation extraction.
arXiv Detail & Related papers (2020-09-01T10:19:18Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problem.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- A Simple Approach to Case-Based Reasoning in Knowledge Bases [56.661396189466664]
We present a surprisingly simple yet accurate approach to reasoning in knowledge graphs (KGs) that requires no training, and is reminiscent of case-based reasoning in classical artificial intelligence (AI).
Consider the task of finding a target entity given a source entity and a binary relation.
Our non-parametric approach derives crisp logical rules for each query by finding multiple graph path patterns that connect similar source entities through the given relation.
arXiv Detail & Related papers (2020-06-25T06:28:09Z)
- Inferential Text Generation with Multiple Knowledge Sources and
Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts of events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuels for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and investigates how to effectively utilize such a model, without access to the source data, to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.