What can knowledge graph alignment gain with Neuro-Symbolic learning approaches?
- URL: http://arxiv.org/abs/2310.07417v1
- Date: Wed, 11 Oct 2023 12:03:19 GMT
- Title: What can knowledge graph alignment gain with Neuro-Symbolic learning approaches?
- Authors: Pedro Giesteira Cotovio, Ernesto Jimenez-Ruiz, Catia Pesquita
- Abstract summary: Knowledge Graphs (KGs) are the backbone of many data-intensive applications.
Current algorithms fail to articulate logical thinking and reasoning with lexical, structural, and semantic data learning.
This paper examines the current state of the art in KGA and explores the potential for neurosymbolic integration.
- Score: 1.8416014644193066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge Graphs (KG) are the backbone of many data-intensive applications
since they can represent data coupled with its meaning and context. Aligning
KGs across different domains and providers is necessary to afford a fuller and
integrated representation. A severe limitation of current KG alignment (KGA)
algorithms is that they fail to articulate logical thinking and reasoning with
lexical, structural, and semantic data learning. Deep learning models are
increasingly popular for KGA inspired by their good performance in other tasks,
but they suffer from limitations in explainability, reasoning, and data
efficiency. Hybrid neurosymbolic learning models hold the promise of
integrating logical and data perspectives to produce high-quality alignments
that are explainable and support validation through human-centric approaches.
This paper examines the current state of the art in KGA and explores the
potential for neurosymbolic integration, highlighting promising research
directions for combining these fields.
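The lexical side of KG alignment mentioned in the abstract can be illustrated with a minimal sketch: matching entities across two KGs by label similarity. This is not the paper's method; `lexical_align`, the dict-based toy KG format, and the 0.8 threshold are all illustrative assumptions.

```python
from difflib import SequenceMatcher

def lexical_align(kg_a, kg_b, threshold=0.8):
    """Propose entity alignments between two KGs by label similarity.

    kg_a, kg_b: dicts mapping entity IDs to label strings (toy format,
    assumed for illustration). Returns (id_a, id_b, score) pairs whose
    string-similarity score meets the threshold.
    """
    matches = []
    for id_a, label_a in kg_a.items():
        for id_b, label_b in kg_b.items():
            score = SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()
            if score >= threshold:
                matches.append((id_a, id_b, round(score, 2)))
    return matches

# Toy KGs from two providers describing overlapping entities.
kg_a = {"a1": "Heart Attack", "a2": "Aspirin"}
kg_b = {"b1": "heart attack", "b2": "Ibuprofen"}
print(lexical_align(kg_a, kg_b))  # aligns "Heart Attack" with "heart attack"
```

A purely lexical matcher like this is exactly where the abstract's critique bites: it proposes candidate alignments but cannot reason about whether they are logically consistent with the rest of either graph, which is the gap neurosymbolic integration aims to fill.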
Related papers
- Towards Graph Prompt Learning: A Survey and Beyond [38.55555996765227]
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability.
This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications.
arXiv Detail & Related papers (2024-08-26T06:36:42Z)
- G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning [8.02547453169677]
We propose a novel Graph-based Structure-Aware Prompt Learning Model for commonsense reasoning, named G-SAP.
In particular, an evidence graph is constructed by integrating multiple knowledge sources, i.e., ConceptNet, Wikipedia, and the Cambridge Dictionary.
The results reveal a significant advancement over existing models, notably a 6.12% improvement over the SoTA LM+GNNs model on the OpenBookQA dataset.
arXiv Detail & Related papers (2024-05-09T08:28:12Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results conducted on several graph benchmark datasets verify DGNN's superiority in node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN proposes a graph convolution network assisted by knowledge from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Representation Learning for Person or Entity-centric Knowledge Graphs: An Application in Healthcare [0.757843972001219]
This paper presents an end-to-end representation learning framework to extract entity-centric KGs from structured and unstructured data.
We introduce a star-shaped classifier to represent the multiple facets of a person and use it to guide KG creation.
We highlight that this approach has several potential applications across domains and is open-sourced.
arXiv Detail & Related papers (2023-05-09T17:39:45Z)
- KGNN: Distributed Framework for Graph Neural Knowledge Representation [38.080926752998586]
We develop a novel framework called KGNN to take full advantage of knowledge data for representation learning in the distributed learning system.
KGNN is equipped with a GNN-based encoder and a knowledge-aware decoder, which jointly explore high-order structure and attribute information.
arXiv Detail & Related papers (2022-05-17T12:32:02Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
- Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolution networks (GCN).
We design a specific GAE-based model for graph clustering to be consistent with the theory, namely Embedding Graph Auto-Encoder (EGAE).
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
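The GMI idea above, maximizing mutual information between a graph neural encoder's input and output, is commonly approximated with a contrastive bound. The sketch below is an InfoNCE-style stand-in in plain Python, not the paper's actual GMI estimator: each input is scored against its own representation (positive) versus all others (negatives).

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def infonce_loss(inputs, representations):
    """InfoNCE-style contrastive loss: a lower bound on the mutual
    information between each input and its own representation.
    inputs[i] and representations[i] form the positive pair; the
    other representations in the batch serve as negatives.
    """
    loss = 0.0
    for i, x in enumerate(inputs):
        scores = [math.exp(dot(x, z)) for z in representations]
        loss += -math.log(scores[i] / sum(scores))
    return loss / len(inputs)

# Toy node features and two hypothetical encoders' outputs.
inputs = [[1.0, 0.0], [0.0, 1.0]]
aligned = [[2.0, 0.0], [0.0, 2.0]]   # preserves which node is which
shuffled = [[0.0, 2.0], [2.0, 0.0]]  # discards the correspondence
assert infonce_loss(inputs, aligned) < infonce_loss(inputs, shuffled)
```

Minimizing this loss pushes each representation to stay informative about its own input, which is the behavior the unsupervised GMI training objective is designed to induce.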
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.