Deep Structural Knowledge Exploitation and Synergy for Estimating Node
Importance Value on Heterogeneous Information Networks
- URL: http://arxiv.org/abs/2402.12411v1
- Date: Mon, 19 Feb 2024 02:34:23 GMT
- Authors: Yankai Chen, Yixiang Fang, Qiongyan Wang, Xin Cao, Irwin King
- Abstract summary: We propose a novel learning framework: SKES.
It exploits heterogeneous structural knowledge to enrich the informativeness of node representations.
Extensive experiments on three widely-evaluated benchmarks demonstrate the performance superiority of SKES over several recent competing methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Node importance estimation problem has been studied conventionally with
homogeneous network topology analysis. To deal with network heterogeneity, a
few recent methods employ graph neural models to automatically learn diverse
sources of information. However, the major concern is that their fully
adaptive learning process may lead to insufficient information exploration,
reducing the problem to isolated node value prediction with weaker
performance and less interpretability. In this work, we
propose a novel learning framework: SKES. Different from previous automatic
learning designs, SKES exploits heterogeneous structural knowledge to enrich
the informativeness of node representations. Based on a sufficiently
uninformative reference, SKES estimates the importance value for any input
node, by quantifying its disparity against the reference. This establishes an
interpretable node importance computation paradigm. Furthermore, SKES builds
on the intuition that "nodes with similar characteristics are prone to have
similar importance values", guaranteeing that the informativeness disparity
between any two nodes is order-preservingly reflected by the embedding
distance of their associated latent features. Extensive
experiments on three widely-evaluated benchmarks demonstrate the performance
superiority of SKES over several recent competing methods.
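The reference-based paradigm above can be illustrated with a minimal sketch: score each node by its disparity (here, simply Euclidean distance) from an uninformative reference embedding. Note this is a hypothetical illustration of the general idea, not the paper's actual mechanism; the node names, embedding dimensionality, and the zero-vector reference are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical node embeddings (e.g., produced by a heterogeneous GNN encoder).
node_embeddings = {
    "author_1": rng.normal(1.5, 0.2, size=8),
    "paper_7": rng.normal(0.4, 0.2, size=8),
    "venue_3": rng.normal(0.1, 0.2, size=8),
}

# A "sufficiently uninformative" reference: here, the zero vector.
reference = np.zeros(8)

def importance(embedding, reference):
    """Score a node by its disparity (Euclidean distance) from the reference."""
    return float(np.linalg.norm(embedding - reference))

scores = {name: importance(emb, reference) for name, emb in node_embeddings.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Under this sketch, nodes whose representations are richer (farther from the uninformative reference) receive higher importance, and the ranking is directly interpretable as an embedding distance.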
Related papers
- Impact of network topology on the performance of Decentralized Federated
Learning [4.618221836001186]
Decentralized machine learning is gaining momentum, addressing infrastructure challenges and privacy concerns.
This study investigates the interplay between network structure and learning performance using three network topologies and six data distribution methods.
We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation.
arXiv Detail & Related papers (2024-02-28T11:13:53Z)
- KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot
Node Classification [75.95647590619929]
Zero-Shot Node Classification (ZNC) has been an emerging and crucial task in graph data analysis.
We propose a Knowledge-Aware Multi-Faceted framework (KMF) that enhances the richness of label semantics.
A novel geometric constraint is developed to alleviate the problem of prototype drift caused by node information aggregation.
arXiv Detail & Related papers (2023-08-15T02:38:08Z)
- KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph
Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Due to the risks and uncertainty involved, a crucial and urgent problem is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z)
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Tackling Oversmoothing of GNNs with Contrastive Learning [35.88575306925201]
Graph neural networks (GNNs) combine the relational structure of graph data with strong representation learning capability.
Oversmoothing makes the final representations of nodes indiscriminative, thus deteriorating the node classification and link prediction performance.
We propose the Topology-guided Graph Contrastive Layer, named TGCL, which is the first de-oversmoothing method maintaining all three mentioned metrics.
arXiv Detail & Related papers (2021-10-26T15:56:16Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Feature Importance Estimation with Self-Attention Networks [0.0]
Black-box neural network models are widely used in industry and science, yet are hard to understand and interpret.
Recently, the attention mechanism was introduced, offering insights into the inner workings of neural language models.
This paper explores the use of the attention mechanism in neural networks for estimating feature importance, as a means of explaining models learned from propositional (tabular) data.
arXiv Detail & Related papers (2020-02-11T15:15:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.