Deep Kernel Supervised Hashing for Node Classification in Structural
Networks
- URL: http://arxiv.org/abs/2010.13582v2
- Date: Sat, 3 Apr 2021 00:45:02 GMT
- Title: Deep Kernel Supervised Hashing for Node Classification in Structural
Networks
- Authors: Jia-Nan Guo, Xian-Ling Mao, Shu-Yang Lin, Wei Wei and Heyan Huang
- Abstract summary: We propose a novel Deep Kernel Supervised Hashing (DKSH) method to learn the hashing representations of nodes for node classification.
The proposed method significantly outperforms the state-of-the-art baselines on three real-world benchmark datasets.
- Score: 39.459721876872266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Node classification in structural networks has been proven to be useful in many real-world applications. With the development of network embedding, the performance of node classification has been greatly improved. However, nearly all existing network embedding based methods struggle to capture the actual category features of a node because of the linear inseparability problem in low-dimensional space; meanwhile, they cannot simultaneously incorporate network structure information and node label information into the network embedding. To address these problems, in this paper we propose a novel Deep Kernel Supervised Hashing (DKSH) method to learn hashing representations of nodes for node classification. Specifically, deep multiple kernel learning is first proposed to map nodes into a suitable Hilbert space to deal with the linear inseparability problem. Then, instead of only considering the structural similarity between two nodes, a novel similarity matrix is designed to merge both network structure information and node label information. Supervised by this similarity matrix, the learned hashing representations of nodes preserve both kinds of information well in the learned Hilbert space. Extensive experiments show that the proposed method significantly outperforms the state-of-the-art baselines on three real-world benchmark datasets.
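The construction the abstract describes, a pairwise similarity matrix that merges structural similarity and label agreement and supervises relaxed binary codes, can be sketched as follows. This is a minimal illustration, not the paper's DKSH: the deep multiple kernel learning stack is replaced by a plain MLP, and the mixing weight `alpha`, layer sizes, and loss form are assumptions.

```python
# Hedged sketch of supervised hashing against a merged similarity matrix.
# Not the authors' DKSH implementation: the deep multiple kernel stack is
# replaced by a plain MLP, and the structure/label mixing weight `alpha`
# is an assumed hyperparameter.
import torch
import torch.nn as nn
import torch.nn.functional as F

def merged_similarity(adj, labels, alpha=0.5):
    """Blend structural similarity (cosine of adjacency rows) with label agreement."""
    a = F.normalize(adj, dim=1)
    struct = a @ a.t()                                     # structural similarity in [0, 1]
    label = (labels.unsqueeze(1) == labels.unsqueeze(0)).float()
    return alpha * struct + (1.0 - alpha) * label

class HashingHead(nn.Module):
    def __init__(self, in_dim, n_bits=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_bits))
    def forward(self, x):
        return torch.tanh(self.net(x))                     # relaxed {-1, +1} hash codes

def pairwise_hash_loss(codes, sim):
    """Push scaled code inner products toward the supervision matrix."""
    inner = codes @ codes.t() / codes.shape[1]             # in [-1, 1]
    return F.mse_loss(inner, 2.0 * sim - 1.0)              # map sim from [0,1] to [-1,1]

# Toy usage: 6 nodes, 4-dim input features, binary labels.
adj = torch.rand(6, 6).round()
feats = torch.randn(6, 4)
labels = torch.tensor([0, 0, 1, 1, 0, 1])
head = HashingHead(4)
loss = pairwise_hash_loss(head(feats), merged_similarity(adj, labels))
loss.backward()
```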
Related papers
- Contrastive Meta-Learning for Few-shot Node Classification [54.36506013228169]
Few-shot node classification aims to predict labels for nodes on graphs with only limited labeled nodes as references.
We create a novel contrastive meta-learning framework on graphs, named COSMIC, with two key designs.
arXiv Detail & Related papers (2023-06-27T02:22:45Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
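The summary names an all-pair, Gumbel-Softmax-driven propagation step; a toy dense version is sketched below. It omits NodeFormer's kernelized random-feature approximation that makes the operator scale linearly, and the temperature `tau` and projection shapes are assumptions.

```python
# Toy all-pair message passing with Gumbel-Softmax weights over destination
# nodes. This dense O(N^2) version is only illustrative; the kernelized
# operator that approximates it at linear cost is not reproduced here.
import torch
import torch.nn.functional as F

def gumbel_all_pair_propagate(x, wq, wk, wv, tau=0.5):
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.t() / q.shape[-1] ** 0.5                # all-pair affinities
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + 1e-9) + 1e-9)
    weights = F.softmax((scores + gumbel) / tau, dim=-1)   # differentiable sampled edges
    return weights @ v                                     # aggregate from every node

x = torch.randn(5, 8)
wq, wk, wv = (torch.randn(8, 8) for _ in range(3))
out = gumbel_all_pair_propagate(x, wq, wk, wv)
```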
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - On the Power of Gradual Network Alignment Using Dual-Perception
Similarities [14.779474659172923]
Network alignment (NA) is the task of finding the correspondence of nodes between two networks based on the network structure and node attributes.
Our study is motivated by the fact that, since most existing NA methods attempt to discover all node pairs at once, they do not harness the information enriched through the interim discovery of node correspondences.
We propose Grad-Align, a new NA method that gradually discovers node pairs by making full use of node pairs exhibiting strong consistency.
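A generic version of this gradual strategy, committing only the most confident pairs per round and letting them reinforce their neighbours' scores, might look like the sketch below; the dual-perception similarity itself is not modelled and all hyperparameters are placeholders.

```python
# Illustrative greedy loop for gradual alignment: commit a few high-confidence
# pairs each round, then boost the scores of their neighbours' candidate pairs.
import numpy as np

def gradual_align(sim, adj1, adj2, rounds=5, per_round=2, boost=0.1):
    sim = sim.astype(float).copy()
    used_i, used_j, matched = set(), set(), []
    for _ in range(rounds):
        order = np.argsort(sim, axis=None)[::-1]           # highest similarity first
        added = 0
        for flat in order:
            i, j = np.unravel_index(flat, sim.shape)
            if i in used_i or j in used_j:
                continue
            matched.append((i, j))
            used_i.add(i); used_j.add(j)
            # committed pairs reinforce candidate pairs of their neighbours
            sim += boost * np.outer(adj1[i], adj2[j])
            added += 1
            if added == per_round:
                break
    return matched

rng = np.random.default_rng(0)
a1, a2 = (rng.integers(0, 2, (6, 6)) for _ in range(2))
pairs = gradual_align(rng.random((6, 6)), a1, a2)
```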
arXiv Detail & Related papers (2022-01-26T14:01:32Z) - DeHIN: A Decentralized Framework for Embedding Large-scale Heterogeneous
Information Networks [64.62314068155997]
We present the Decentralized Embedding Framework for Heterogeneous Information Network (DeHIN) in this paper.
DeHIN presents a context-preserving partition mechanism that innovatively formulates a large HIN as a hypergraph.
Our framework then adopts a decentralized strategy to efficiently partition HINs using a tree-like pipeline.
arXiv Detail & Related papers (2022-01-08T04:08:36Z) - A Framework for Joint Unsupervised Learning of Cluster-Aware Embedding
for Heterogeneous Networks [6.900303913555705]
Heterogeneous Information Network (HIN) embedding refers to the low-dimensional projections of the HIN nodes that preserve the HIN structure and semantics.
We propose a framework for joint learning of cluster embeddings as well as cluster-aware HIN embedding.
arXiv Detail & Related papers (2021-08-09T11:36:36Z) - Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
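The attention-sort-then-convolve idea can be illustrated with a small layer like the one below; the attention form, kernel size, and pooling are assumptions rather than the Node2Seq release.

```python
# Sketch of "sort neighbours by attention, then run a 1D CNN over the sequence".
# Layer sizes, the pairwise attention scorer, and mean pooling are assumptions.
import torch
import torch.nn as nn

class AttnSortConv(nn.Module):
    def __init__(self, dim, k=3):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)                  # attention on (target, neighbour)
        self.conv = nn.Conv1d(dim, dim, kernel_size=k, padding=k // 2)

    def forward(self, target, neighbours):
        # target: (dim,), neighbours: (n_neigh, dim)
        pairs = torch.cat([target.expand_as(neighbours), neighbours], dim=-1)
        attn = self.score(pairs).squeeze(-1)                # one score per neighbour
        order = attn.argsort(descending=True)               # attention-based ordering
        seq = neighbours[order].t().unsqueeze(0)             # (1, dim, n_neigh) for Conv1d
        return self.conv(seq).mean(dim=-1).squeeze(0)        # aggregate the convolved sequence

layer = AttnSortConv(dim=16)
agg = layer(torch.randn(16), torch.randn(7, 16))
```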
arXiv Detail & Related papers (2021-01-06T03:05:37Z) - DINE: A Framework for Deep Incomplete Network Embedding [33.97952453310253]
We propose a Deep Incomplete Network Embedding method, namely DINE.
We first complete the missing part, including both nodes and edges, of a partially observable network using the expectation-maximization framework.
We evaluate DINE over three networks on multi-label classification and link prediction tasks.
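An EM-style completion loop of the kind this summary describes could be sketched as below, with a plain truncated-SVD embedding standing in for DINE's actual model; the sigmoid link and iteration count are assumptions.

```python
# Generic EM-style completion: the E-step scores unobserved entries from the
# current embeddings, the M-step refits embeddings on the completed adjacency.
import numpy as np

def em_complete(adj, observed_mask, dim=4, iters=10):
    a = adj * observed_mask
    for _ in range(iters):
        # M-step: embed the current (partially imputed) adjacency
        u, s, _ = np.linalg.svd(a, full_matrices=False)
        emb = u[:, :dim] * s[:dim]
        # E-step: re-estimate the unobserved entries from the embeddings
        recon = 1.0 / (1.0 + np.exp(-(emb @ emb.T)))        # sigmoid link scores
        a = np.where(observed_mask == 1, adj, recon)
    return emb, a

rng = np.random.default_rng(0)
adj = rng.integers(0, 2, (8, 8)).astype(float)
mask = (rng.random((8, 8)) > 0.3).astype(float)             # 1 = observed entry
emb, completed = em_complete(adj, mask)
```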
arXiv Detail & Related papers (2020-08-09T04:59:35Z) - Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
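One way to read "composition of the neighbour-based kernel and a learnable kernel" is sketched below; the RBF form of the learnable kernel and the elementwise composition are assumptions for illustration.

```python
# Sketch of aggregation weighted by a composite kernel: a fixed neighbour-based
# kernel (the adjacency) composed with a learnable feature-space kernel.
import torch
import torch.nn as nn

class CompositeKernelAgg(nn.Module):
    def __init__(self, in_dim, feat_dim=16, gamma=1.0):
        super().__init__()
        self.phi = nn.Linear(in_dim, feat_dim)              # learnable feature map
        self.gamma = gamma

    def forward(self, x, adj):
        z = self.phi(x)
        d2 = torch.cdist(z, z).pow(2)                       # pairwise squared distances
        k_feat = torch.exp(-self.gamma * d2)                # learnable RBF-style kernel
        k_comp = adj * k_feat                               # compose with neighbour kernel
        k_comp = k_comp / k_comp.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        return k_comp @ x                                    # kernel-weighted aggregation

agg = CompositeKernelAgg(in_dim=8)
x, adj = torch.randn(5, 8), torch.eye(5) + torch.rand(5, 5).round()
out = agg(x, adj)
```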
arXiv Detail & Related papers (2020-05-16T04:44:29Z) - A Block-based Generative Model for Attributed Networks Embedding [42.00826538556588]
We propose a block-based generative model for attributed network embedding from a probability perspective.
We use a neural network to characterize the nonlinearity between node embeddings and node attributes.
The results show that our proposed method consistently outperforms state-of-the-art embedding methods for both clustering and classification tasks.
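A block-style generative objective of the sort described here, soft block memberships generating edges while an MLP decodes attributes from the embeddings, might be sketched as follows; all likelihood choices and shapes are assumptions, not the paper's model.

```python
# Hedged sketch of a block-style generative objective for an attributed graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlockAttrModel(nn.Module):
    def __init__(self, n_nodes, n_blocks, attr_dim, emb_dim=16):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(n_nodes, emb_dim))
        self.to_block = nn.Linear(emb_dim, n_blocks)         # soft block memberships
        self.block_affinity = nn.Parameter(torch.randn(n_blocks, n_blocks))
        self.attr_decoder = nn.Sequential(nn.Linear(emb_dim, 32), nn.ReLU(),
                                          nn.Linear(32, attr_dim))

    def loss(self, adj, attrs):
        pi = F.softmax(self.to_block(self.emb), dim=-1)      # (N, K) memberships
        edge_logits = pi @ self.block_affinity @ pi.t()      # block-driven edge scores
        edge_nll = F.binary_cross_entropy_with_logits(edge_logits, adj)
        attr_nll = F.mse_loss(self.attr_decoder(self.emb), attrs)
        return edge_nll + attr_nll

model = BlockAttrModel(n_nodes=10, n_blocks=3, attr_dim=5)
adj = torch.rand(10, 10).round()
attrs = torch.randn(10, 5)
model.loss(adj, attrs).backward()
```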
arXiv Detail & Related papers (2020-01-06T03:44:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.