CatGCN: Graph Convolutional Networks with Categorical Node Features
- URL: http://arxiv.org/abs/2009.05303v4
- Date: Wed, 9 Mar 2022 10:46:22 GMT
- Title: CatGCN: Graph Convolutional Networks with Categorical Node Features
- Authors: Weijian Chen, Fuli Feng, Qifan Wang, Xiangnan He, Chonggang Song,
Guohui Ling, Yongdong Zhang
- Abstract summary: CatGCN is tailored for graph learning when the node features are categorical.
We train CatGCN in an end-to-end fashion and demonstrate it on semi-supervised node classification.
- Score: 99.555850712725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies on Graph Convolutional Networks (GCNs) reveal that the initial
node representations (i.e., the node representations before the first-time
graph convolution) largely affect the final model performance. However, when
learning the initial representation for a node, most existing work linearly
combines the embeddings of node features, without considering the interactions
among the features (or feature embeddings). We argue that when the node
features are categorical, e.g., in many real-world applications like user
profiling and recommender system, feature interactions usually carry important
signals for predictive analytics. Ignoring them will result in suboptimal
initial node representation and thus weaken the effectiveness of the follow-up
graph convolution. In this paper, we propose a new GCN model named CatGCN,
which is tailored for graph learning when the node features are categorical.
Specifically, we integrate two ways of explicit interaction modeling into the
learning of initial node representation, i.e., local interaction modeling on
each pair of node features and global interaction modeling on an artificial
feature graph. We then refine the enhanced initial node representations with
the neighborhood aggregation-based graph convolution. We train CatGCN in an
end-to-end fashion and demonstrate it on semi-supervised node classification.
Extensive experiments on three tasks of user profiling (the prediction of user
age, city, and purchase level) from Tencent and Alibaba datasets validate the
effectiveness of CatGCN, especially the positive effect of performing feature
interaction modeling before graph convolution.
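To make the two-stage design concrete, below is a minimal PyTorch sketch of the idea, assuming each node carries a fixed-length list of categorical feature IDs. The class and parameter names are illustrative: the local interaction is approximated with FM-style bi-interaction pooling, the artificial feature graph is simplified to a fully connected graph over a node's own features (so its convolution reduces to mean pooling plus a projection), and the node-level convolution is a plain normalized-adjacency aggregation. The actual CatGCN modules may differ.
```python
# Minimal sketch of the CatGCN idea described in the abstract (not the authors' code).
# Assumptions: fixed-length multi-hot categorical features per node, FM-style
# bi-interaction pooling for local interactions, a simplified feature-graph
# convolution for global interactions, and a dense normalized adjacency.
import torch
import torch.nn as nn


class CatGCNSketch(nn.Module):
    def __init__(self, num_categories: int, emb_dim: int, num_classes: int):
        super().__init__()
        self.embedding = nn.Embedding(num_categories, emb_dim)
        self.global_proj = nn.Linear(emb_dim, emb_dim)   # projection for the feature-graph convolution
        self.classifier = nn.Linear(emb_dim, num_classes)

    def initial_node_repr(self, feat_ids: torch.Tensor) -> torch.Tensor:
        """feat_ids: [num_nodes, num_feats] categorical feature IDs per node."""
        e = self.embedding(feat_ids)                      # [N, F, d]

        # Local interaction: FM-style pooling over all feature pairs,
        # 0.5 * ((sum_i e_i)^2 - sum_i e_i^2), computed in linear time.
        sum_sq = e.sum(dim=1) ** 2
        sq_sum = (e ** 2).sum(dim=1)
        local = 0.5 * (sum_sq - sq_sum)                   # [N, d]

        # Global interaction: one convolution on an artificial feature graph.
        # Here the graph is fully connected among a node's own features, so the
        # aggregation reduces to mean pooling followed by a projection.
        global_ = torch.relu(self.global_proj(e.mean(dim=1)))  # [N, d]

        return local + global_                            # enhanced initial node representation

    def forward(self, feat_ids: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        """adj_norm: [N, N] symmetric-normalized adjacency (with self-loops)."""
        h = self.initial_node_repr(feat_ids)
        h = adj_norm @ h                                  # neighborhood-aggregation graph convolution
        return self.classifier(h)                         # logits for semi-supervised node classification


# Toy usage: 5 nodes, 3 categorical features each, 20 category IDs in total.
feat_ids = torch.randint(0, 20, (5, 3))
adj = torch.eye(5)                                        # placeholder graph: self-loops only
model = CatGCNSketch(num_categories=20, emb_dim=16, num_classes=4)
print(model(feat_ids, adj).shape)                         # torch.Size([5, 4])
```
The point the abstract emphasizes is visible in initial_node_repr: feature interactions are modeled before the graph convolution ever touches the node representations.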
Related papers
- Saliency-Aware Regularized Graph Neural Network [39.82009838086267]
We propose the Saliency-Aware Regularized Graph Neural Network (SAR-GNN) for graph classification.
We first estimate the global node saliency by measuring the semantic similarity between the compact graph representation and node features.
Then the learned saliency distribution is leveraged to regularize the neighborhood aggregation of the backbone.
arXiv Detail & Related papers (2024-01-01T13:44:16Z) - Point-Voxel Absorbing Graph Representation Learning for Event Stream based Recognition [46.80940095322873]
We propose a novel dual point-voxel absorbing graph representation learning framework for event stream data.
The key aspect of the proposed AGCN is its ability to effectively capture the importance of nodes and thus be fully aware of node representations.
arXiv Detail & Related papers (2023-06-08T14:38:43Z) - Neighborhood Convolutional Network: A New Paradigm of Graph Neural Networks for Node Classification [12.062421384484812]
The decoupled Graph Convolutional Network (GCN) separates neighborhood aggregation from feature transformation rather than coupling them in each convolutional layer.
In this paper, we propose a new paradigm of GCN, termed Neighborhood Convolutional Network (NCN).
In this way, the model inherits the merit of the decoupled GCN for aggregating neighborhood information while, at the same time, developing much more powerful feature learning modules.
arXiv Detail & Related papers (2022-11-15T02:02:51Z) - Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z) - Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which show that our model can effectively improve performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z) - Data Augmentation for Graph Convolutional Network on Semi-Supervised Classification [6.619370466850894]
We study the problem of graph data augmentation for the Graph Convolutional Network (GCN).
Specifically, we conduct cosine similarity based cross operation on the original features to create new graph features, including new node attributes.
We also propose an attentional integrating model that combines the hidden node embeddings encoded by these GCNs into the final node embeddings via a weighted sum.
arXiv Detail & Related papers (2021-06-16T15:13:51Z) - Node Similarity Preserving Graph Convolutional Networks [51.520749924844054]
Graph Neural Networks (GNNs) explore the graph structure and node features by aggregating and transforming information within node neighborhoods.
We propose SimP-GCN that can effectively and efficiently preserve node similarity while exploiting graph structure.
We validate the effectiveness of SimP-GCN on seven benchmark datasets including three assortative and four disassortative graphs.
arXiv Detail & Related papers (2020-11-19T04:18:01Z) - GAIN: Graph Attention & Interaction Network for Inductive Semi-Supervised Learning over Large-scale Graphs [18.23435958000212]
Graph Neural Networks (GNNs) have led to state-of-the-art performance on a variety of machine learning tasks such as recommendation, node classification and link prediction.
Most existing GNN models exploit a single type of aggregator to aggregate neighboring nodes' information.
We propose a novel graph neural network architecture, Graph Attention & Interaction Network (GAIN), for inductive learning on graphs.
arXiv Detail & Related papers (2020-11-03T00:20:24Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between the label propagation algorithm (LPA) and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z) - Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework the Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes.
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
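To make the bilinear-aggregation idea in the BGNN entry above concrete, here is a minimal PyTorch sketch, assuming a dense 0/1 adjacency with self-loops. The class name, FM-style pairwise pooling, and degree normalization are illustrative assumptions, not the exact operator defined in the paper.
```python
# Sketch of a bilinear neighbor-interaction aggregator in the spirit of BGNN
# (assumed simplification, not the paper's operator).
import torch
import torch.nn as nn


class BilinearAggregatorSketch(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """h: [N, d] node representations, adj: [N, N] 0/1 adjacency with self-loops."""
        z = self.linear(h)                              # transform before aggregation
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)

        # Standard weighted-sum (mean) aggregation over the neighborhood.
        agg = (adj @ z) / deg

        # Pairwise interactions among neighbors via FM-style pooling:
        # 0.5 * ((sum_j z_j)^2 - sum_j z_j^2) sums all element-wise products
        # of neighbor pairs in linear time instead of enumerating the pairs.
        sum_z = adj @ z
        sum_sq = adj @ (z ** 2)
        pair = 0.5 * (sum_z ** 2 - sum_sq) / deg

        return torch.relu(agg + pair)


# Toy usage: 4 nodes with 8-dim representations on a chain graph with self-loops.
h = torch.randn(4, 8)
adj = torch.eye(4)
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
print(BilinearAggregatorSketch(8, 8)(h, adj).shape)     # torch.Size([4, 8])
```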
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.