Graph Classification via Discriminative Edge Feature Learning
- URL: http://arxiv.org/abs/2210.02060v1
- Date: Wed, 5 Oct 2022 07:30:21 GMT
- Title: Graph Classification via Discriminative Edge Feature Learning
- Authors: Yang Yi, Xuequan Lu, Shang Gao, Antonio Robles-Kelly, Yuejie Zhang
- Abstract summary: Spectral graph convolutional neural networks (GCNNs) have been producing encouraging results in graph classification tasks.
We design an edge feature scheme and an add-on layer between every two stacked graph convolution layers in a GCNN.
Our method outperforms state-of-the-art graph classification methods on three new graph datasets.
- Score: 17.86550507456848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spectral graph convolutional neural networks (GCNNs) have been producing
encouraging results in graph classification tasks. However, most spectral GCNNs
utilize fixed graphs when aggregating node features, omitting edge feature
learning and failing to obtain an optimal graph structure. Moreover, many existing
graph datasets do not provide initialized edge features, further restricting the
ability of spectral GCNNs to learn edge features. In this paper, we address these
issues by designing an edge feature scheme and an add-on layer between every two
stacked graph convolution layers in a GCNN. Both are lightweight yet effective in
closing the gap between edge feature learning and improved graph classification
performance. The edge feature scheme makes edge features adapt to the node
representations at different graph convolution layers. The add-on layers help
adjust the edge features toward an optimal graph structure. To test the
effectiveness of our method, we take Euclidean positions as initial node features
and extract graphs with semantic information from point cloud objects. The node
features of our extracted graphs are better suited to edge feature learning than
those of most existing graph datasets, whose node features are one-hot encoded
labels. Three new graph datasets are constructed from the ModelNet40, ModelNet10
and ShapeNet Part datasets. Experimental results show that our method outperforms
state-of-the-art graph classification methods on the new datasets, reaching 96.56%
overall accuracy on Graph-ModelNet40, 98.79% on Graph-ModelNet10 and 97.91% on
Graph-ShapeNet Part. The constructed graph datasets will be released to the
community.
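As a rough illustration of the mechanism described in the abstract (not the authors' released code), the sketch below builds a k-nearest-neighbour graph from point coordinates, runs a graph convolution, and then applies a hypothetical add-on layer that derives edge weights from the current node representations and re-weights the adjacency before the next convolution. All names (knn_graph, EdgeAdaptLayer, SimpleGraphConv) and design details are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def knn_graph(pos: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Hypothetical helper: build a dense adjacency from point coordinates
    (N, 3) by connecting each point to its k nearest Euclidean neighbours."""
    dist = torch.cdist(pos, pos)                            # (N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]    # drop the self-match
    adj = torch.zeros_like(dist)
    adj.scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()                    # symmetrise

class EdgeAdaptLayer(nn.Module):
    """Sketch of an 'add-on' layer placed between two graph convolutions:
    derives an edge weight for every node pair from their current
    representations, then applies it to the existing adjacency."""
    def __init__(self, dim: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                      nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        n = x.size(0)
        # pairwise concatenation of node features -> per-pair edge feature (N, N, 2*dim)
        pair = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                          x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        weight = torch.sigmoid(self.edge_mlp(pair)).squeeze(-1)   # (N, N)
        return adj * weight                                       # adjusted graph structure

class SimpleGraphConv(nn.Module):
    """Minimal dense graph convolution: average neighbours, then project."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(adj @ x / deg))

# Usage: Euclidean positions as initial node features, as in the paper.
pos = torch.rand(128, 3)                 # a toy point cloud with 128 points
adj = knn_graph(pos, k=16)
conv1, adapt, conv2 = SimpleGraphConv(3, 64), EdgeAdaptLayer(64), SimpleGraphConv(64, 64)
h = conv1(pos, adj)
adj = adapt(h, adj)                      # edge weights adapt to current node features
h = conv2(h, adj)
graph_repr = h.mean(dim=0)               # readout for graph classification
```

The paper's actual edge feature scheme and add-on layer may parameterize and normalize the edge features differently; the snippet only illustrates where such a layer sits between two stacked graph convolutions.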
Related papers
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Learning Adaptive Neighborhoods for Graph Neural Networks [45.94778766867247]
Graph convolutional networks (GCNs) enable end-to-end learning on graph structured data.
We propose a novel end-to-end differentiable graph generator which builds graph topologies.
Our module can be readily integrated into existing pipelines involving graph convolution operations.
arXiv Detail & Related papers (2023-07-18T08:37:25Z)
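For illustration only, here is a minimal sketch of what an end-to-end differentiable graph generator of the kind described in the entry above can look like: pairwise scores computed from node features are turned into a soft, row-normalised adjacency that any dense graph convolution can consume. This is a generic construction with hypothetical names, not the module proposed in that paper.

```python
import torch
import torch.nn as nn

class SoftGraphGenerator(nn.Module):
    """Generic differentiable graph generator (illustrative sketch):
    scores every node pair from learned projections and produces a
    row-normalised soft adjacency via a temperature-controlled softmax."""
    def __init__(self, dim: int, temperature: float = 0.5):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.query(x) @ self.key(x).t() / x.size(-1) ** 0.5
        return torch.softmax(scores / self.temperature, dim=-1)   # (N, N), differentiable

# Plugged in front of any graph convolution that accepts a dense adjacency:
x = torch.rand(32, 64)                   # 32 nodes with 64-dim features
adj = SoftGraphGenerator(64)(x)          # learned topology, trainable end to end
h = adj @ x                              # stand-in for a graph convolution step
```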
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for such data types in most standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
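As a loose sketch of the stacking idea in the entry above (not that paper's framework): a base model intended for IID tabular data is fit on raw node features, its predicted class probabilities are smoothed over the graph, and a second-stage model is trained on the concatenation. The propagation rule and all names here are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propagate(probs: np.ndarray, adj: np.ndarray, steps: int = 2) -> np.ndarray:
    """Row-normalised neighbourhood averaging of per-node predictions."""
    norm = adj / np.clip(adj.sum(axis=1, keepdims=True), 1, None)
    for _ in range(steps):
        probs = norm @ probs
    return probs

# Toy data: node features X, labels y, dense adjacency adj (assumed given).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 16)), rng.integers(0, 3, size=100)
adj = (rng.random((100, 100)) < 0.05).astype(float)
adj = np.maximum(adj, adj.T)

base = LogisticRegression(max_iter=1000).fit(X, y)         # IID model, graph-unaware
smoothed = propagate(base.predict_proba(X), adj)           # graph-aware propagation
stacked = LogisticRegression(max_iter=1000).fit(
    np.hstack([X, smoothed]), y)                           # second-stage (stacked) model
```

A real stacking setup would train the second-stage model on out-of-fold predictions to avoid label leakage; the toy snippet above skips that for brevity.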
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
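The summary above does not spell out the topology-adaptive criterion; purely as an illustration of edge dropping as a training-time augmentation (a generic random scheme in the spirit of DropEdge, not that paper's adaptive rule), one could do the following:

```python
import torch

def drop_edges(edge_index: torch.Tensor, keep_prob: float = 0.8) -> torch.Tensor:
    """Randomly keep each edge with probability keep_prob during training.
    edge_index: (2, E) tensor of source/target node indices."""
    mask = torch.rand(edge_index.size(1)) < keep_prob
    return edge_index[:, mask]

# Applied per training step, the model sees a different sparsified graph each time.
edge_index = torch.randint(0, 50, (2, 200))   # a toy graph with 200 directed edges
sparser = drop_edges(edge_index, keep_prob=0.7)
```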
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode graph-structured data in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework which embeds both nodes and edges into a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z)
- GraphCrop: Subgraph Cropping for Graph Classification [36.33477716380905]
We develop the GraphCrop (Subgraph Cropping) data augmentation method to simulate the real-world noise of sub-structure omission.
By preserving the valid structure contexts for graph classification, we encourage GNNs to understand the content of graph structures in a global sense.
arXiv Detail & Related papers (2020-09-22T14:05:41Z)
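As a rough sketch only (not the authors' algorithm): one simple way to crop a connected subgraph for augmentation is to grow a breadth-first ball from a random seed node and keep the induced subgraph. The function below is hypothetical and ignores the paper's criteria for preserving valid structure contexts.

```python
import random
from collections import deque

def crop_subgraph(adjacency: dict, keep_ratio: float = 0.7) -> set:
    """Grow a connected node set from a random seed via BFS until
    roughly keep_ratio of the nodes are retained (illustrative only)."""
    nodes = list(adjacency)
    target = max(1, int(keep_ratio * len(nodes)))
    seed = random.choice(nodes)
    kept, queue = {seed}, deque([seed])
    while queue and len(kept) < target:
        for nbr in adjacency[queue.popleft()]:
            if nbr not in kept and len(kept) < target:
                kept.add(nbr)
                queue.append(nbr)
    return kept  # induce the augmented graph on these nodes

# Usage on a toy adjacency list; edges within `kept` form the cropped graph.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}
kept = crop_subgraph(graph, keep_ratio=0.6)
```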
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)