A Graph Neural Network with Negative Message Passing for Graph Coloring
- URL: http://arxiv.org/abs/2301.11164v1
- Date: Thu, 26 Jan 2023 15:08:42 GMT
- Title: A Graph Neural Network with Negative Message Passing for Graph Coloring
- Authors: Xiangyu Wang, Xueming Yan, Yaochu Jin
- Abstract summary: We propose a graph network model for graph coloring, which is a class of representative heterophilous problems.
We introduce negative message passing into the proposed graph neural network for more effective information exchange.
A new loss function that takes into account the self-information of the nodes is suggested to accelerate the learning process.
- Score: 12.501032566933178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks have received increased attention over the past years
due to their promising ability to handle graph-structured data, which can be
found in many real-world problems such as recommender systems and drug
synthesis. Most existing research focuses on using graph neural networks to
solve homophilous problems, but little attention has been paid to
heterophily-type problems. In this paper, we propose a graph network model for
graph coloring, which is a class of representative heterophilous problems.
Different from the conventional graph networks, we introduce negative message
passing into the proposed graph neural network for more effective information
exchange in handling graph coloring problems. Moreover, a new loss function
taking into account the self-information of the nodes is suggested to
accelerate the learning process. Experimental studies are carried out to
compare the proposed graph model with five state-of-the-art algorithms on ten
publicly available graph coloring problems and one real-world application.
Numerical results demonstrate the effectiveness of the proposed graph neural
network.
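The abstract names two technical ingredients: a message-passing rule that pushes adjacent nodes apart rather than together, and a loss term built on node self-information. The PyTorch sketch below illustrates the general idea only, under explicit assumptions (dense adjacency, a negated neighbor aggregate, an entropy-style self-information term with an assumed weight of 0.1); it is not the paper's exact layer or loss.

```python
# Minimal sketch of "negative message passing" for graph coloring.
# The negated aggregation and the entropy-based penalty are illustrative
# assumptions, not the authors' exact formulation.
import torch
import torch.nn as nn


class NegativeMessagePassingLayer(nn.Module):
    """Push a node's representation away from its neighbors' aggregate."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, out_dim)
        self.neigh_lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n, in_dim) node features, adj: dense (n, n) adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh_mean = adj @ x / deg
        # "Negative" message passing: subtract, rather than add, the neighbor term
        return torch.relu(self.self_lin(x) - self.neigh_lin(neigh_mean))


def coloring_loss(probs, adj, weight=0.1):
    """probs: (n, k) softmax color probabilities per node."""
    # Expected number of same-color conflicts over edges (each edge counted twice)
    conflict = (adj * (probs @ probs.t())).sum() / 2.0
    # Entropy-based "self-information" term rewarding confident color assignments
    self_info = -(probs * torch.log(probs + 1e-9)).sum(dim=1)
    return conflict + weight * self_info.mean()
```

In use, node features would pass through a few such layers, a softmax head would produce the per-node color probabilities, and training would continue until the conflict term reaches zero, i.e., a valid coloring.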
Related papers
- Graph Residual Noise Learner Network for Brain Connectivity Graph Prediction [1.9116784879310031]
A morphological brain graph depicting a connectional fingerprint is of paramount importance for charting brain dysconnectivity patterns.
We propose the Graph Residual Noise Learner Network (Grenol-Net), the first graph diffusion model for predicting a target graph from a source graph.
arXiv Detail & Related papers (2024-09-30T17:28:38Z)
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
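As a rough illustration of the linear-complexity idea named above (random-feature "kernelized" attention with a Gumbel perturbation), here is a minimal sketch; the feature map, the way the noise is injected, the temperature handling, and the function names are simplifying assumptions, not NodeFormer's actual operator.

```python
# Sketch: all-pair propagation in linear cost via positive random features.
import torch


def positive_random_features(x, proj, tau=1.0):
    # Performer-style positive features approximating exp(<q, k> / tau)
    x = x / tau ** 0.5
    return torch.exp(x @ proj - (x * x).sum(dim=-1, keepdim=True) / 2)


def kernelized_all_pair_propagation(q, k, v, num_features=64, tau=1.0):
    # q: (n, d), k: (m, d), v: (m, d_v); cost is linear in n and m
    proj = torch.randn(q.shape[-1], num_features) / q.shape[-1] ** 0.25
    # Gumbel noise on the keys: a simplified stand-in for sampling latent edges
    gumbel = -torch.log(-torch.log(torch.rand_like(k) + 1e-9) + 1e-9)
    q_feat = positive_random_features(q, proj, tau)
    k_feat = positive_random_features(k + tau * gumbel, proj, tau)
    kv = k_feat.t() @ v                                        # (num_features, d_v)
    normalizer = q_feat @ k_feat.sum(dim=0, keepdim=True).t()  # (n, 1)
    return (q_feat @ kv) / (normalizer + 1e-9)                 # (n, d_v)
```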
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
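The entry mentions contrastive learning over augmented brain networks. Below is a generic graph-level NT-Xent contrastive loss as a point of reference only; the paper's signed hierarchical pooling model and its specific augmentation strategy are not reproduced here.

```python
# Generic SimCLR-style contrastive loss on graph-level embeddings (sketch).
import torch
import torch.nn.functional as F


def graph_contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) graph-level embeddings of two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2B, d)
    sim = z @ z.t() / temperature                      # scaled cosine similarities
    mask = torch.eye(z.shape[0], dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))         # drop self-similarity
    batch = z1.shape[0]
    # The positive for view 1 of graph i is view 2 of graph i, and vice versa
    targets = torch.cat([torch.arange(batch, 2 * batch), torch.arange(0, batch)])
    return F.cross_entropy(sim, targets)
```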
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
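To make the unrolling idea concrete, here is a minimal sketch of truncated proximal gradient iterations turned into trainable layers with learned step sizes and thresholds; the quadratic observation model and the soft-threshold prox are illustrative assumptions, not the GDN paper's exact design.

```python
# Sketch: unrolled, truncated proximal gradient as a parameterized network.
import torch
import torch.nn as nn


class UnrolledProxGrad(nn.Module):
    """Truncated proximal gradient iterations unrolled into trainable layers."""

    def __init__(self, num_layers=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((num_layers,), 0.1))        # step sizes
        self.thresholds = nn.Parameter(torch.full((num_layers,), 0.01))  # sparsity prox

    def forward(self, observed):
        # Assumed observation model: observed ≈ latent @ latent (a simple
        # convolutive mixture); the real GDN uses a different parameterization.
        latent = observed.clone()
        for step, thr in zip(self.steps, self.thresholds):
            residual = latent @ latent - observed
            grad = residual @ latent.t() + latent.t() @ residual    # data-fit gradient
            latent = latent - step * grad                           # gradient step
            latent = torch.relu(latent.abs() - thr) * latent.sign() # soft-threshold
        return latent
```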
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Learning Graph Representations [0.0]
Graph Neural Networks (GNNs) are efficient ways to get insight into large dynamic graph datasets.
In this paper, we discuss graph convolutional neural networks, graph autoencoders, and social-temporal graph neural networks.
arXiv Detail & Related papers (2021-02-03T12:07:55Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In the prevailing formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
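A small networkx sketch of the edge-to-node correspondence described above (construction only; the node classifier that would score candidate links is omitted, and the example graph is an arbitrary stand-in):

```python
import networkx as nx

G = nx.karate_club_graph()     # stand-in for an "original" graph
L = nx.line_graph(G)           # each node of L is one edge of G

# One line-graph node per original edge, so predicting a link in G amounts to
# classifying the corresponding node in L.
print(G.number_of_edges(), L.number_of_nodes())   # equal by construction
print(list(L.nodes)[:3])                          # nodes of L are edge tuples of G
```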
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, stacking many such layers often degrades performance, and several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
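In the spirit of the adaptive large-receptive-field idea, here is a minimal sketch that decouples feature transformation from propagation and learns per-hop retention scores; the exact gating and normalization in DAGNN may differ, and the class name is an assumption.

```python
# Sketch: deep propagation with adaptively weighted multi-hop representations.
import torch
import torch.nn as nn


class AdaptiveDeepPropagation(nn.Module):
    """Decoupled transformation + K-hop propagation with learned hop weights."""

    def __init__(self, in_dim, hidden_dim, num_hops=10):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim)
        )
        self.score = nn.Linear(hidden_dim, 1)   # per-node, per-hop retention score
        self.num_hops = num_hops

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized dense adjacency (with self-loops)
        h = self.transform(x)
        hops = [h]
        for _ in range(self.num_hops):
            hops.append(adj_norm @ hops[-1])          # widen the receptive field
        stacked = torch.stack(hops, dim=1)            # (n, K + 1, d)
        weights = torch.sigmoid(self.score(stacked))  # adaptively gate each hop
        return (weights * stacked).sum(dim=1)         # (n, d)
```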
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- Get Rid of Suspended Animation Problem: Deep Diffusive Neural Network on Graph Semi-Supervised Classification [10.879701971582502]
We propose a new graph neural network, namely DIFNET, for graph representation learning and node classification.
Extensive experiments are carried out in this paper to compare DIFNET against several state-of-the-art graph neural network models.
arXiv Detail & Related papers (2020-01-22T09:19:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.