Graph Neural Machine: A New Model for Learning with Tabular Data
- URL: http://arxiv.org/abs/2402.02862v1
- Date: Mon, 5 Feb 2024 10:22:15 GMT
- Title: Graph Neural Machine: A New Model for Learning with Tabular Data
- Authors: Giannis Nikolentzos and Siyun Wang and Johannes Lutzeyer and Michalis Vazirgiannis
- Abstract summary: Graph neural networks (GNNs) have recently become the standard tool for performing machine learning tasks on graphs.
In this work, we show that an MLP is equivalent to an asynchronous message passing GNN model.
We then propose a new machine learning model for tabular data, the so-called Graph Neural Machine (GNM).
- Score: 25.339493426758903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been a growing interest in mapping data from
different domains to graph structures. Among others, neural network models such
as the multi-layer perceptron (MLP) can be modeled as graphs. In fact, MLPs can
be represented as directed acyclic graphs. Graph neural networks (GNNs) have
recently become the standard tool for performing machine learning tasks on
graphs. In this work, we show that an MLP is equivalent to an asynchronous
message passing GNN model which operates on the MLP's graph representation. We
then propose a new machine learning model for tabular data, the so-called Graph
Neural Machine (GNM), which replaces the MLP's directed acyclic graph with a
nearly complete graph and which employs a synchronous message passing scheme.
We show that a single GNM model can simulate multiple MLP models. We evaluate
the proposed model in several classification and regression datasets. In most
cases, the GNM model outperforms the MLP architecture.
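The MLP-as-graph view described in the abstract can be sketched in a few lines. The snippet below is an illustrative assumption, not the authors' implementation: it builds a tiny two-layer MLP, then recomputes its output neuron by neuron as weighted messages passed along the edges of the MLP's DAG, one layer at a time (the asynchronous schedule the paper refers to), and checks that both views agree.

```python
import numpy as np

# Hypothetical sketch: a two-layer MLP viewed as a weighted DAG whose
# nodes are neurons. A forward pass is message passing along the edges,
# with layers updated one after another (asynchronously).
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Tiny MLP: 3 inputs -> 4 hidden units -> 2 outputs.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

# Standard matrix-form MLP forward pass.
mlp_out = W2 @ relu(W1 @ x)

# Same computation as message passing on the MLP's DAG: node states start
# at the input features; each layer's nodes aggregate weighted messages
# from their in-neighbours, then apply the nonlinearity.
h = x
for W, act in [(W1, relu), (W2, lambda z: z)]:
    # node j of the next layer receives the message W[j, i] * h[i]
    # from each in-neighbour i and sums them
    h = act(np.array([sum(W[j, i] * h[i] for i in range(len(h)))
                      for j in range(W.shape[0])]))

assert np.allclose(h, mlp_out)  # the two views coincide
```

The GNM of the paper replaces this layer-by-layer DAG with a nearly complete graph updated synchronously; the sketch above only illustrates the MLP side of the equivalence.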
Related papers
- KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning [27.638009679134523]
Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations.
In this work, we compare the performance of Kolmogorov-Arnold Networks (KANs) against that of MLPs in graph learning tasks.
arXiv Detail & Related papers (2024-06-26T14:21:21Z)
- GraphAny: A Foundation Model for Node Classification on Any Graph [18.90340185554506]
Foundation models that can perform inference on any new task without requiring specific training have revolutionized machine learning in vision and language applications.
In this work, we tackle two challenges with a new foundational architecture for inductive node classification named GraphAny.
Specifically, we learn attention scores for each node to fuse the predictions of multiple LinearGNNs to ensure generalization to new graphs.
Empirically, GraphAny trained on the Wisconsin dataset with only 120 labeled nodes can effectively generalize to 30 new graphs with an average accuracy of 67.26% in an inductive manner.
arXiv Detail & Related papers (2024-05-30T19:43:29Z)
- MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization [51.76758674012744]
Training graph neural networks (GNNs) on large graphs is complex and extremely time consuming.
We propose an embarrassingly simple, yet hugely effective method for GNN training acceleration, called MLPInit.
arXiv Detail & Related papers (2022-09-30T21:33:51Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- GraphMLP: A Graph MLP-Like Architecture for 3D Human Pose Estimation [68.65764751482774]
GraphMLP is a global-local-graphical unified architecture for 3D human pose estimation.
It incorporates the graph structure of human bodies into a model to meet the domain-specific demand of the 3D human pose.
It can be extended to model complex temporal dynamics in a simple way, with negligible computational cost growth in the sequence length.
arXiv Detail & Related papers (2022-06-13T18:59:31Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Neighborhood Random Walk Graph Sampling for Regularized Bayesian Graph Convolutional Neural Networks [0.6236890292833384]
In this paper, we propose a novel algorithm called Bayesian Graph Convolutional Network using Neighborhood Random Walk Sampling (BGCN-NRWS).
BGCN-NRWS uses a Markov Chain Monte Carlo (MCMC) based graph sampling algorithm utilizing graph structure, reduces overfitting by using a variational inference layer, and yields consistently competitive classification results compared to the state-of-the-art in semi-supervised node classification.
arXiv Detail & Related papers (2021-12-14T20:58:27Z)
- Modeling Graph Node Correlations with Neighbor Mixture Models [8.845058366817227]
We propose a new model, the Neighbor Mixture Model (NMM), for modeling node labels in a graph.
This model aims to capture correlations between the labels of nodes in a local neighborhood.
We show our proposed NMM advances the state-of-the-art in modeling real-world labeled graphs.
arXiv Detail & Related papers (2021-03-29T21:41:56Z)
- On Graph Neural Networks versus Graph-Augmented MLPs [51.23890789522705]
Graph-Augmented Multi-Layer Perceptrons (GA-MLPs) first augment node features with certain multi-hop operators on the graph.
We prove a separation in expressive power between GA-MLPs and GNNs that grows exponentially in depth.
arXiv Detail & Related papers (2020-10-28T17:59:59Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.