TabGNN: Multiplex Graph Neural Network for Tabular Data Prediction
- URL: http://arxiv.org/abs/2108.09127v1
- Date: Fri, 20 Aug 2021 11:51:32 GMT
- Title: TabGNN: Multiplex Graph Neural Network for Tabular Data Prediction
- Authors: Xiawei Guo, Yuhan Quan, Huan Zhao, Quanming Yao, Yong Li, Weiwei Tu
- Abstract summary: We propose a novel framework, TabGNN, based on recently popular graph neural networks (GNNs).
Specifically, we first construct a multiplex graph to model the multifaceted sample relations, and then design a multiplex graph neural network to learn an enhanced representation for each sample.
Experiments on eleven TDP datasets from various domains, covering both classification and regression tasks, show that TabGNN consistently improves prediction performance.
- Score: 43.35301059378836
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Tabular data prediction (TDP) is one of the most popular industrial
applications, and various methods have been designed to improve prediction
performance. However, existing works mainly focus on feature interactions and
ignore sample relations; for example, users with the same education level might
have a similar ability to repay their debt. In this work, by explicitly and
systematically modeling sample relations, we propose a novel framework, TabGNN,
based on recently popular graph neural networks (GNNs). Specifically, we first
construct a multiplex graph to model the multifaceted sample relations, and
then design a multiplex graph neural network to learn an enhanced representation
for each sample. To integrate TabGNN with the tabular solution in our company,
we concatenate the learned embeddings with the original features, which are then
fed to the prediction models inside the solution. Experiments on eleven TDP
datasets from various domains, covering both classification and regression
tasks, show that TabGNN consistently improves performance compared to AutoFE,
the tabular solution used at 4Paradigm.
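The abstract describes the pipeline only at a high level, so the following is a minimal, hypothetical sketch of the idea: build one graph per categorical column by connecting samples that share a value (the multiplex graph), run a simple mean-aggregation GNN layer on each graph, fuse the per-graph embeddings, and concatenate them with the original features before they go to any downstream tabular model. The column names, layer sizes, weight initialisation, and fusion-by-averaging are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy tabular data: numeric features plus two categorical columns that
    # each define one sample relation (e.g. "education level", "city").
    n_samples, n_numeric = 8, 5
    X_num = rng.normal(size=(n_samples, n_numeric))
    cat_cols = {
        "education": rng.integers(0, 3, size=n_samples),  # hypothetical relation 1
        "city": rng.integers(0, 2, size=n_samples),       # hypothetical relation 2
    }

    def relation_adjacency(values):
        """Connect samples that share the same categorical value."""
        adj = (values[:, None] == values[None, :]).astype(float)
        np.fill_diagonal(adj, 1.0)                   # self-loops
        return adj / adj.sum(axis=1, keepdims=True)  # row-normalise for mean aggregation

    def gnn_layer(adj, h, w):
        """One mean-aggregation message-passing layer with a ReLU."""
        return np.maximum(adj @ h @ w, 0.0)

    # One (randomly initialised) GNN per relation graph; real training would
    # learn these weights jointly with the prediction loss.
    d_hidden = 4
    per_relation_embeddings = []
    for values in cat_cols.values():
        adj = relation_adjacency(values)
        w = rng.normal(scale=0.1, size=(n_numeric, d_hidden))
        per_relation_embeddings.append(gnn_layer(adj, X_num, w))

    # Fuse the per-relation embeddings (simple averaging here) and concatenate
    # them with the original features, as the abstract describes.
    h_multiplex = np.mean(per_relation_embeddings, axis=0)
    X_enhanced = np.concatenate([X_num, h_multiplex], axis=1)
    print(X_enhanced.shape)  # (8, 9): 5 original features + 4 learned dimensions
    # X_enhanced would then be fed to the existing tabular solution (AutoFE in the paper).

In the paper the GNN is trained end to end with the prediction objective; the untrained forward pass above is only meant to show how the multiplex-graph embeddings and the raw features are combined.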
Related papers
- Novel Representation Learning Technique using Graphs for Performance Analytics [0.0]
We propose the novel idea of transforming performance data into graphs to leverage advances in Graph Neural Network (GNN) techniques.
In contrast to other Machine Learning application domains, such as social networks, the graph is not given; instead, we need to build it.
We evaluate the effectiveness of the embeddings generated by the GNNs by how well they enable even a simple feed-forward neural network to perform on regression tasks.
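Since the summary only says the graph must be built rather than given, here is a small, hypothetical sketch of one common construction, a k-nearest-neighbour graph over raw feature vectors; the paper's actual graph definition for performance data may differ, and the value of k and the distance metric are assumptions.

    import numpy as np

    def knn_graph(X, k=3):
        """Row-normalised kNN adjacency built from raw feature vectors."""
        dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)              # exclude self when picking neighbours
        neighbours = np.argsort(dists, axis=1)[:, :k]
        adj = np.zeros_like(dists)
        rows = np.repeat(np.arange(len(X)), k)
        adj[rows, neighbours.ravel()] = 1.0
        adj = np.maximum(adj, adj.T)                 # symmetrise
        adj += np.eye(len(X))                        # self-loops
        return adj / adj.sum(axis=1, keepdims=True)

    X = np.random.default_rng(1).normal(size=(10, 6))    # stand-in for performance samples
    A = knn_graph(X, k=3)
    H = np.maximum(A @ X, 0.0)   # one untrained aggregation pass as a GNN placeholder
    # H would be the graph-aware embedding handed to a small feed-forward
    # regressor, which is how the summary says embedding quality is judged.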
arXiv Detail & Related papers (2024-01-19T16:34:37Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates the results of six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- TabGSL: Graph Structure Learning for Tabular Data Prediction [10.66048003460524]
We present a novel solution, Tabular Graph Structure Learning (TabGSL), to enhance tabular data prediction.
Experiments conducted on 30 benchmark datasets demonstrate that TabGSL markedly outperforms both tree-based models and recent deep learning-based models.
arXiv Detail & Related papers (2023-05-25T08:33:48Z)
- GCondNet: A Novel Method for Improving Neural Networks on Small High-Dimensional Tabular Data [14.124731264553889]
We propose GCondNet to enhance neural networks by leveraging implicit structures present in data.
GCondNet exploits the data's high-dimensionality, and thus improves the performance of an underlying predictor network.
We demonstrate GCondNet's effectiveness on 12 real-world datasets, where it outperforms 14 standard and state-of-the-art methods.
arXiv Detail & Related papers (2022-11-11T16:13:34Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
However, the models that perform best on such feature types in standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
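As a rough illustration of fusing graph-aware propagation with an arbitrary IID model, the sketch below smooths base-model predictions over the graph and treats the result as a stacking feature; this is a generic propagation scheme, not the paper's specific framework, and the mixing weight and iteration count are assumptions.

    import numpy as np

    def propagate_predictions(adj, base_preds, alpha=0.7, n_iters=10):
        """Iteratively mix each node's prediction with its neighbours' average."""
        adj = adj / adj.sum(axis=1, keepdims=True)        # row-normalise
        p = base_preds.copy()
        for _ in range(n_iters):
            p = alpha * (adj @ p) + (1.0 - alpha) * base_preds
        return p

    rng = np.random.default_rng(2)
    n = 12
    adj = (rng.random((n, n)) < 0.3).astype(float)
    adj = np.maximum(adj, adj.T) + np.eye(n)      # undirected toy graph + self-loops
    base_preds = rng.random((n, 1))               # e.g. out-of-fold predictions from a boosted-tree model
    graph_feature = propagate_predictions(adj, base_preds)
    # In a stacking setup, graph_feature would be appended to the node features
    # before training a second-stage model.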
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Bag Graph: Multiple Instance Learning using Bayesian Graph Neural Networks [22.07812381907525]
Multiple Instance Learning (MIL) is a weakly supervised learning problem where the aim is to assign labels to sets or bags of instances.
Recent work has shown promising results for neural network models in the MIL setting.
We consider modelling the interactions between bags using a graph and employ Graph Neural Networks (GNNs) to facilitate end-to-end learning.
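A minimal sketch of the bag-as-node idea, under assumed choices (mean pooling over instances, a fully connected bag graph, one untrained aggregation step); the paper's Bayesian GNN and its bag-graph construction are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy MIL data: each bag is a variable-sized set of instance feature vectors.
    bags = [rng.normal(size=(rng.integers(2, 6), 4)) for _ in range(6)]

    # Summarise each bag (mean pooling; attention pooling is another common choice).
    bag_repr = np.stack([b.mean(axis=0) for b in bags])

    # A bag-level graph; how bags relate is application-specific, so a fully
    # connected graph is assumed here purely for illustration.
    n = len(bags)
    adj = np.ones((n, n)) / n

    # One message-passing step mixes information between related bags before
    # bag-level labels would be predicted.
    bag_repr_gnn = np.maximum(adj @ bag_repr, 0.0)
    print(bag_repr_gnn.shape)  # (6, 4)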
arXiv Detail & Related papers (2022-02-22T19:16:44Z)
- CopulaGNN: Towards Integrating Representational and Correlational Roles of Graphs in Graph Neural Networks [23.115288017590093]
We investigate how Graph Neural Network (GNN) models can effectively leverage both the representational and the correlational information carried by a graph.
The proposed Copula Graph Neural Network (CopulaGNN) can take a wide range of GNN models as base models.
arXiv Detail & Related papers (2020-10-05T15:20:04Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
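For reference, a sketch of the denoising connection (my paraphrase of the standard derivation under a symmetric normalised Laplacian L = I - Ã, not text from the paper):

    \[
      F^{\star} \;=\; \arg\min_{F}\; \lVert F - X \rVert_F^2 \;+\; c\,\operatorname{tr}\!\big(F^{\top} L F\big)
      \;=\; (I + cL)^{-1} X .
    \]
    % One gradient-descent step from F = X with step size 1/2 gives
    % F \leftarrow X - cLX = \big((1-c)I + c\tilde{A}\big)X,
    % which for c = 1 reduces to \tilde{A}X, a GCN-style neighbourhood aggregation.

This is only meant to make the claim concrete; the exact objective and the ADA-UGNN variant are defined in the paper itself.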
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph-based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) for Recommender Systems (RS).
We show that removing the non-linearities enhances recommendation performance, consistent with the theory behind simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
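The sketch below illustrates the general linear-propagation-with-residual-read-out idea for collaborative filtering (in the spirit of the paper, but with assumed normalisation, depth, and read-out; it is not the authors' exact architecture).

    import numpy as np

    rng = np.random.default_rng(4)
    n_users, n_items, d, K = 5, 7, 8, 3

    # Bipartite user-item interactions and a symmetrically normalised
    # propagation matrix over the joint user+item node set.
    R = (rng.random((n_users, n_items)) < 0.3).astype(float)
    A = np.block([[np.zeros((n_users, n_users)), R],
                  [R.T, np.zeros((n_items, n_items))]])
    deg = A.sum(axis=1) + 1.0                       # +1 avoids division by zero
    A_hat = A / np.sqrt(deg[:, None] * deg[None, :])

    E = rng.normal(scale=0.1, size=(n_users + n_items, d))  # layer-0 embeddings
    layers = [E]
    for _ in range(K):
        E = A_hat @ E                               # linear propagation: no non-linearity
        layers.append(E)

    # Residual-style read-out: keep every layer's signal rather than only the last.
    E_final = np.concatenate(layers, axis=1)
    users, items = E_final[:n_users], E_final[n_users:]
    scores = users @ items.T                        # predicted user-item affinities
    print(scores.shape)  # (5, 7)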
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.