Graph Neural Network contextual embedding for Deep Learning on Tabular Data
- URL: http://arxiv.org/abs/2303.06455v2
- Date: Tue, 4 Jul 2023 15:33:45 GMT
- Title: Graph Neural Network contextual embedding for Deep Learning on Tabular Data
- Authors: Mario Villaizán-Vallelado, Matteo Salvatori, Belén Carro Martinez, Antonio Javier Sanchez Esguevillas
- Abstract summary: Deep Learning (DL) has constituted a major breakthrough for AI in fields related to human skills like natural language processing.
This paper presents a novel DL model using a Graph Neural Network (GNN), more specifically an Interaction Network (IN).
Its results outperform those of a recently published survey providing a DL benchmark on five public datasets, while also achieving competitive results compared to boosted-tree solutions.
- Score: 0.45880283710344055
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: All industries are trying to leverage Artificial Intelligence (AI)
based on their existing big data, which is available in so-called tabular form,
where each record is composed of a number of heterogeneous continuous and
categorical columns, also known as features. Deep Learning (DL) has constituted
a major breakthrough for AI in fields related to human skills like natural
language processing, but its applicability to tabular data has been more
challenging. More classical Machine Learning (ML) models like tree-based
ensembles usually perform better. This paper presents a novel DL model that
uses a Graph Neural Network (GNN), more specifically an Interaction Network
(IN), for contextual embedding and for modelling interactions among tabular
features. Its results outperform those of a recently published survey providing
a DL benchmark on five public datasets, while also achieving competitive
results compared to boosted-tree solutions.
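The authors' implementation is not reproduced here, but the core idea, refining each feature's embedding by message passing over a fully connected graph whose nodes are the table's features, can be illustrated with a minimal sketch. Everything below (class and layer names, dimensions, the sum aggregation) is an illustrative assumption, not the paper's code:

```python
# Minimal Interaction-Network-style contextual embedding sketch (assumed names).
import torch
import torch.nn as nn

class InteractionNetworkEmbedding(nn.Module):
    """Refines per-feature embeddings via pairwise feature interactions."""

    def __init__(self, dim: int):
        super().__init__()
        # Relational model: maps a (sender, receiver) pair to an "effect" vector.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        # Object model: updates each node from its embedding plus aggregated effects.
        self.node_mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n_features, dim), one node per tabular column.
        b, n, d = h.shape
        senders = h.unsqueeze(2).expand(b, n, n, d)    # node i, repeated over j
        receivers = h.unsqueeze(1).expand(b, n, n, d)  # node j, repeated over i
        # Effect of every ordered pair (i -> j); self-pairs kept for simplicity.
        effects = self.edge_mlp(torch.cat([senders, receivers], dim=-1))
        agg = effects.sum(dim=1)                       # sum effects arriving at j
        return self.node_mlp(torch.cat([h, agg], dim=-1))

# Usage: 32 rows, 8 features, 16-dim embeddings -> contextualized (32, 8, 16).
emb = InteractionNetworkEmbedding(dim=16)
out = emb(torch.randn(32, 8, 16))
```

The point of the design is that every feature's embedding is updated in the context of all other features of the same record, which is what "contextual embedding" refers to in the abstract.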
Related papers
- Escaping the Forest: Sparse Interpretable Neural Networks for Tabular Data [0.0]
We show that our models, Sparse TABular NET (sTAB-Net) with attention mechanisms, are more effective than tree-based models.
They achieve better performance than post-hoc methods like SHAP.
arXiv Detail & Related papers (2024-10-23T10:50:07Z)
- TabKANet: Tabular Data Modeling with Kolmogorov-Arnold Network and Transformer [12.237450884462888]
TabKANet is a model for learning from numerical content.
It has superior performance compared to Neural Networks (NNs).
Our code is publicly available on GitHub.
arXiv Detail & Related papers (2024-09-13T13:14:54Z)
- Making Pre-trained Language Models Great on Tabular Prediction [50.70574370855663]
The transferability of deep neural networks (DNNs) has driven significant progress in image and language processing.
We present TP-BERTa, a specifically pre-trained LM for tabular data prediction.
A novel relative magnitude tokenization converts scalar numerical feature values to finely discrete, high-dimensional tokens, and an intra-feature attention approach integrates feature values with the corresponding feature names (a generic binning sketch appears after this list).
arXiv Detail & Related papers (2024-03-04T08:38:56Z)
- Relational Deep Learning: Graph Representation Learning on Relational Databases [69.7008152388055]
We introduce an end-to-end representation approach to learn on data laid out across multiple tables.
Message Passing Graph Neural Networks can then automatically learn across the graph to extract representations that leverage all input data.
arXiv Detail & Related papers (2023-12-07T18:51:41Z)
- Homological Convolutional Neural Networks [4.615338063719135]
We propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations.
We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models.
arXiv Detail & Related papers (2023-08-26T08:48:51Z)
- Why do tree-based models still outperform deep learning on tabular data? [0.0]
We show that tree-based models remain state-of-the-art on medium-sized data.
We conduct an empirical investigation into the differing inductive biases of tree-based models and Neural Networks (NNs).
arXiv Detail & Related papers (2022-07-18T08:36:08Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for such data types in most standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph.
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature (a minimal sketch follows this list).
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
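Two of the mechanisms mentioned in the list can be sketched briefly. TP-BERTa's actual relative magnitude tokenization is not reproduced here; the snippet below is only a generic stand-in showing the usual pattern of discretizing a scalar feature into token ids and embedding them (the quantile binning, the bin count, and all names are assumptions):

```python
# Generic value-to-token sketch; NOT TP-BERTa's relative magnitude tokenization.
import numpy as np
import torch
import torch.nn as nn

def fit_bin_edges(values: np.ndarray, n_bins: int = 256) -> np.ndarray:
    # Quantile edges so each token covers roughly the same number of samples.
    return np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])

def to_token_ids(values: np.ndarray, edges: np.ndarray) -> torch.Tensor:
    # Each scalar becomes the id of the bin it falls into (0 .. n_bins - 1).
    return torch.from_numpy(np.searchsorted(edges, values)).long()

# Usage: fit edges on a training column, then embed new values as tokens.
train_col = np.random.randn(10_000)
edges = fit_bin_edges(train_col)
token_emb = nn.Embedding(256, 64)            # 256 tokens, 64-dim embeddings
vectors = token_emb(to_token_ids(np.random.randn(32), edges))  # -> (32, 64)
```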
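Similarly, a Neural Additive Model is simple enough to capture in a few lines: one small subnetwork per input feature, with the prediction being the sum of their scalar outputs. This is a minimal sketch under our own naming, not the paper's released implementation:

```python
# Minimal Neural Additive Model sketch (assumed names and sizes).
import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        # One independent subnetwork per feature; its scalar output is that
        # feature's contribution, which keeps the model interpretable.
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features); prediction = bias + sum of per-feature shapes.
        parts = [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(parts, dim=1).sum(dim=1, keepdim=True) + self.bias

# Usage: a 10-feature regressor; each subnetwork's shape can be plotted alone.
model = NeuralAdditiveModel(n_features=10)
pred = model(torch.randn(4, 10))  # -> (4, 1)
```

Because each subnetwork sees only its own feature, its learned shape function can be inspected directly, which is the source of the intelligibility claim.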
This list is automatically generated from the titles and abstracts of the papers on this site.