Graph-based Tabular Deep Learning Should Learn Feature Interactions, Not Just Make Predictions
- URL: http://arxiv.org/abs/2510.04543v1
- Date: Mon, 06 Oct 2025 07:16:42 GMT
- Title: Graph-based Tabular Deep Learning Should Learn Feature Interactions, Not Just Make Predictions
- Authors: Elias Dubbeldam, Reza Mohammadi, Marit Schoonhoven, S. Ilker Birbil
- Abstract summary: This paper argues that GTDL should move beyond prediction-centric objectives and prioritize the explicit learning and evaluation of feature interactions. Using synthetic datasets with known ground-truth graph structures, we show that existing GTDL methods fail to recover meaningful feature interactions. We call for a shift toward structure-aware modeling as a foundation for building GTDL systems that are interpretable, trustworthy, and grounded in domain understanding.
- Score: 2.312692134587988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite recent progress, deep learning methods for tabular data still struggle to compete with traditional tree-based models. A key challenge lies in modeling complex, dataset-specific feature interactions that are central to tabular data. Graph-based tabular deep learning (GTDL) methods aim to address this by representing features and their interactions as graphs. However, existing methods predominantly optimize predictive accuracy, neglecting accurate modeling of the graph structure. This position paper argues that GTDL should move beyond prediction-centric objectives and prioritize the explicit learning and evaluation of feature interactions. Using synthetic datasets with known ground-truth graph structures, we show that existing GTDL methods fail to recover meaningful feature interactions. Moreover, enforcing the true interaction structure improves predictive performance. This highlights the need for GTDL methods to prioritize quantitative evaluation and accurate structural learning. We call for a shift toward structure-aware modeling as a foundation for building GTDL systems that are not only accurate but also interpretable, trustworthy, and grounded in domain understanding.
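The abstract's evaluation protocol — generate synthetic data from a known feature-interaction graph, then score how well a method's learned graph recovers it — can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the toy graph generator and the edge-level precision/recall/F1 metric are assumptions about how such a structural evaluation might look.

```python
# Hedged sketch: scoring recovery of a known ground-truth feature-interaction
# graph. All function names and the toy generator are illustrative assumptions.
import numpy as np

def make_ground_truth(n_features, n_edges, seed=0):
    """Random undirected interaction graph as a symmetric 0/1 adjacency matrix."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n_features, n_features), dtype=int)
    pairs = [(i, j) for i in range(n_features) for j in range(i + 1, n_features)]
    for idx in rng.choice(len(pairs), size=n_edges, replace=False):
        i, j = pairs[idx]
        adj[i, j] = adj[j, i] = 1
    return adj

def edge_recovery_f1(true_adj, learned_scores, threshold=0.5):
    """Precision/recall/F1 of thresholded learned edges vs. the ground truth.
    Only the upper triangle is scored, since the graph is undirected."""
    iu = np.triu_indices_from(true_adj, k=1)
    y_true = true_adj[iu]
    y_pred = (learned_scores[iu] >= threshold).astype(int)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

true_adj = make_ground_truth(n_features=6, n_edges=5)
# A learner that recovers exactly the true edges scores perfectly:
print(edge_recovery_f1(true_adj, true_adj.astype(float)))  # (1.0, 1.0, 1.0)
```

Under this kind of metric, a GTDL method optimized only for predictive accuracy can score near zero on edge recovery while still predicting well — precisely the gap the paper argues should be measured.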
Related papers
- GILT: An LLM-Free, Tuning-Free Graph Foundational Model for In-Context Learning [50.40400074353263]
Graph Neural Networks (GNNs) are powerful tools for processing relational data but often struggle to generalize to unseen graphs. We introduce the Graph In-context Learning Transformer (GILT), a framework built on an LLM-free and tuning-free architecture.
arXiv Detail & Related papers (2025-10-06T08:09:15Z) - From Features to Structure: Task-Aware Graph Construction for Relational and Tabular Learning with GNNs [6.0757501646966965]
We introduce auGraph, a unified framework for task-aware graph augmentation. auGraph enhances base graph structures by selectively promoting attributes into nodes. It preserves the original data schema while injecting task-relevant structural signal.
arXiv Detail & Related papers (2025-06-02T20:42:53Z) - An Automatic Graph Construction Framework based on Large Language Models for Recommendation [49.51799417575638]
We introduce AutoGraph, an automatic graph construction framework based on large language models for recommendation. LLMs infer the user preference and item knowledge, which is encoded as semantic vectors. Latent factors are incorporated as extra nodes to link the user/item nodes, resulting in a graph with in-depth global-view semantics.
arXiv Detail & Related papers (2024-12-24T07:51:29Z) - Verbalized Graph Representation Learning: A Fully Interpretable Graph Model Based on Large Language Models Throughout the Entire Process [8.820909397907274]
We propose a verbalized graph representation learning (VGRL) method which is fully interpretable.
In contrast to traditional graph machine learning models, VGRL constrains this parameter space to be text description.
We conduct several studies to empirically evaluate the effectiveness of VGRL.
arXiv Detail & Related papers (2024-10-02T12:07:47Z) - How Data Inter-connectivity Shapes LLMs Unlearning: A Structural Unlearning Perspective [29.924482732745954]
Existing approaches assume data points to-be-forgotten are independent, ignoring their inter-connectivity. We propose PISTOL, a method for compiling structural datasets.
arXiv Detail & Related papers (2024-06-24T17:22:36Z) - G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning [8.02547453169677]
We propose a novel Graph-based Structure-Aware Prompt Learning Model for commonsense reasoning, named G-SAP.
In particular, an evidence graph is constructed by integrating multiple knowledge sources, i.e., ConceptNet, Wikipedia, and the Cambridge Dictionary.
The results reveal a significant advancement over existing models, in particular a 6.12% improvement over the SoTA LM+GNNs model on the OpenbookQA dataset.
arXiv Detail & Related papers (2024-05-09T08:28:12Z) - Revisiting Link Prediction: A Data Perspective [59.296773787387224]
Link prediction, a fundamental task on graphs, has proven indispensable in various applications, e.g., friend recommendation, protein analysis, and drug interaction prediction.
Evidence in existing literature underscores the absence of a universally best algorithm suitable for all datasets.
We recognize three fundamental factors critical to link prediction: local structural proximity, global structural proximity, and feature proximity.
arXiv Detail & Related papers (2023-10-01T21:09:59Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Relate and Predict: Structure-Aware Prediction with Jointly Optimized Neural DAG [13.636680313054631]
We propose a deep neural network framework, dGAP, to learn neural dependency Graph and optimize structure-Aware target Prediction.
dGAP trains towards a structure self-supervision loss and a target prediction loss jointly.
We empirically evaluate dGAP on multiple simulated and real datasets.
arXiv Detail & Related papers (2021-03-03T13:55:12Z) - Deep Reinforcement Learning of Graph Matching [63.469961545293756]
Graph matching (GM) under node and pairwise constraints has been a building block in areas from optimization to computer vision.
We present a reinforcement learning solver for GM, i.e., RGM, that seeks the node correspondence between pairwise graphs.
Our method differs from previous deep graph matching models, which focus on front-end feature extraction and affinity function learning.
arXiv Detail & Related papers (2020-12-16T13:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.