From GNNs to Symbolic Surrogates via Kolmogorov-Arnold Networks for Delay Prediction
- URL: http://arxiv.org/abs/2512.20885v1
- Date: Wed, 24 Dec 2025 02:05:46 GMT
- Title: From GNNs to Symbolic Surrogates via Kolmogorov-Arnold Networks for Delay Prediction
- Authors: Sami Marouani, Kamal Singh, Baptiste Jeudy, Amaury Habrard
- Abstract summary: First, we implement a heterogeneous GNN with attention-based message passing, establishing a strong neural baseline. Second, we propose FlowKANet, in which Kolmogorov-Arnold Networks replace standard layers, reducing trainable parameters. Third, we distill the model into symbolic surrogate models using block-wise regression, producing closed-form equations that eliminate trainable weights.
- Score: 3.571534406261392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate prediction of flow delay is essential for optimizing and managing modern communication networks. We investigate three levels of modeling for this task. First, we implement a heterogeneous GNN with attention-based message passing, establishing a strong neural baseline. Second, we propose FlowKANet, in which Kolmogorov-Arnold Networks replace standard MLP layers, reducing trainable parameters while maintaining competitive predictive performance. FlowKANet integrates KAMP-Attn (Kolmogorov-Arnold Message Passing with Attention), embedding KAN operators directly into message-passing and attention computation. Finally, we distill the model into symbolic surrogate models using block-wise regression, producing closed-form equations that eliminate trainable weights while preserving graph-structured dependencies. The results show that KAN layers provide a favorable trade-off between efficiency and accuracy, and that symbolic surrogates highlight the potential for lightweight deployment and enhanced transparency.
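The abstract sketches three stages but includes no code. Below is a minimal, hedged sketch of the second stage: Kolmogorov-Arnold layers standing in for MLPs inside attention-based message passing, in the spirit of KAMP-Attn. Everything here is an illustrative assumption rather than the authors' implementation: the class names (KANLayer, KANAttentionMP) are hypothetical, and the univariate functions use a Gaussian radial-basis expansion where the paper may use B-splines or another parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KANLayer(nn.Module):
    """Kolmogorov-Arnold layer: each (input, output) pair gets its own
    learnable univariate function, here a Gaussian radial-basis expansion,
    and per-input contributions are summed (no fixed dense activation)."""
    def __init__(self, in_dim, out_dim, num_basis=8, grid=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(grid[0], grid[1], num_basis))
        self.width = (grid[1] - grid[0]) / num_basis
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, num_basis))

    def forward(self, x):                              # x: (N, in_dim)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        return torch.einsum("bik,iok->bo", phi, self.coef)

class KANAttentionMP(nn.Module):
    """Attention-based message passing with KAN operators in both the
    message function and the attention score."""
    def __init__(self, dim):
        super().__init__()
        self.msg = KANLayer(2 * dim, dim)              # message from (src, dst) pair
        self.att = KANLayer(2 * dim, 1)                # unnormalized edge score

    def forward(self, h, edge_index):                  # h: (N, dim); edge_index: (2, E)
        src, dst = edge_index
        pair = torch.cat([h[src], h[dst]], dim=-1)     # (E, 2*dim)
        m = self.msg(pair)                             # (E, dim)
        score = self.att(pair).squeeze(-1)             # (E,)
        alpha = torch.zeros_like(score)
        for node in dst.unique():                      # softmax over each node's in-edges
            mask = dst == node
            alpha[mask] = F.softmax(score[mask], dim=0)
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * m)
        return out

# smoke test: 4 nodes, 5 directed edges, 8-dim features
h = torch.randn(4, 8)
edges = torch.tensor([[0, 1, 2, 3, 0], [1, 2, 3, 0, 2]])
print(KANAttentionMP(8)(h, edges).shape)               # torch.Size([4, 8])
```

For the third stage, block-wise symbolic distillation, the idea is to fit each trained block's input-output behavior with a sparse combination of closed-form terms. A minimal sketch, assuming a scalar-valued block, a hand-picked basis library, and a plain least-squares fit with coefficient thresholding (the paper's actual regression procedure may differ):

```python
import numpy as np

# Hypothetical library of candidate closed-form terms
BASIS = {
    "x":          lambda x: x,
    "x^2":        lambda x: x ** 2,
    "exp(-x)":    lambda x: np.exp(-x),
    "log(1+|x|)": lambda x: np.log1p(np.abs(x)),
}

def distill_block(x, y, tol=1e-3):
    """Fit one block's scalar input-output map as a sparse linear
    combination of basis terms via least squares, then drop terms
    whose coefficients fall below `tol`."""
    A = np.stack([f(x) for f in BASIS.values()], axis=1)   # design matrix (n, |BASIS|)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    terms = [(name, c) for name, c in zip(BASIS, coef) if abs(c) > tol]
    return " ".join(f"{c:+.3g}*{name}" for name, c in terms)

# e.g. recover a closed form from samples of a hypothetical trained block
x = np.linspace(-1.0, 1.0, 200)
print(distill_block(x, 2 * x - 0.5 * x ** 2))              # "+2*x -0.5*x^2"
```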
Related papers
- Graph Signal Generative Diffusion Models [74.75869068073577]
We introduce U-shaped encoder-decoder graph neural networks (U-GNNs) for graph signal generation using denoising diffusion processes. The architecture learns node features at different resolutions with skip connections between the encoder and decoder paths. We demonstrate the effectiveness of the diffusion model in probabilistic forecasting of stock prices.
arXiv Detail & Related papers (2025-09-21T21:57:27Z) - Resource-Aware Neural Network Pruning Using Graph-based Reinforcement Learning [0.8890833546984916]
This paper presents a novel approach to neural network pruning by integrating a graph-based observation space into an AutoML framework. Our framework transforms the pruning process by introducing a graph representation of the target neural network. For the action space, we transition from continuous pruning ratios to fine-grained binary action spaces.
arXiv Detail & Related papers (2025-09-04T15:05:05Z) - ReDiSC: A Reparameterized Masked Diffusion Model for Scalable Node Classification with Structured Predictions [64.17845687013434]
We propose ReDiSC, a structured diffusion model for structured node classification. We show that ReDiSC achieves superior or highly competitive performance compared to state-of-the-art GNN, label propagation, and diffusion-based baselines. Notably, ReDiSC scales effectively to large-scale datasets on which previous structured diffusion methods fail due to computational constraints.
arXiv Detail & Related papers (2025-07-19T04:46:53Z) - Channel Fingerprint Construction for Massive MIMO: A Deep Conditional Generative Approach [65.47969413708344]
We introduce the concept of CF twins and design a conditional generative diffusion model (CGDM). We employ a variational inference technique to derive the evidence lower bound (ELBO) for the log-marginal distribution of the observed fine-grained CF conditioned on the coarse-grained CF. We show that the proposed approach exhibits significant improvement in reconstruction performance compared to the baselines.
arXiv Detail & Related papers (2025-05-12T01:36:06Z) - Lattice-Based Pruning in Recurrent Neural Networks via Poset Modeling [0.0]
Recurrent neural networks (RNNs) are central to sequence modeling tasks, yet their high computational complexity poses challenges for scalability and real-time deployment. We introduce a novel framework that models RNNs as partially ordered sets (posets) and constructs corresponding dependency lattices. By identifying meet-irreducible neurons, our lattice-based pruning algorithm selectively retains critical connections while eliminating redundant ones.
arXiv Detail & Related papers (2025-02-23T10:11:38Z) - Imitation Learning of MPC with Neural Networks: Error Guarantees and Sparsification [5.260346080244568]
We present a framework for bounding the approximation error in imitation model predictive controllers utilizing neural networks. We discuss how this method can be used to design a stable neural network controller with performance guarantees.
arXiv Detail & Related papers (2025-01-07T10:18:37Z) - Want to train KANS at scale? Now UKAN! [2.9666099400348607]
We present Unbounded Kolmogorov-Arnold Networks (UKANs), a method that removes the need for bounded grids in traditional Kolmogorov-Arnold Networks (KANs). UKANs couple multilayer perceptrons with KANs by feeding the positional encoding of grid groups into the coefficient generator (CG) model, enabling function approximation on unbounded domains without requiring data normalization (a minimal sketch of this coupling appears after this list).
arXiv Detail & Related papers (2024-08-20T21:20:38Z) - Interpretable A-posteriori Error Indication for Graph Neural Network Surrogate Models [0.0]
This work introduces an interpretability enhancement procedure for graph neural networks (GNNs).
The end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task.
The interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error.
arXiv Detail & Related papers (2023-11-13T18:37:07Z) - Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z) - Graph-based Algorithm Unfolding for Energy-aware Power Allocation in Wireless Networks [27.600081147252155]
We develop a novel graph-based trainable framework to maximize energy efficiency in wireless communication networks. We show the permutation equivariance of the proposed method, which is a desirable property for models of wireless network data.
Results demonstrate its generalizability across different network topologies.
arXiv Detail & Related papers (2022-01-27T20:23:24Z) - Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations [52.493315075385325]
We show that a family of regularizers, including weight decay, is ineffective at penalizing the intrinsic norms of weights for networks with homogeneous activation functions.
We propose an improved regularizer that is invariant to weight scale shifting and thus effectively constrains the intrinsic norm of a neural network.
arXiv Detail & Related papers (2020-08-07T02:55:28Z) - Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)
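The UKAN entry above describes coupling a multilayer perceptron with a KAN by feeding positional encodings of grid groups into a coefficient generator. A minimal sketch of that coupling under stated assumptions (sinusoidal encodings, Gaussian local bases, hypothetical names; the original UKAN parameterization may differ):

```python
import torch
import torch.nn as nn

def positional_encoding(idx, dim=16):
    """Sinusoidal encoding of (unbounded) grid-group indices."""
    freqs = torch.arange(dim // 2, dtype=torch.float32)
    angles = idx.unsqueeze(-1) / (100.0 ** (freqs / (dim // 2)))
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

class UnboundedKANUnit(nn.Module):
    """Sketch of the UKAN idea: rather than storing univariate-function
    coefficients on a fixed bounded grid, a small MLP (the 'coefficient
    generator') produces them for whichever grid group an input falls in."""
    def __init__(self, pe_dim=16, num_basis=8, grid_step=1.0):
        super().__init__()
        self.grid_step = grid_step
        self.register_buffer("centers", torch.linspace(0.0, 1.0, num_basis))
        self.width = 1.0 / num_basis
        self.coef_gen = nn.Sequential(
            nn.Linear(pe_dim, 32), nn.SiLU(), nn.Linear(32, num_basis))

    def forward(self, x):                                  # x: (batch,)
        group = torch.floor(x / self.grid_step)            # unbounded group index
        coef = self.coef_gen(positional_encoding(group))   # (batch, num_basis)
        t = (x / self.grid_step - group).unsqueeze(-1)     # position in [0, 1)
        phi = torch.exp(-((t - self.centers) / self.width) ** 2)
        return (coef * phi).sum(-1)                        # (batch,)

# works far outside any fixed grid range, no normalization needed
print(UnboundedKANUnit()(torch.tensor([-250.0, 0.3, 1e4])).shape)
```

The point of the construction: because coefficients are generated from an encoding of the group index rather than stored per grid cell, the input domain does not need to be bounded and the data does not need to be normalized.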