Interpretable Reinforcement Learning for Load Balancing using Kolmogorov-Arnold Networks
- URL: http://arxiv.org/abs/2505.14459v1
- Date: Tue, 20 May 2025 14:56:31 GMT
- Title: Interpretable Reinforcement Learning for Load Balancing using Kolmogorov-Arnold Networks
- Authors: Kamal Singh, Sami Marouani, Ahmad Al Sheikh, Pham Tran Anh Quang, Amaury Habrard
- Abstract summary: Reinforcement learning (RL) has been increasingly applied to network control problems, such as load balancing. Existing RL approaches often suffer from a lack of interpretability and difficulty in extracting controller equations. We propose the use of Kolmogorov-Arnold Networks (KAN) for interpretable RL in network control.
- Score: 6.373998211961586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) has been increasingly applied to network control problems, such as load balancing. However, existing RL approaches often suffer from a lack of interpretability and difficulty in extracting controller equations. In this paper, we propose the use of Kolmogorov-Arnold Networks (KAN) for interpretable RL in network control. We employ a PPO agent with a 1-layer KAN actor and an MLP critic network to learn load balancing policies that maximize throughput utility while minimizing loss and delay. Our approach allows us to extract controller equations from the learned neural networks, providing insight into the decision-making process. We evaluate our approach using different reward functions, demonstrating its effectiveness in improving network performance while providing interpretable policies.
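As a concrete illustration of the architecture the abstract describes, below is a minimal PyTorch sketch, not the authors' code: a 1-layer KAN actor paired with an MLP critic. The radial-basis parameterization of the univariate functions, the layer sizes, and the reward weighting in the comments are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """One Kolmogorov-Arnold layer: each edge (i, j) applies a learnable
    univariate function phi_ij to input x_i, and outputs sum over i.
    Here each phi_ij is a linear combination of Gaussian radial basis
    functions, so the learned 1-D functions are easy to plot and read off."""
    def __init__(self, in_dim, out_dim, n_basis=8, x_min=-1.0, x_max=1.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, n_basis))
        self.inv_width = n_basis / (x_max - x_min)
        # one coefficient per (input, output, basis function)
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, n_basis))

    def forward(self, x):  # x: (batch, in_dim)
        # RBF features per input dimension: (batch, in_dim, n_basis)
        z = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.inv_width) ** 2)
        # out_j = sum_i phi_ij(x_i): (batch, out_dim)
        return torch.einsum("bik,iok->bo", z, self.coef)

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_paths):
        super().__init__()
        self.actor = KANLayer(obs_dim, n_paths)   # 1-layer KAN actor
        self.critic = nn.Sequential(              # plain MLP critic
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, obs):
        # per-path scores become a load-balancing distribution
        dist = torch.distributions.Categorical(logits=self.actor(obs))
        return dist, self.critic(obs)

# Toy forward pass; a PPO loop would use dist.log_prob(action) and the value.
# A reward in the spirit of the abstract might be
#   r = U(throughput) - a * loss - b * delay   (weights a, b are assumptions).
model = ActorCritic(obs_dim=6, n_paths=3)
dist, value = model(torch.randn(2, 6))
action = dist.sample()
```

Because the actor is a single KAN layer, each learned phi_ij can be plotted or fitted symbolically after training, which is one plausible route to the controller-equation extraction the abstract mentions.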
Related papers
- Online Training and Pruning of Deep Reinforcement Learning Networks [0.0]
Scaling the deep neural networks (NNs) of reinforcement learning (RL) algorithms has been shown to enhance performance when feature extraction networks are used. We propose an approach to integrate simultaneous training and pruning within advanced RL methods.
arXiv Detail & Related papers (2025-07-16T07:17:41Z)
- Learning to Control Dynamical Agents via Spiking Neural Networks and Metropolis-Hastings Sampling [1.0533738606966752]
Spiking Neural Networks (SNNs) offer biologically inspired, energy-efficient alternatives to traditional Deep Neural Networks (DNNs) for real-time control systems. We introduce what is, to our knowledge, the first framework that employs Metropolis-Hastings sampling, a Bayesian inference technique, to train SNNs for dynamical agent control in RL environments.
arXiv Detail & Related papers (2025-07-13T08:50:00Z)
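For intuition about the entry above, here is a minimal, generic Metropolis-Hastings loop over policy parameters. It is a sketch, not that paper's framework, and `evaluate_return` is a hypothetical stand-in for rolling out an SNN policy in an RL environment.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_return(theta):
    # hypothetical placeholder: in RL this would be the average episode
    # return of the (SNN) policy parameterized by theta
    return -np.sum((theta - 1.0) ** 2)

theta = np.zeros(4)
log_p = evaluate_return(theta)            # treat return as a log-density
for _ in range(5000):
    proposal = theta + 0.1 * rng.standard_normal(theta.shape)
    log_p_new = evaluate_return(proposal)
    # accept with probability min(1, exp(log_p_new - log_p))
    if np.log(rng.random()) < log_p_new - log_p:
        theta, log_p = proposal, log_p_new
print(theta)  # parameters drift toward high-return regions
```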
- Invariant Control Strategies for Active Flow Control using Graph Neural Networks [0.0]
We introduce graph neural networks (GNNs) as a promising architecture for Reinforcement Learning (RL)-based flow control. GNNs process unstructured, three-dimensional flow data, preserving spatial relationships without the constraints of a Cartesian grid. We show that GNN-based control policies achieve comparable performance to existing methods while benefiting from improved generalization properties.
arXiv Detail & Related papers (2025-03-28T09:33:40Z)
- Joint Admission Control and Resource Allocation of Virtual Network Embedding via Hierarchical Deep Reinforcement Learning [69.00997996453842]
We propose a hierarchical deep reinforcement learning approach, HRL-ACRA, to learn a joint admission control and resource allocation policy for virtual network embedding.
We show that HRL-ACRA outperforms state-of-the-art baselines in terms of both the acceptance ratio and long-term average revenue.
arXiv Detail & Related papers (2024-06-25T07:42:30Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically identical agents. Our goal is to minimize time-average estimation error and/or age of information with decentralized, scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- A Neuromorphic Architecture for Reinforcement Learning from Real-Valued Observations [0.34410212782758043]
Reinforcement Learning (RL) provides a powerful framework for decision-making in complex environments.
This paper presents a novel Spiking Neural Network (SNN) architecture for solving RL problems with real-valued observations.
arXiv Detail & Related papers (2023-07-06T12:33:34Z)
- Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interactions between the agent and the environment.
We propose a new method to solve this benchmark, using unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Learning Resilient Radio Resource Management Policies with Graph Neural Networks [124.89036526192268]
We formulate a resilient radio resource management problem with per-user minimum-capacity constraints.
We show that we can parameterize the user selection and power control policies using a finite set of parameters.
Thanks to this parameterization, our proposed method achieves a superior tradeoff between the average rate and the 5th-percentile rate.
arXiv Detail & Related papers (2022-03-07T19:40:39Z)
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z)
- RL-QN: A Reinforcement Learning Framework for Optimal Control of Queueing Systems [8.611328447624677]
We consider using model-based reinforcement learning (RL) to learn the optimal control policy for queueing networks.
Traditional approaches in RL, however, cannot handle the unbounded state spaces of the network control problem.
We propose a new algorithm, called Reinforcement Learning for Queueing Networks (RL-QN), which applies model-based RL methods over a finite subset of the state space.
arXiv Detail & Related papers (2020-11-14T22:12:27Z)
- Deep Reinforcement Learning with Population-Coded Spiking Neural Network for Continuous Control [0.0]
We propose a population-coded spiking actor network (PopSAN) trained in conjunction with a deep critic network using deep reinforcement learning (DRL).
We deployed the trained PopSAN on Intel's Loihi neuromorphic chip and benchmarked our method against the mainstream DRL algorithms for continuous control.
Our results support the efficiency of neuromorphic controllers and suggest our hybrid RL as an alternative to deep learning when both energy efficiency and robustness are important.
arXiv Detail & Related papers (2020-10-19T16:20:45Z)
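To make the population-coding idea in the last entry concrete, here is a minimal sketch, not PopSAN itself; the Gaussian tuning-curve shape and neuron count are assumptions for illustration.

```python
import numpy as np

def population_encode(x, n_neurons=10, x_min=-1.0, x_max=1.0):
    """Encode a scalar observation as the activations of a population of
    neurons with evenly spaced Gaussian receptive fields over [x_min, x_max].
    In a spiking actor, these activations would drive spike generation."""
    centers = np.linspace(x_min, x_max, n_neurons)
    width = (x_max - x_min) / n_neurons
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

print(np.round(population_encode(0.3), 3))  # neurons near 0.3 respond most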