An open source Multi-Agent Deep Reinforcement Learning Routing Simulator for satellite networks
- URL: http://arxiv.org/abs/2407.11047v2
- Date: Thu, 28 Nov 2024 08:46:42 GMT
- Title: An open source Multi-Agent Deep Reinforcement Learning Routing Simulator for satellite networks
- Authors: Federico Lozano-Cuadra, Mathias D. Thorsager, Israel Leyva-Mayorga, Beatriz Soret,
- Abstract summary: This paper introduces an open source simulator for packet routing in Low Earth Orbit Satellite Constellations (LSatCs).
The simulator, implemented in Python, supports traditional Dijkstra-based routing as well as more advanced learning solutions.
Results highlight significant improvements in end-to-end (E2E) latency using Reinforcement Learning (RL)-based routing policies.
- Score: 7.635788661450053
- License:
- Abstract: This paper introduces an open source simulator for packet routing in Low Earth Orbit Satellite Constellations (LSatCs) considering the dynamic system uncertainties. The simulator, implemented in Python, supports traditional Dijkstra-based routing as well as more advanced learning solutions, specifically Q-Routing and Multi-Agent Deep Reinforcement Learning (MA-DRL) from our previous work. It uses an event-based approach with the SimPy module to accurately simulate packet creation, routing and queuing, providing real-time tracking of queues and latency. The simulator is highly configurable, allowing adjustments in routing policies, traffic, ground and space layer topologies, communication parameters, and learning hyperparameters. Key features include the ability to visualize system motion and track packet paths. Results highlight significant improvements in end-to-end (E2E) latency using Reinforcement Learning (RL)-based routing policies compared to traditional methods. The source code, the documentation and a Jupyter notebook with post-processing results and analysis are available on GitHub.
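The abstract describes an event-based SimPy model of packet creation, queuing and routing, with Dijkstra-based and Q-Routing policies. As a rough orientation only (this is not the repository's code), the sketch below shows what such an event-based packet/queue model and a Q-Routing-style update can look like in Python; all constants, node names and the toy numbers are assumptions made for illustration.

```python
# Hedged sketch, not the repository's code: a minimal SimPy event-based model of
# packet creation and queuing on a single link, plus a Q-Routing-style update.
# All constants, names and toy values below are assumptions for illustration.
import random
import simpy

LINK_RATE_BPS = 10e6        # assumed inter-satellite link rate
PACKET_BITS = 1500 * 8      # assumed packet size
ARRIVAL_RATE = 400.0        # assumed packets per second
ALPHA = 0.1                 # assumed Q-Routing learning rate


def packet_source(env, queue):
    """Create packets with exponential inter-arrival times (event-based)."""
    pkt_id = 0
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        yield queue.put({"id": pkt_id, "t_created": env.now})
        pkt_id += 1


def link_server(env, queue, latencies):
    """Transmit queued packets one at a time and record their latency."""
    while True:
        pkt = yield queue.get()
        yield env.timeout(PACKET_BITS / LINK_RATE_BPS)  # transmission delay
        latencies.append(env.now - pkt["t_created"])


def q_routing_update(q_table, node, dest, next_hop, observed_delay, q_next_min):
    """Boyan/Littman-style Q-Routing update:
    Q(node, dest, next_hop) += alpha * (observed_delay + q_next_min - Q)."""
    key = (node, dest, next_hop)
    q_table[key] += ALPHA * (observed_delay + q_next_min - q_table[key])


env = simpy.Environment()
queue = simpy.Store(env)          # FIFO queue of in-flight packets
latencies = []
env.process(packet_source(env, queue))
env.process(link_server(env, queue, latencies))
env.run(until=5.0)                # simulate 5 seconds of traffic
print(f"{len(latencies)} packets served, "
      f"mean latency {sum(latencies) / len(latencies):.6f} s")

# Illustrative Q-Routing update for one (node, destination, next-hop) entry.
q_table = {("sat3", "gw1", "sat7"): 0.020}
q_routing_update(q_table, "sat3", "gw1", "sat7",
                 observed_delay=0.012, q_next_min=0.015)
print(q_table)
```

A full constellation model would run one such queue per link and let the selected routing policy (Dijkstra, Q-Routing or MA-DRL) choose each packet's next hop before the queuing step; the repository's documentation and Jupyter notebook describe the actual configuration options.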
Related papers
- Learning Sub-Second Routing Optimization in Computer Networks requires Packet-Level Dynamics [15.018408728324887]
Reinforcement Learning can help to learn network representations that provide routing decisions.
We present PackeRL, the first packet-level Reinforcement Learning environment for routing in generic network topologies.
We also introduce two new algorithms for learning sub-second Routing Optimization.
arXiv Detail & Related papers (2024-10-14T11:03:46Z)
- Intelligent Routing Algorithm over SDN: Reusable Reinforcement Learning Approach [1.799933345199395]
We develop a reusable RL-aware routing algorithm, RLSR-Routing, over SDN.
Our algorithm shows better performance in terms of load balancing than the traditional approaches.
It also has faster convergence than the non-reusable RL approach when finding paths for multiple traffic demands.
arXiv Detail & Related papers (2024-09-23T17:15:24Z)
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z)
- Cross-domain Learning Framework for Tracking Users in RIS-aided Multi-band ISAC Systems with Sparse Labeled Data [55.70071704247794]
Integrated sensing and communications (ISAC) is pivotal for 6G communications and is boosted by the rapid development of reconfigurable intelligent surfaces (RISs).
This paper proposes the X2Track framework, where we model the tracking function by a hierarchical architecture, jointly utilizing multi-modal CSI indicators across multiple bands, and optimize it in a cross-domain manner.
Under X2Track, we design an efficient deep learning algorithm to minimize tracking errors, based on transformer neural networks and adversarial learning techniques.
arXiv Detail & Related papers (2024-05-10T08:04:27Z)
- Actor-Critic Scheduling for Path-Aware Air-to-Ground Multipath Multimedia Delivery [5.01187288554981]
We present a novel scheduler for real-time multimedia delivery in multipath systems based on an Actor-Critic (AC) RL algorithm.
The scheduler acting as an RL agent learns in real-time the optimal policy for path selection, path rate allocation and redundancy estimation for flow protection.
arXiv Detail & Related papers (2022-04-28T08:28:25Z)
- Deep Learning Aided Packet Routing in Aeronautical Ad-Hoc Networks Relying on Real Flight Data: From Single-Objective to Near-Pareto Multi-Objective Optimization [79.96177511319713]
We invoke deep learning (DL) to assist routing in aeronautical ad-hoc networks (AANETs).
A deep neural network (DNN) is conceived for mapping the local geographic information observed by the forwarding node into the information required for determining the optimal next hop.
We extend the DL-aided routing algorithm to a multi-objective scenario, where we aim to simultaneously minimize the delay, maximize the path capacity, and maximize the path lifetime.
arXiv Detail & Related papers (2021-10-28T14:18:22Z)
- Better than the Best: Gradient-based Improper Reinforcement Learning for Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy gradient based reinforcement learning algorithm that produces a scheduler that performs better than the available atomic policies.
arXiv Detail & Related papers (2021-05-01T10:18:34Z)
- Tracking Performance of Online Stochastic Learners [57.14673504239551]
Online algorithms are popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
When a constant step-size is used, these algorithms also have the ability to adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy.
We establish a link between steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random walk models.
arXiv Detail & Related papers (2020-04-04T14:16:27Z)
- Towards Cognitive Routing based on Deep Reinforcement Learning [17.637357380527583]
We propose a definition of cognitive routing and an implementation approach based on Deep Reinforcement Learning (DRL).
To facilitate the research of DRL-based cognitive routing, we introduce a simulator named RL4Net for DRL-based routing algorithm development and simulation.
The simulation results on an example network topology show that the DDPG-based routing algorithm achieves better performance than OSPF and random weight algorithms.
arXiv Detail & Related papers (2020-03-19T03:32:43Z)
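The last entry above compares a DDPG-based routing policy against OSPF and random link weights. Purely as an illustration of that general idea (and not RL4Net's implementation), the sketch below lets a policy emit per-link weights that feed a standard shortest-path computation, next to a hop-count baseline in the OSPF spirit; the toy topology and the untrained linear "actor" are assumptions.

```python
# Hedged illustration, not RL4Net code: a policy outputs one positive weight per
# link, and a standard shortest-path computation routes on those weights; an
# OSPF-style baseline uses fixed unit costs. Topology and parameters are toy
# assumptions for the example.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Assumed 4-node topology.
edges = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D"), ("B", "C")]
G = nx.Graph()
G.add_edges_from(edges)

# Stand-in for a learned actor: a linear map from a state vector (e.g. per-link
# utilization) to positive link weights. A real DDPG actor would be a trained net.
state = rng.random(len(edges))             # illustrative per-link utilization
W = rng.random((len(edges), len(edges)))   # untrained parameters, illustration only
link_weights = np.exp(W @ state)           # strictly positive costs

for (u, v), w in zip(edges, link_weights):
    G[u][v]["rl_cost"] = float(w)
    G[u][v]["ospf_cost"] = 1.0             # hop-count baseline

print("policy path:", nx.shortest_path(G, "A", "D", weight="rl_cost"))
print("OSPF path:  ", nx.shortest_path(G, "A", "D", weight="ospf_cost"))
```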
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.