Dynamic Network Reconfiguration for Entropy Maximization using Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2205.13578v1
- Date: Thu, 26 May 2022 18:44:22 GMT
- Title: Dynamic Network Reconfiguration for Entropy Maximization using Deep
Reinforcement Learning
- Authors: Christoffel Doorman, Victor-Alexandru Darvariu, Stephen Hailes, Mirco
Musolesi
- Abstract summary: A key problem in network theory is how to reconfigure a graph in order to optimize a quantifiable objective.
In this paper, we cast the problem of network rewiring for optimizing a specified structural property as a Markov Decision Process (MDP).
We then propose a general approach based on the Deep Q-Network (DQN) algorithm and graph neural networks (GNNs) that can efficiently learn strategies for rewiring networks.
- Score: 3.012947865628207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key problem in network theory is how to reconfigure a graph in order to
optimize a quantifiable objective. Given the ubiquity of networked systems,
such work has broad practical applications in a variety of situations, ranging
from drug and material design to telecommunications. The large decision space
of possible reconfigurations, however, makes this problem computationally
intensive. In this paper, we cast the problem of network rewiring for
optimizing a specified structural property as a Markov Decision Process (MDP),
in which a decision-maker is given a budget of modifications that are performed
sequentially. We then propose a general approach based on the Deep Q-Network
(DQN) algorithm and graph neural networks (GNNs) that can efficiently learn
strategies for rewiring networks. We then discuss a cybersecurity case study,
i.e., an application to the computer network reconfiguration problem for
intrusion protection. In a typical scenario, an attacker might have a (partial)
map of the system they plan to penetrate; if the network is effectively
"scrambled", they would not be able to navigate it since their prior knowledge
would become obsolete. This can be viewed as an entropy maximization problem,
in which the goal is to increase the surprise of the network. Indeed, entropy
acts as a proxy measurement of the difficulty of navigating the network
topology. We demonstrate the general ability of the proposed method to obtain
better entropy gains than random rewiring on synthetic and real-world graphs
while being computationally inexpensive, as well as being able to generalize to
larger graphs than those seen during training. Simulations of attack scenarios
confirm the effectiveness of the learned rewiring strategies.
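To make the setup above concrete, here is a minimal, hypothetical sketch of the sequential rewiring process in Python. The paper's exact entropy measure, action encoding, and reward are not specified in this summary, so degree-distribution Shannon entropy stands in as the objective, a random-rewiring baseline stands in for the learned policy, and the names `RewiringMDP` and `degree_entropy` are illustrative only.

```python
# Hypothetical sketch only: the paper's exact entropy measure, action space,
# and reward are not given here. Degree-distribution Shannon entropy is used
# as a stand-in objective, and random rewiring as the policy.
import math
import random
from collections import Counter

import networkx as nx


def degree_entropy(graph: nx.Graph) -> float:
    """Shannon entropy of the degree distribution P(k) (a proxy objective)."""
    counts = Counter(d for _, d in graph.degree())
    n = graph.number_of_nodes()
    return -sum((c / n) * math.log(c / n) for c in counts.values())


class RewiringMDP:
    """Sequential edge rewiring under a fixed modification budget."""

    def __init__(self, graph: nx.Graph, budget: int):
        self.graph = graph.copy()
        self.budget = budget

    def step(self, edge_to_remove, edge_to_add):
        """One action: delete one edge, insert another; reward is the entropy gain."""
        before = degree_entropy(self.graph)
        self.graph.remove_edge(*edge_to_remove)
        self.graph.add_edge(*edge_to_add)
        self.budget -= 1
        reward = degree_entropy(self.graph) - before
        return self.graph, reward, self.budget == 0


# Random-rewiring baseline on a small synthetic graph.
env = RewiringMDP(nx.barabasi_albert_graph(50, 2, seed=0), budget=10)
done = False
while not done:
    u, v = random.choice(list(env.graph.edges()))
    candidates = [w for w in env.graph.nodes()
                  if w not in (u, v) and not env.graph.has_edge(u, w)]
    _, gain, done = env.step((u, v), (u, random.choice(candidates)))
    print(f"entropy gain: {gain:+.4f}")
```

In the learned setting described in the abstract, a GNN-based Q-network would score candidate edge removals and additions, and the agent would pick the value-maximizing rewiring at each of the budgeted steps rather than choosing at random.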
Related papers
- Online Learning Of Expanding Graphs [14.952056744888916]
This paper addresses the problem of online network inference for expanding graphs from a stream of signals.
We introduce a strategy that enables different types of updates for nodes that just joined the network and for previously existing nodes.
arXiv Detail & Related papers (2024-09-13T09:20:42Z)
- Structural Generalization in Autonomous Cyber Incident Response with Message-Passing Neural Networks and Reinforcement Learning [0.0]
Retraining agents for small network changes costs time and energy.
We create variants of the original network with different numbers of hosts, and the agents are tested on them without additional training.
Agents using the default vector state representation perform better, but need to be specially trained on each network variant.
arXiv Detail & Related papers (2024-07-08T09:34:22Z)
- Queue-aware Network Control Algorithm with a High Quantum Computing Readiness-Evaluated in Discrete-time Flow Simulator for Fat-Pipe Networks [0.0]
We introduce a resource reoccupation algorithm for traffic engineering in wide-area networks.
The proposed optimization algorithm changes traffic steering and resource allocation in case of overloaded transceivers.
We show that our newly introduced network simulator enables analyses of short-time effects like buffering within fat-pipe networks.
arXiv Detail & Related papers (2024-04-05T13:13:02Z)
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings in benchmarks can be easily changed simply by training the networks with better hyperparameters.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- PirateNets: Physics-informed Deep Learning with Residual Adaptive Networks [19.519831541375144]
We introduce Physics-informed Residual Adaptive Networks (PirateNets) to facilitate stable and efficient training of deep PINN models.
PirateNets leverage a novel adaptive residual connection, which allows the networks to be initialized as shallow networks that progressively deepen during training (a minimal sketch of this idea follows this entry).
We show that PirateNets are easier to optimize and can gain accuracy from considerably increased depth, ultimately achieving state-of-the-art results across various benchmarks.
arXiv Detail & Related papers (2024-02-01T04:17:56Z)
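The adaptive residual connection described in the PirateNets entry above can be illustrated with a zero-initialized, gated skip connection. The following is a minimal sketch of that general idea under this assumption, not the exact PirateNet block, whose formulation may differ.

```python
# Hypothetical sketch: a residual block gated by a trainable scalar that is
# initialized to zero, so the block starts as an identity map ("shallow") and
# effectively deepens as the gate opens during training. The exact PirateNet
# block design may differ from this simplification.
import torch
import torch.nn as nn


class AdaptiveResidualBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                  nn.Linear(width, width), nn.Tanh())
        self.alpha = nn.Parameter(torch.zeros(1))  # gate starts closed

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # With alpha == 0 this is exactly the identity map.
        return x + self.alpha * self.body(x)


# A deep stack that behaves like a shallow (identity) network at initialization.
model = nn.Sequential(*[AdaptiveResidualBlock(64) for _ in range(8)])
x = torch.randn(16, 64)
print(torch.allclose(model(x), x))  # True at initialization
```

Because every gate starts at zero, the stack is effectively shallow at initialization and only gains usable depth as the gates move away from zero during training.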
- What Planning Problems Can A Relational Neural Network Solve? [91.53684831950612]
We present a circuit complexity analysis for relational neural networks representing policies for planning problems.
We show that there are three general classes of planning problems, in terms of the growth of circuit width and depth.
We also illustrate the utility of this analysis for designing neural networks for policy learning.
arXiv Detail & Related papers (2023-12-06T18:47:28Z)
- Enhancing Network Resilience through Machine Learning-powered Graph Combinatorial Optimization: Applications in Cyber Defense and Information Diffusion [0.0]
This thesis focuses on developing effective approaches for enhancing network resilience.
Existing approaches for enhancing network resilience emphasize determining bottleneck nodes and edges in the network.
This thesis aims to design effective, efficient and scalable techniques for discovering bottleneck nodes and edges in the network.
arXiv Detail & Related papers (2023-09-22T01:48:28Z)
- Multi-agent Reinforcement Learning with Graph Q-Networks for Antenna Tuning [60.94661435297309]
The scale of mobile networks makes it challenging to optimize antenna parameters using manual intervention or hand-engineered strategies.
We propose a new multi-agent reinforcement learning algorithm to optimize mobile network configurations globally.
We empirically demonstrate the performance of the algorithm on an antenna tilt tuning problem and a joint tilt and power control problem in a simulated environment.
arXiv Detail & Related papers (2023-01-20T17:06:34Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed path through the network, DG-Net aggregates features dynamically in each node, which gives the network greater representational ability (a minimal sketch of such aggregation follows this list).
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
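To illustrate the instance-aware aggregation described in the last entry, below is a hypothetical sketch of a single node in a DAG of blocks that mixes its incoming features with input-dependent weights; the actual DG-Net gating and block design may differ, and all names here are illustrative.

```python
# Hypothetical sketch of dynamic (instance-aware) feature aggregation at one
# node of a DAG of convolutional blocks; names and gating are illustrative.
import torch
import torch.nn as nn


class DynamicAggregationNode(nn.Module):
    """Aggregates features from predecessor blocks with input-dependent weights."""

    def __init__(self, num_inputs: int, channels: int):
        super().__init__()
        self.num_inputs = num_inputs
        self.gate = nn.Linear(channels, 1)  # scores each incoming edge
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, inputs):  # inputs: list of (N, C, H, W) tensors
        assert len(inputs) == self.num_inputs
        pooled = torch.stack([x.mean(dim=(2, 3)) for x in inputs], dim=1)  # (N, E, C)
        weights = torch.softmax(self.gate(pooled).squeeze(-1), dim=1)      # (N, E)
        mixed = sum(w.view(-1, 1, 1, 1) * x
                    for w, x in zip(weights.unbind(dim=1), inputs))
        return self.block(mixed)


# Usage: a node with two incoming edges, weighted differently for each instance.
node = DynamicAggregationNode(num_inputs=2, channels=16)
a, b = torch.randn(4, 16, 8, 8), torch.randn(4, 16, 8, 8)
print(node([a, b]).shape)  # torch.Size([4, 16, 8, 8])
```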