Hypernetwork Dismantling via Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2104.14332v1
- Date: Thu, 29 Apr 2021 13:35:29 GMT
- Title: Hypernetwork Dismantling via Deep Reinforcement Learning
- Authors: Dengcheng Yan, Wenxin Xie, Yiwen Zhang
- Abstract summary: We formulate the hypernetwork dismantling problem as a node sequence decision problem.
We propose a deep reinforcement learning-based hypernetwork dismantling framework.
Experimental results on five real-world hypernetworks demonstrate the effectiveness of our proposed framework.
- Score: 1.4877837830677472
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Network dismantling aims to degrade the connectivity of a network by removing
an optimal set of nodes and has been widely adopted in many real-world
applications such as epidemic control and rumor containment. However,
conventional methods usually focus on simple network modeling with only
pairwise interactions, while group-wise interactions modeled by hypernetwork
are ubiquitous and critical. In this work, we formulate the hypernetwork
dismantling problem as a node sequence decision problem and propose a deep
reinforcement learning (DRL)-based hypernetwork dismantling framework. In
addition, we design a novel inductive hypernetwork embedding method to ensure
transferability to various real-world hypernetworks. Our framework builds a
DRL agent as follows: it first generates small-scale synthetic hypernetworks
and embeds their nodes and the hypernetworks into a low-dimensional vector
space, which represent the action and state spaces in DRL, respectively. Then trial-and-error
dismantling tasks are conducted by the agent on these synthetic hypernetworks,
and the dismantling strategy is continuously optimized. Finally, the
well-optimized strategy is applied to real-world hypernetwork dismantling
tasks. Experimental results on five real-world hypernetworks demonstrate the
effectiveness of our proposed framework.
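To make the problem setup concrete, the following is a minimal Python sketch (not the authors' code) of dismantling viewed as a sequential node-removal decision problem. The networkx-based connectivity measure on the clique expansion, the function names, and the hyperdegree-based placeholder policy are all assumptions chosen for illustration; in the paper's framework the placeholder policy is replaced by a DRL agent acting on the learned inductive hypernetwork embeddings.

from itertools import combinations
import networkx as nx

def clique_expansion(hyperedges):
    # Project the hypernetwork (a list of node sets) onto an ordinary graph
    # by connecting every pair of nodes that co-occur in a hyperedge.
    g = nx.Graph()
    for e in hyperedges:
        g.add_nodes_from(e)
        g.add_edges_from(combinations(e, 2))
    return g

def connectivity(hyperedges):
    # Fraction of remaining nodes that sit in the largest connected component.
    g = clique_expansion(hyperedges)
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

def dismantle(hyperedges, policy, budget):
    # Remove `budget` nodes chosen one at a time by `policy`; return the
    # removal sequence and the connectivity curve. The per-step drop in
    # connectivity is one natural reward signal for a DRL agent.
    edges = [set(e) for e in hyperedges]
    removed, curve = [], []
    for _ in range(budget):
        nodes = set().union(*edges) if edges else set()
        if not nodes:
            break
        v = policy(edges, nodes)            # the agent's action: pick a node
        edges = [e - {v} for e in edges]    # drop v from every hyperedge
        edges = [e for e in edges if len(e) > 1]
        removed.append(v)
        curve.append(connectivity(edges))
    return removed, curve

# A simple hyperdegree-based policy as a stand-in for the learned DRL agent.
highest_degree = lambda edges, nodes: max(nodes, key=lambda v: sum(v in e for e in edges))

H = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {2, 7}]
print(dismantle(H, highest_degree, budget=3))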
Related papers
- Scalable spectral representations for network multiagent control [53.631272539560435]
A popular model for multi-agent control, Network Markov Decision Processes (MDPs) pose a significant challenge to efficient learning.
We first derive scalable spectral local representations for network MDPs, which induces a network linear subspace for the local $Q$-function of each agent.
We design a scalable algorithmic framework for continuous state-action network MDPs, and provide end-to-end guarantees for the convergence of our algorithm.
arXiv Detail & Related papers (2024-10-22T17:45:45Z) - Coupling Light with Matter for Identifying Dominant Subnetworks [0.0]
We present a novel light-matter platform that uses complex neural networks to identify dominant subnetworks and uncover indirect correlations within larger networks.
This approach offers significant advantages, including low energy consumption, high processing speed, and the immediate identification of co-valued and counter-regulated nodes without post-processing.
arXiv Detail & Related papers (2024-05-27T16:00:21Z) - Hierarchical Multi-Marginal Optimal Transport for Network Alignment [52.206006379563306]
Multi-network alignment is an essential prerequisite for joint learning on multiple networks.
We propose a hierarchical multi-marginal optimal transport framework named HOT for multi-network alignment.
Our proposed HOT achieves significant improvements over the state-of-the-art in both effectiveness and scalability.
arXiv Detail & Related papers (2023-10-06T02:35:35Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust
Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Generalization and Estimation Error Bounds for Model-based Neural
Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z) - Magnitude Invariant Parametrizations Improve Hypernetwork Learning [0.0]
Hypernetworks are powerful neural networks that predict the parameters of another neural network (see the sketch after this list).
Their training typically converges far more slowly than that of non-hypernetwork models.
We identify a fundamental and previously unidentified problem that contributes to the challenge of training hypernetworks.
We present a simple solution to this problem using a revised hypernetwork formulation that we call Magnitude Invariant Parametrizations (MIP).
arXiv Detail & Related papers (2023-04-15T22:18:29Z) - Continual Learning with Dependency Preserving Hypernetworks [14.102057320661427]
An effective approach to addressing continual learning (CL) problems is to use hypernetworks, which generate task-dependent weights for a target network.
We propose a novel approach that uses a dependency preserving hypernetwork to generate weights for the target network while also maintaining the parameter efficiency.
In addition, we propose novel regularisation and network growth techniques for the RNN based hypernetwork to further improve the continual learning performance.
arXiv Detail & Related papers (2022-09-16T04:42:21Z) - Cascaded Compressed Sensing Networks: A Reversible Architecture for
Layerwise Learning [11.721183551822097]
We show that target propagation can be achieved by modeling each layer of the network with compressed sensing, without the need for auxiliary networks.
Experiments show that the proposed method could achieve better performance than the auxiliary network-based method.
arXiv Detail & Related papers (2021-10-20T05:21:13Z) - Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z) - On Infinite-Width Hypernetworks [101.03630454105621]
We show that hypernetworks do not guarantee convergence to a global minimum under gradient descent.
We identify the functional priors of these architectures by deriving their corresponding GP and NTK kernels.
As part of this study, we make a mathematical contribution by deriving tight bounds on high order Taylor terms of standard fully connected ReLU networks.
arXiv Detail & Related papers (2020-03-27T00:50:29Z)