Edge Rewiring Goes Neural: Boosting Network Resilience via Policy
Gradient
- URL: http://arxiv.org/abs/2110.09035v1
- Date: Mon, 18 Oct 2021 06:14:28 GMT
- Title: Edge Rewiring Goes Neural: Boosting Network Resilience via Policy
Gradient
- Authors: Shanchao Yang, Kaili Ma, Baoxiang Wang, Hongyuan Zha
- Abstract summary: ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, outperforming existing approaches by a large margin.
- Score: 62.660451283548724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Improving the resilience of a network protects the system from natural
disasters and malicious attacks. This is typically achieved by introducing new
edges, which, however, may exceed the maximum number of connections a node can
sustain. Many studies therefore resort to the degree-preserving operation of
rewiring, which swaps existing edges $AC, BD$ for new edges $AB, CD$. A
significant line of studies focuses on this technique for theoretical and
practical results but leaves three limitations unaddressed: network utility
loss, local optimality, and transductivity. In this paper, we propose ResiNet, a
reinforcement learning (RL)-based framework to discover resilient network
topologies against various disasters and attacks. ResiNet is objective-agnostic,
which allows the utility to be balanced by incorporating it into the objective
function. The local optimality typically seen in greedy algorithms is
addressed by casting the cumulative resilience gain into a sequential decision
process of step-wise rewiring. The transductivity, which refers to the need to
run a computationally intensive optimization for each input graph, is lifted by
our variant of RL with an auto-regressive, permutation-invariant, variable
action space. ResiNet is armed with our technical innovation, Filtration
enhanced GNN (FireGNN), which distinguishes graphs with minor differences. It
is thus possible for ResiNet to capture local structural changes and adapt its
decisions across consecutive graphs, which is known to be infeasible for GNNs.
Extensive experiments demonstrate that with a small number of rewiring
operations, ResiNet achieves a near-optimal resilience gain on multiple graphs
while balancing the utility, outperforming existing approaches by a large
margin.
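
To make the rewiring primitive concrete, here is a minimal sketch of the degree-preserving swap described in the abstract, together with one widely used attack-robustness score (the average relative size of the largest connected component under a highest-degree-first node-removal attack). The attack model, the robustness score, and the function names are illustrative assumptions, not the paper's implementation or objective.

```python
# Minimal sketch: degree-preserving edge rewiring and a simple resilience score.
# Assumptions: networkx graphs, a targeted (highest-degree-first) attack, and a
# largest-connected-component robustness measure; none of this is the paper's code.
import networkx as nx


def rewire(G, a, c, b, d):
    """Swap edges (a, c), (b, d) for (a, b), (c, d); every node keeps its degree."""
    assert G.has_edge(a, c) and G.has_edge(b, d), "edges AC and BD must exist"
    assert not G.has_edge(a, b) and not G.has_edge(c, d), "edges AB and CD must be new"
    G.remove_edges_from([(a, c), (b, d)])
    G.add_edges_from([(a, b), (c, d)])
    return G


def resilience(G):
    """Average relative size of the largest connected component while nodes are
    removed highest-degree-first (degree recomputed after each removal)."""
    H = G.copy()
    n = G.number_of_nodes()
    total = 0.0
    for _ in range(n - 1):
        v = max(H.degree, key=lambda kv: kv[1])[0]   # current highest-degree node
        H.remove_node(v)
        total += len(max(nx.connected_components(H), key=len)) / n
    return total / n


if __name__ == "__main__":
    G = nx.barabasi_albert_graph(50, 2, seed=0)
    print("resilience:", round(resilience(G), 4))
```

A single swap leaves every node's degree unchanged, which is why rewiring can improve resilience without violating per-node connection limits.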
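
The abstract frames resilience optimization as a sequence of such swaps. A greedy baseline makes that sequential decision process explicit and also shows why it stalls in local optima: each step exhaustively scores every candidate swap and keeps the best one. The sketch below is a hedged illustration that reuses the hypothetical `resilience` function from the previous block as the `score` callable; ResiNet instead selects swaps with a learned, permutation-invariant policy trained by policy gradient on the cumulative resilience gain, which this sketch does not implement.

```python
# Greedy step-wise rewiring baseline (illustrative, not the paper's method).
# Pass a score callable, e.g. the `resilience` function from the previous sketch.
import itertools


def best_single_rewiring(G, score):
    """Exhaustively score every degree-preserving swap of two disjoint edges.

    Returns (best_graph, best_score), or (None, score(G)) if no swap improves."""
    best_G, best_s = None, score(G)
    for (a, c), (b, d) in itertools.combinations(list(G.edges), 2):
        # Need four distinct endpoints, and the new edges must not already exist.
        if len({a, b, c, d}) < 4 or G.has_edge(a, b) or G.has_edge(c, d):
            continue
        H = G.copy()
        H.remove_edges_from([(a, c), (b, d)])
        H.add_edges_from([(a, b), (c, d)])     # AC, BD -> AB, CD (one of two pairings)
        s = score(H)
        if s > best_s:
            best_G, best_s = H, s
    return best_G, best_s


def greedy_rewire(G, score, budget=10):
    """Apply at most `budget` rewiring steps, stopping at the first local optimum."""
    for _ in range(budget):
        H, _ = best_single_rewiring(G, score)
        if H is None:
            break
        G = H
    return G
```

Even on modest graphs the exhaustive inner search is expensive and must be rerun from scratch for every input graph, and the loop terminates as soon as no single swap helps; these are exactly the transductivity and local-optimality limitations the RL formulation is meant to lift.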
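
FireGNN's filtration idea can be read as embedding a graph through a sequence of nested subgraphs rather than a single pass, so that two graphs differing by one rewired edge produce measurably different representations. The sketch below is only one plausible rendering of that idea: the degree-based removal order, the toy smoothing-and-pooling "embedding", and all function names are assumptions, not the FireGNN architecture.

```python
# Illustrative filtration-style graph embedding (assumption-laden sketch, not FireGNN).
import networkx as nx
import numpy as np


def degree_filtration(G, steps=3):
    """Return [G_0, G_1, ...] where G_{k+1} drops the highest-degree node of G_k."""
    seq, H = [G.copy()], G.copy()
    for _ in range(steps):
        if H.number_of_nodes() <= 1:
            break
        v = max(H.degree, key=lambda kv: kv[1])[0]
        H = H.copy()
        H.remove_node(v)
        seq.append(H)
    return seq


def toy_graph_embedding(G, hops=2):
    """Stand-in for a GNN readout: mean-pool degree features smoothed over `hops` steps."""
    A = nx.to_numpy_array(G, nodelist=list(G.nodes))
    deg = A.sum(axis=1, keepdims=True)
    x = deg.copy()
    for _ in range(hops):                      # simple neighbourhood averaging
        x = (A @ x) / np.maximum(deg, 1.0)
    return x.mean(axis=0)


def filtration_embedding(G, steps=3):
    """Concatenate embeddings of every subgraph in the filtration."""
    return np.concatenate([toy_graph_embedding(H) for H in degree_filtration(G, steps)])
```

Comparing `filtration_embedding(G)` before and after a single degree-preserving swap will generally give different vectors even when a one-shot pooled embedding barely moves, which is the kind of sensitivity to minor structural differences the abstract attributes to FireGNN.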
Related papers
- Neural Network Verification with Branch-and-Bound for General Nonlinearities [63.39918329535165]
Branch-and-bound (BaB) is among the most effective methods for neural network (NN) verification.
We develop a general framework, named GenBaB, to conduct BaB for general nonlinearities in general computational graphs.
We demonstrate the effectiveness of our GenBaB on verifying a wide range of NNs, including networks with activation functions such as Sigmoid, Tanh, Sine and GeLU.
arXiv Detail & Related papers (2024-05-31T17:51:07Z)
- Verifying message-passing neural networks via topology-based bounds tightening [3.3267518043390205]
We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs).
Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems.
We test on both node and graph classification problems and consider topological attacks that both add and remove edges.
arXiv Detail & Related papers (2024-02-21T17:05:27Z)
- LinGCN: Structural Linearized Graph Convolutional Network for Homomorphically Encrypted Inference [19.5669231249754]
We present LinGCN, a framework designed to reduce multiplication depth and optimize the performance of HE-based GCN inference.
Remarkably, LinGCN achieves a 14.2x latency speedup relative to CryptoGCN, while preserving an inference accuracy of 75% and notably reducing multiplication depth.
arXiv Detail & Related papers (2023-09-25T17:56:54Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference time and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- GNN at the Edge: Cost-Efficient Graph Neural Network Processing over Distributed Edge Servers [24.109721494781592]
Graph Neural Network (GNN) processing at the edge is still under exploration, in stark disparity to the broad adoption of edge computing.
This paper studies the cost optimization for distributed GNN processing over a multi-tier heterogeneous edge network.
We show that our approach achieves superior performance over de facto baselines, with more than 95.8% cost reduction and fast convergence.
arXiv Detail & Related papers (2022-10-31T13:03:16Z)
- Trainability Preserving Neural Structured Pruning [64.65659982877891]
We present trainability preserving pruning (TPP), a regularization-based structured pruning method that can effectively maintain trainability during sparsification.
TPP can compete with the ground-truth dynamical isometry recovery method on linear networks.
It delivers encouraging performance in comparison to many top-performing filter pruning methods.
arXiv Detail & Related papers (2022-07-25T21:15:47Z)
- Power Flow Balancing with Decentralized Graph Neural Networks [4.812718493682454]
We propose an end-to-end framework based on a Graph Neural Network (GNN) to balance the power flows in a generic grid.
The proposed framework is efficient and, compared to other solvers based on deep learning, is robust to perturbations not only to the physical quantities on the grid components, but also to the topology.
arXiv Detail & Related papers (2021-11-03T12:14:56Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed path through the network, DG-Net aggregates features dynamically at each node, giving the network greater representational capacity.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.