A Graph Transformer-Driven Approach for Network Robustness Learning
- URL: http://arxiv.org/abs/2306.06913v2
- Date: Mon, 15 Apr 2024 08:54:09 GMT
- Title: A Graph Transformer-Driven Approach for Network Robustness Learning
- Authors: Yu Zhang, Jia Li, Jie Ding, Xiang Li
- Abstract summary: This paper proposes a novel, versatile, and unified robustness learning approach via a graph transformer (NRL-GT).
NRL-GT accomplishes the task of controllability robustness learning and connectivity robustness learning from multiple aspects.
It is also able to deal with complex networks of different sizes with low learning error and high efficiency.
- Score: 27.837847091520842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning and analysis of network robustness, including controllability robustness and connectivity robustness, is critical for various networked systems against attacks. Traditionally, network robustness is determined by attack simulations, which are very time-consuming and even infeasible for large-scale networks. Network Robustness Learning, which is dedicated to learning network robustness with high precision and high speed, provides a powerful tool for analyzing network robustness by replacing simulations. In this paper, a novel, versatile, and unified robustness learning approach via a graph transformer (NRL-GT) is proposed, which accomplishes controllability robustness learning and connectivity robustness learning from multiple aspects, including robustness curve learning, overall robustness learning, and synthetic network classification. Numerous experiments show that: 1) NRL-GT is a unified learning framework for controllability robustness and connectivity robustness, demonstrating strong generalization ability and high precision even when the training and test sets are distributed differently; 2) compared to cutting-edge methods, NRL-GT can perform network robustness learning from multiple aspects simultaneously and obtains superior results in less time, and it is able to deal with complex networks of different sizes with low learning error and high efficiency; 3) it is worth mentioning that the backbone of NRL-GT can serve as a transferable feature learning module for complex networks of different sizes and different downstream tasks.
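To make the learned quantities concrete: a robustness curve records how a performance measure degrades as nodes are removed one by one, and overall robustness is typically the average of that curve. Below is a minimal sketch of the simulation-based baseline that NRL-GT is designed to replace, assuming random node attacks and a largest-connected-component connectivity measure (the paper's exact attack strategies and measures may differ):

```python
import random
import networkx as nx

def connectivity_robustness_curve(G, seed=0):
    """Fraction of nodes in the largest connected component after
    removing 1, 2, ..., N-1 nodes uniformly at random."""
    G = G.copy()
    n = G.number_of_nodes()
    order = list(G.nodes())
    random.Random(seed).shuffle(order)
    curve = []
    for node in order[:-1]:                     # attack one node at a time
        G.remove_node(node)
        largest = max(nx.connected_components(G), key=len)
        curve.append(len(largest) / n)
    return curve

G = nx.erdos_renyi_graph(200, 0.05, seed=1)
curve = connectivity_robustness_curve(G)
print(f"overall robustness ~ {sum(curve) / len(curve):.3f}")
```

On the learning side, a hypothetical skeleton in the spirit of NRL-GT; this is not the authors' architecture, and the node features, layer sizes, and three output heads are illustrative assumptions:

```python
import torch
import torch.nn as nn

class RobustnessGT(nn.Module):
    """Transformer encoder over per-node features, mean-pooled into a
    graph embedding, with heads for the robustness curve, overall
    robustness, and synthetic-network classification."""
    def __init__(self, in_dim=8, d_model=64, n_heads=4, n_layers=3,
                 curve_len=100, n_classes=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)  # transferable
        self.curve_head = nn.Linear(d_model, curve_len)
        self.overall_head = nn.Linear(d_model, 1)
        self.class_head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, n_nodes, in_dim)
        h = self.backbone(self.embed(x))
        g = h.mean(dim=1)                 # pooling handles any graph size
        return self.curve_head(g), self.overall_head(g), self.class_head(g)
```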
Related papers
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
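The size-agnostic evaluation works because a purely convolutional network contains no fixed-size layers. A minimal sketch of this property (the layer sizes and toy signals are illustrative assumptions, not the paper's setup):

```python
import torch
import torch.nn as nn

# Fully convolutional 1D network: with no flatten/linear layers,
# the same weights accept inputs of any length at evaluation time.
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=1),
)

small = torch.randn(8, 1, 64)       # training windows of length 64
large = torch.randn(1, 1, 10_000)   # arbitrarily long signal at test time
print(net(small).shape)             # torch.Size([8, 1, 64])
print(net(large).shape)             # torch.Size([1, 1, 10000])
```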
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
- Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
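For reference, plain (floating-point) interval bound propagation pushes an input box through the network layer by layer; QA-IBP additionally models the quantization step, which this minimal numpy sketch omits:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius          # worst-case spread of the box
    return c - r, c + r

def ibp_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone

# Certify a tiny 2-class network on an L-infinity ball of radius eps.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
x, eps = rng.normal(size=4), 0.01

lo, hi = ibp_relu(*ibp_affine(x - eps, x + eps, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))
certified = lo[pred] > hi[1 - pred]  # predicted class provably wins everywhere
print(f"prediction {pred} certified robust: {certified}")
```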
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Continual Learning, Fast and Slow [75.53144246169346]
According to the Complementary Learning Systems theory, humans achieve effective continual learning through two complementary systems.
We propose DualNets (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of specific tasks and a slow learning system for learning task-agnostic general representations via self-supervised learning (SSL).
We demonstrate the promising results of DualNets on a wide range of continual learning protocols, ranging from the standard offline, task-aware setting to the challenging online, task-free scenario.
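A heavily hedged structural sketch of the fast/slow pattern (the toy SSL objective, layer sizes, and update schedule below are illustrative stand-ins, not the DualNets design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Slow system: learns task-agnostic representations.
# Fast system: a lightweight head adapted quickly on supervised batches.
slow = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
fast = nn.Linear(64, 10)
opt_slow = torch.optim.SGD(slow.parameters(), lr=1e-3)  # slow updates
opt_fast = torch.optim.SGD(fast.parameters(), lr=1e-1)  # fast updates

def ssl_step(x):
    """Toy self-supervised step: representations of two noisy views agree."""
    z1, z2 = slow(x + 0.1 * torch.randn_like(x)), slow(x)
    loss = F.mse_loss(z1, z2.detach())
    opt_slow.zero_grad(); loss.backward(); opt_slow.step()

def supervised_step(x, y):
    """Fast step: adapt the head on top of frozen slow features."""
    loss = F.cross_entropy(fast(slow(x).detach()), y)
    opt_fast.zero_grad(); loss.backward(); opt_fast.step()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
ssl_step(x); supervised_step(x, y)
```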
arXiv Detail & Related papers (2022-09-06T10:48:45Z)
- A Learning Convolutional Neural Network Approach for Network Robustness Prediction [13.742495880357493]
Network robustness is critical for various societal and industrial networks against malicious attacks.
In this paper, an improved method for network robustness prediction is developed based on learning feature representation using a convolutional neural network (LFR-CNN).
In this scheme, higher-dimensional network data are compressed to lower-dimensional representations, and then passed to a CNN to perform robustness prediction.
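A hedged sketch of the compress-then-predict pipeline; LFR-CNN learns its feature representation, whereas the adaptive pooling and layer sizes below are placeholder assumptions:

```python
import torch
import torch.nn as nn

class RobustnessCNN(nn.Module):
    """Compress an N x N adjacency matrix to a fixed lower-dimensional
    grid, then regress a scalar robustness value with a small CNN."""
    def __init__(self, rep_size=64):
        super().__init__()
        self.compress = nn.AdaptiveAvgPool2d(rep_size)  # any N -> 64 x 64
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, adj):                   # adj: (batch, N, N)
        return self.cnn(self.compress(adj.unsqueeze(1))).squeeze(-1)

adj = (torch.rand(2, 300, 300) < 0.05).float()  # two toy random graphs
print(RobustnessCNN()(adj).shape)               # torch.Size([2])
```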
arXiv Detail & Related papers (2022-03-20T13:45:55Z)
- Learning Fast and Slow for Online Time Series Forecasting [76.50127663309604]
Fast and Slow learning Networks (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes with retrieval of similar old knowledge.
Our code will be made publicly available.
arXiv Detail & Related papers (2022-02-23T18:23:07Z)
- Revisiting the double-well problem by deep learning with a hybrid network [7.308730248177914]
We propose a novel hybrid network that integrates two different kinds of neural networks: an LSTM and a ResNet.
Such a hybrid network can be applied for solving cooperative dynamics in a system with fast spatial or temporal modulations.
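A hedged sketch of one way to combine the two components (the split into per-step residual feature extraction followed by temporal modeling is an assumption, not necessarily the paper's wiring):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))    # residual connection

class HybridNet(nn.Module):
    """ResNet-style blocks extract per-step features; an LSTM then
    models their evolution over time."""
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(in_dim, hidden),
                                     ResBlock(hidden), ResBlock(hidden))
        self.temporal = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, time, in_dim)
        h, _ = self.temporal(self.spatial(x))
        return self.out(h[:, -1])             # predict from final state

print(HybridNet()(torch.randn(4, 50, 16)).shape)  # torch.Size([4, 1])
```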
arXiv Detail & Related papers (2021-04-25T07:51:43Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
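A minimal sketch of the gated-edge idea (the stage contents and gating form are illustrative assumptions; the paper applies this to full network architectures):

```python
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    """Stages are graph nodes; every earlier stage feeds every later
    one through a sigmoid-gated learnable edge weight, so the
    connectivity pattern is optimized by gradient descent alongside
    the stage parameters."""
    def __init__(self, n_stages=4, dim=32):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
             for _ in range(n_stages)])
        # one learnable weight per directed edge i -> j
        self.edges = nn.Parameter(torch.zeros(n_stages, n_stages))

    def forward(self, x):
        outs = [x]                            # outputs of all prior stages
        for j, stage in enumerate(self.stages):
            gate = torch.sigmoid(self.edges[: j + 1, j])
            inp = sum(g * o for g, o in zip(gate, outs))
            outs.append(stage(inp))
        return outs[-1]

print(LearnableConnectivity()(torch.randn(2, 32)).shape)  # torch.Size([2, 32])
```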
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Network Comparison with Interpretable Contrastive Network Representation Learning [44.145644586950574]
We introduce a new analysis approach called contrastive network representation learning (cNRL).
cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another.
We demonstrate the effectiveness of i-cNRL (interpretable cNRL) for network comparison with multiple network models and real-world datasets.
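One ingredient behind i-cNRL is contrastive dimensionality reduction. A minimal contrastive-PCA-style sketch, assuming node feature matrices for the two networks are already available (the actual cNRL pipeline also learns those features):

```python
import numpy as np

def contrastive_directions(target, background, alpha=1.0, k=2):
    """Directions with high variance in the target network's node
    features but low variance in the background network's."""
    ct = np.cov(target, rowvar=False)
    cb = np.cov(background, rowvar=False)
    vals, vecs = np.linalg.eigh(ct - alpha * cb)
    return vecs[:, np.argsort(vals)[::-1][:k]]  # top-k directions

rng = np.random.default_rng(0)
tgt = rng.normal(size=(500, 10)); tgt[:, 0] *= 5.0  # target-specific variation
bg = rng.normal(size=(500, 10)); bg[:, 1] *= 5.0    # shared/background variation
V = contrastive_directions(tgt, bg)
print(V.shape)   # (10, 2): projection revealing what makes the target unique
```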
arXiv Detail & Related papers (2020-05-25T21:46:59Z)
- R-FORCE: Robust Learning for Random Recurrent Neural Networks [6.285241353736006]
We propose R-FORCE, a robust training method that enhances the robustness of RRNNs.
The FORCE learning approach was previously shown to be applicable even to the challenging task of target-learning.
Our experiments indicate that R-FORCE facilitates significantly more stable and accurate target-learning for a wide class of RRNNs.
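For context, classic FORCE trains only a linear readout of a random recurrent network with recursive least squares while the network runs. A minimal numpy sketch of that baseline (R-FORCE's specific weight-distribution scheme is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 300, 0.1
J = 1.5 * rng.normal(size=(N, N)) / np.sqrt(N)  # fixed random recurrence
w_fb = rng.uniform(-1, 1, size=N)               # fixed feedback weights
w = np.zeros(N)                                 # trained readout
P = np.eye(N)                                   # inverse correlation estimate
x = 0.5 * rng.normal(size=N)                    # network state
errs = []

for step in range(5000):
    r = np.tanh(x)
    z = w @ r                                   # network output
    x += dt * (-x + J @ r + w_fb * z)           # leaky dynamics with feedback
    target = np.sin(0.3 * step * dt)            # signal to reproduce
    # recursive least-squares update of the readout weights
    k = P @ r
    c = 1.0 / (1.0 + r @ k)
    P -= c * np.outer(k, k)
    w -= c * (z - target) * k
    errs.append(abs(z - target))

print("mean |error| over last 500 steps:", np.mean(errs[-500:]))
```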
arXiv Detail & Related papers (2020-03-25T22:08:03Z)