A Learning Convolutional Neural Network Approach for Network Robustness
Prediction
- URL: http://arxiv.org/abs/2203.10552v1
- Date: Sun, 20 Mar 2022 13:45:55 GMT
- Title: A Learning Convolutional Neural Network Approach for Network Robustness
Prediction
- Authors: Yang Lou and Ruizi Wu and Junli Li and Lin Wang and Xiang Li and
Guanrong Chen
- Abstract summary: Network robustness is critical for various societal and industrial networks against malicious attacks.
In this paper, an improved method for network robustness prediction is developed based on learning feature representation using a convolutional neural network (LFR-CNN).
In this scheme, higher-dimensional network data are compressed to lower-dimensional representations, and then passed to a CNN to perform robustness prediction.
- Score: 13.742495880357493
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network robustness is critical for various societal and industrial networks
against malicious attacks. In particular, connectivity robustness and
controllability robustness reflect how well a networked system can maintain its
connectedness and controllability against destructive attacks, which can be
quantified by a sequence of values that record the remaining connectivity and
controllability of the network after a sequence of node- or edge-removal
attacks. Traditionally, robustness is determined by attack simulations, which
are computationally very time-consuming or even practically infeasible. In this
paper, an improved method for network robustness prediction is developed based
on learning feature representation using a convolutional neural network
(LFR-CNN). In this scheme, higher-dimensional network data are compressed to
lower-dimensional representations, and then passed to a CNN to perform
robustness prediction. Extensive experimental studies on both synthetic and
real-world networks, both directed and undirected, demonstrate that 1) the
proposed LFR-CNN performs better than two other state-of-the-art prediction
methods, with significantly lower prediction errors; 2) LFR-CNN is insensitive
to the variation of the network size, which significantly extends its
applicability; 3) although LFR-CNN needs more time to perform feature learning,
it can achieve accurate prediction faster than attack simulations; 4) LFR-CNN
not only can accurately predict network robustness, but also provides a good
indicator for connectivity robustness, better than the classical spectral
measures.
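For context, the attack-simulation baseline that LFR-CNN aims to replace can be sketched as follows. This is a minimal illustrative example, not the paper's exact protocol: it assumes networkx and numpy, uses a recomputed-degree targeted node attack, and records the fraction of nodes in the largest connected component after each removal, yielding the robustness curve that a learned predictor would instead estimate directly from the network.

```python
# Minimal sketch (illustrative, not the paper's exact attack protocol):
# simulate a sequence of node-removal attacks and record the remaining
# connectivity after each removal.
import networkx as nx
import numpy as np

def connectivity_robustness_curve(G: nx.Graph) -> np.ndarray:
    """Fraction of nodes in the largest connected component after each
    removal, under a recomputed-degree targeted node attack."""
    G = G.copy()
    n = G.number_of_nodes()
    curve = []
    for _ in range(n - 1):
        # Greedy targeted attack: remove the current highest-degree node.
        target = max(G.degree, key=lambda kv: kv[1])[0]
        G.remove_node(target)
        giant = max((len(c) for c in nx.connected_components(G)), default=0)
        curve.append(giant / n)
    return np.array(curve)

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(200, 0.05, seed=42)
    curve = connectivity_robustness_curve(G)
    # A single scalar robustness value is commonly taken as the curve mean.
    print(f"estimated connectivity robustness R = {curve.mean():.4f}")
```

Repeating this simulation for every network and attack strategy is what makes the traditional approach time-consuming; a predictor such as LFR-CNN is trained once and then estimates such curves without running the attacks.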
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Comprehensive Analysis of Network Robustness Evaluation Based on Convolutional Neural Networks with Spatial Pyramid Pooling [4.366824280429597]
Connectivity robustness, a crucial aspect for understanding, optimizing, and repairing complex networks, has traditionally been evaluated through simulations.
We address these challenges by designing a convolutional neural network (CNN) model with a spatial pyramid pooling network (SPP-net).
We show that the proposed CNN model consistently achieves accurate evaluations of both attack curves and robustness values across all removal scenarios.
arXiv Detail & Related papers (2023-08-10T09:54:22Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - SPP-CNN: An Efficient Framework for Network Robustness Prediction [13.742495880357493]
This paper develops an efficient framework for network robustness prediction, the spatial pyramid pooling convolutional neural network (SPP-CNN).
The new framework installs a spatial pyramid pooling layer between the convolutional and fully-connected layers, overcoming the common input-size mismatch issue in CNN-based prediction approaches (a pooling sketch follows the related-papers list below).
arXiv Detail & Related papers (2023-05-13T09:09:20Z) - Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks.
There may exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks.
arXiv Detail & Related papers (2023-01-27T15:40:38Z) - CNN-based Prediction of Network Robustness With Missing Edges [0.9239657838690227]
We investigate the performance of CNN-based approaches for connectivity and controllability prediction, when partial network information is missing.
A threshold is identified: if more than 7.29% of the network information is lost, the performance of CNN-based prediction degrades significantly.
arXiv Detail & Related papers (2022-08-25T03:36:20Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Mitigating Performance Saturation in Neural Marked Point Processes:
Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP significantly reduces training time, and that a likelihood-ratio loss with interarrival-time probability assumptions greatly improves model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - Adversarial Refinement Network for Human Motion Prediction [61.50462663314644]
Two popular methods, recurrent neural networks and feed-forward deep networks, are able to predict rough motion trends.
We propose an Adversarial Refinement Network (ARNet) following a simple yet effective coarse-to-fine mechanism with novel adversarial error augmentation.
arXiv Detail & Related papers (2020-11-23T05:42:20Z) - Link Prediction for Temporally Consistent Networks [6.981204218036187]
Link prediction estimates the next relationship in dynamic networks.
The use of an adjacency matrix to represent dynamically evolving networks limits the ability to analytically learn from heterogeneous, sparse, or forming networks.
We propose a new method of canonically representing heterogeneous time-evolving activities as a temporally parameterized network model.
arXiv Detail & Related papers (2020-06-06T07:28:03Z) - Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long-term bonds as test assets, we investigate neural networks.
We find that our networks, when fed substantially less data, perform significantly worse.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
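As referenced in the SPP-CNN entry above, the spatial pyramid pooling idea can be sketched as follows. This is a minimal, assumed PyTorch implementation with illustrative pyramid levels and tensor shapes, not the papers' exact configuration: the layer pools a convolutional feature map at several fixed grid sizes and concatenates the results, so the fully-connected head always receives a fixed-length vector regardless of the size of the input network.

```python
# Minimal sketch (assumed PyTorch; illustrative pyramid levels) of a
# spatial pyramid pooling layer placed between the convolutional and
# fully-connected layers of a prediction network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) feature map from the conv layers.
        pooled = [
            F.adaptive_max_pool2d(x, output_size=level).flatten(start_dim=1)
            for level in self.levels
        ]
        # Output length = channels * sum(level**2), independent of H and W.
        return torch.cat(pooled, dim=1)

if __name__ == "__main__":
    spp = SpatialPyramidPooling()
    for size in (64, 100, 300):  # adjacency-like inputs of different sizes
        feat = torch.randn(2, 8, size, size)
        print(size, spp(feat).shape)  # always (2, 8 * (1 + 4 + 16)) = (2, 168)
```

Producing a fixed-length representation from variable-sized inputs is what lets a single trained predictor handle networks of different sizes without retraining.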