Informative core identification in complex networks
- URL: http://arxiv.org/abs/2101.06388v1
- Date: Sat, 16 Jan 2021 07:19:21 GMT
- Title: Informative core identification in complex networks
- Authors: Ruizhong Miao and Tianxi Li
- Abstract summary: In network analysis, the core structure of modeling interest is usually hidden in a larger network in which most structures are not informative.
This paper introduces a novel core-periphery model for the non-informative periphery structure of networks without imposing a specific form for the informative core structure.
- Score: 2.3478438171452014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In network analysis, the core structure of modeling interest is usually
hidden in a larger network in which most structures are not informative. The
noise and bias introduced by the non-informative component in networks can
obscure the salient structure and limit many network modeling procedures'
effectiveness. This paper introduces a novel core-periphery model for the
non-informative periphery structure of networks without imposing a specific
form for the informative core structure. We propose spectral algorithms for
core identification as a data preprocessing step for general downstream network
analysis tasks based on the model. The algorithm enjoys a strong theoretical
guarantee of accuracy and is scalable for large networks. We evaluate the
proposed method by extensive simulation studies demonstrating various
advantages over many traditional core-periphery methods. The method is applied
to extract the informative core structure from a citation network, yielding more
informative results in the downstream hierarchical community detection.
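The abstract does not spell out the spectral algorithm itself, so the following is only a minimal sketch of the general idea, assuming the core is scored by the leading eigenvector of the adjacency matrix; the function name `identify_core`, the eigenvector scoring, and the fixed `core_size` cutoff are illustrative assumptions, not the procedure of Miao and Li.
```python
import numpy as np
import networkx as nx

def identify_core(G, core_size):
    """Rank nodes by the leading eigenvector of the adjacency matrix
    and return the top `core_size` nodes as the putative core.
    Illustrative spectral heuristic; not the paper's exact algorithm."""
    A = nx.to_numpy_array(G)
    # eigh returns eigenvalues in ascending order for symmetric A,
    # so the last column of eigvecs is the leading eigenvector.
    _, eigvecs = np.linalg.eigh(A)
    score = np.abs(eigvecs[:, -1])
    nodes = list(G.nodes())
    top = np.argsort(score)[::-1][:core_size]
    return [nodes[i] for i in top]

# Toy example: a dense 50-node core planted in a 150-node sparse periphery.
sizes = [50, 150]
probs = [[0.30, 0.05],
         [0.05, 0.05]]
G = nx.stochastic_block_model(sizes, probs, seed=0)
core_nodes = identify_core(G, core_size=50)
print(sorted(core_nodes)[:10])  # mostly nodes 0..49, the planted dense block
```
On this toy block model the dense planted block receives the highest eigenvector scores, which is the kind of signal a core-identification preprocessing step is meant to exploit before downstream analysis.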
Related papers
- Uncovering the hidden core-periphery structure in hyperbolic networks [0.0]
Hyperbolic network models exhibit fundamental and essential features, like small-worldness, scale-freeness, a high clustering coefficient, and community structure.
In this paper, we explore the presence of an important feature, the core-periphery structure, in hyperbolic network models (a classical k-core baseline for this structure is sketched after this list).
arXiv Detail & Related papers (2024-06-28T14:39:21Z) - Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss [0.0]
We introduce a simple, efficient, and clustering-oriented model based on an unsupervised Graph Attention AutoEncoder for community detection in attributed networks.
The proposed model adeptly learns representations from both the network's topology and attribute information, simultaneously addressing dual objectives: reconstruction and community discovery.
arXiv Detail & Related papers (2023-11-21T20:45:55Z) - Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - The Principles of Deep Learning Theory [19.33681537640272]
This book develops an effective theory approach to understanding deep neural networks of practical relevance.
We explain how these effectively deep networks learn nontrivial representations from training.
We show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks.
arXiv Detail & Related papers (2021-06-18T15:00:00Z) - Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
arXiv Detail & Related papers (2021-02-13T18:54:49Z) - Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts readily to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
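Several entries above, including the hyperbolic core-periphery paper, concern the classical core-periphery picture that the main paper's spectral method is contrasted with. A minimal baseline sketch using k-core decomposition in networkx; treating the innermost k-core shell as "the core" is an illustrative convention here, not any listed paper's method.
```python
import networkx as nx

def k_core_periphery(G):
    """Split nodes into a core and a periphery using the maximum
    k-core, a classical (non-spectral) baseline. Illustrative only."""
    core_numbers = nx.core_number(G)    # k-core index of every node
    k_max = max(core_numbers.values())  # innermost shell
    core = {n for n, k in core_numbers.items() if k == k_max}
    periphery = set(G) - core
    return core, periphery

G = nx.karate_club_graph()
core, periphery = k_core_periphery(G)
print(f"core size: {len(core)}, periphery size: {len(periphery)}")
```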
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.