Structure Amplification on Multi-layer Stochastic Block Models
- URL: http://arxiv.org/abs/2108.00127v1
- Date: Sat, 31 Jul 2021 02:11:47 GMT
- Title: Structure Amplification on Multi-layer Stochastic Block Models
- Authors: Xiaodong Xin, Kun He, Jialu Bao, Bart Selman, John E. Hopcroft
- Abstract summary: We propose a general structure amplification technique that uncovers hidden structure in complex networks.
HICODE incrementally weakens dominant structure through randomization, allowing the hidden functionality to emerge.
We provide theoretical proof that the iterative reducing methods can promote the uncovering of hidden structure.
- Score: 16.53851254884497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much of the complexity of social, biological, and engineered systems arises
from a network of complex interactions connecting many basic components.
Network analysis tools have been successful at uncovering latent structure
termed communities in such networks. However, some of the most interesting
structure can be difficult to uncover because it is obscured by the more
dominant structure. Our previous work proposes a general structure
amplification technique called HICODE that uncovers many layers of functional
hidden structure in complex networks. HICODE incrementally weakens dominant
structure through randomization, allowing the hidden functionality to emerge,
and uncovers hidden structures in real-world networks that previous methods
rarely detect. In this work, we conduct a comprehensive and systematic
theoretical analysis of the hidden community structure. In what follows, we
define the multi-layer stochastic block model, and use it to show why the
existence of hidden structure makes the detection of dominant structure harder
than equivalent random noise would. We then prove that the iterative reducing
methods can both promote the uncovering of hidden structure and boost the
detection quality of dominant structure.
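The abstract sketches two technical ingredients: a multi-layer stochastic block model (several planted partitions overlaid on one node set) and an iterative reducing step that weakens the dominant layer through randomization. Below is a minimal sketch of both, assuming additive intra-block edge probabilities and a simple edge-thinning rule; all names and parameters (`n`, `p_bg`, `reduce_dominant`, and so on) are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the authors' code): a two-layer stochastic block model
# and one HICODE-style "reduce" step on the dominant layer.
import numpy as np

rng = np.random.default_rng(0)

n = 200      # number of nodes (illustrative)
p_bg = 0.02  # background edge probability
p_dom = 0.20 # extra intra-block probability, dominant layer
p_hid = 0.08 # extra intra-block probability, hidden layer

# Two independent planted partitions over the same node set: the dominant
# layer splits nodes into 4 blocks, the hidden layer into 5 different blocks.
dom = rng.integers(0, 4, size=n)
hid = rng.integers(0, 5, size=n)

# One simple way to realize a multi-layer SBM on a single graph: edge
# probabilities add up layer by layer (capped at 1).
P = np.full((n, n), p_bg)
P += p_dom * (dom[:, None] == dom[None, :])
P += p_hid * (hid[:, None] == hid[None, :])
P = np.clip(P, 0.0, 1.0)

U = rng.random((n, n))
A = np.triu(U < P, k=1)  # sample upper triangle, no self-loops
A = A | A.T              # symmetric boolean adjacency matrix

def reduce_dominant(A, blocks, rng):
    """Weaken one layer by randomly thinning its intra-block edges down to
    the graph's overall density, so the remaining signal comes from the
    other layers."""
    density = A.sum() / (A.shape[0] * (A.shape[0] - 1))
    same = blocks[:, None] == blocks[None, :]
    intra_density = A[same].mean()
    keep = density / intra_density  # retention rate for intra-block edges
    drop = same & A & (rng.random(A.shape) > keep)
    drop = np.triu(drop, k=1)
    drop = drop | drop.T            # remove edges symmetrically
    return A & ~drop

A_reduced = reduce_dominant(A, dom, rng)
```

The thinning rule removes intra-block edges of the detected dominant layer at random until their density roughly matches the graph's overall density, which is one way to realize the "weakening through randomization" the abstract describes; rerunning a community detector on `A_reduced` should then recover the hidden partition more easily.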
Related papers
- Uncovering the hidden core-periphery structure in hyperbolic networks [0.0]
Hyperbolic network models exhibit fundamental and essential features, such as small-worldness, scale-freeness, a high clustering coefficient, and community structure.
In this paper, we explore the presence of an important feature, the core-periphery structure, in the hyperbolic network models.
arXiv Detail & Related papers (2024-06-28T14:39:21Z)
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
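As a concrete illustration of this idea (my own hedged sketch, not code from the paper), the semantic loss for a simple "exactly-one" symbolic constraint is the negative log of the total probability mass the model places on valid assignments:

```python
# Hedged sketch of a semantic loss for the constraint "exactly one of k
# binary variables is true"; the function name is illustrative.
import numpy as np

def semantic_loss_exactly_one(p):
    """p: predicted marginals for k binary variables.
    Valid assignments are the k one-hot vectors; the loss is
    -log(sum over valid x of prod_i p_i^x_i * (1 - p_i)^(1 - x_i))."""
    p = np.asarray(p, dtype=float)
    # Probability of each one-hot assignment: p_i * prod_{j != i} (1 - p_j)
    prob_valid = sum(
        p[i] * np.prod(np.delete(1.0 - p, i)) for i in range(len(p))
    )
    return -np.log(prob_valid)

print(semantic_loss_exactly_one([0.9, 0.05, 0.05]))  # near one-hot: low loss
print(semantic_loss_exactly_one([0.5, 0.5, 0.5]))    # ambiguous: higher loss
```

The loss depends only on which assignments satisfy the constraint, matching the summary's point that it is agnostic to the arrangement of the symbols.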
arXiv Detail & Related papers (2024-05-12T22:18:25Z)
- Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z)
- Polynomially Over-Parameterized Convolutional Neural Networks Contain Structured Strong Winning Lottery Tickets [4.020829863982153]
We prove the existence of structured subnetworks that can approximate any sufficiently smaller network.
This result provides the first sub-exponential bound for the Strong Lottery Ticket Hypothesis.
arXiv Detail & Related papers (2023-11-16T12:38:45Z)
- Redefining Neural Architecture Search of Heterogeneous Multi-Network Models by Characterizing Variation Operators and Model Components [71.03032589756434]
We investigate the effect of different variation operators in a complex domain, that of multi-network heterogeneous neural models.
We characterize the variation operators by their effect on the complexity and performance of the model, and the models themselves by diverse metrics that estimate the quality of their component parts.
arXiv Detail & Related papers (2021-06-16T17:12:26Z)
- Hierarchical community structure in networks [1.9981375888949475]
We present a theoretical study on hierarchical community structure in networks.
We address the following questions: 1) How should we define a hierarchy of communities? 2) How do we determine if there is sufficient evidence of a hierarchical structure in a network?
arXiv Detail & Related papers (2020-09-15T16:18:26Z)
- On the use of local structural properties for improving the efficiency of hierarchical community detection methods [77.34726150561087]
We study how local structural network properties can be used as proxies to improve the efficiency of hierarchical community detection.
We also check the performance impact of network pruning as an ancillary tactic to make hierarchical community detection more efficient.
arXiv Detail & Related papers (2020-09-15T00:16:12Z)
- Automated Search for Resource-Efficient Branched Multi-Task Networks [81.48051635183916]
We propose a principled approach, rooted in differentiable neural architecture search, to automatically define branching structures in a multi-task neural network.
We show that our approach consistently finds high-performing branching structures within limited resource budgets.
arXiv Detail & Related papers (2020-08-24T09:49:19Z)
- Emergent entanglement structures and self-similarity in quantum spin chains [0.0]
We introduce an experimentally accessible network representation for many-body quantum states based on entanglement between all pairs of its constituents.
We illustrate the power of this representation by applying it to a paradigmatic spin chain model, the XX model, and showing that it brings to light new phenomena.
arXiv Detail & Related papers (2020-07-14T12:13:29Z)
- Understanding Deep Architectures with Reasoning Layer [60.90906477693774]
We show that properties of the algorithm layers, such as convergence, stability, and sensitivity, are intimately related to the approximation and generalization abilities of the end-to-end model.
Our theory can provide useful guidelines for designing deep architectures with reasoning layers.
arXiv Detail & Related papers (2020-06-09T13:08:40Z)
- Detecting structural perturbations from time series with deep learning [0.0]
We present a graph neural network approach to infer structural perturbations from functional time series.
We show our data-driven approach outperforms typical reconstruction methods.
This work uncovers a practical avenue to study the resilience of real-world complex systems.
arXiv Detail & Related papers (2020-06-09T13:08:40Z)