Systematic assessment of the quality of fit of the stochastic block
model for empirical networks
- URL: http://arxiv.org/abs/2201.01658v1
- Date: Wed, 5 Jan 2022 15:28:37 GMT
- Title: Systematic assessment of the quality of fit of the stochastic block
model for empirical networks
- Authors: Felipe Vaca-Ramírez, Tiago P. Peixoto
- Abstract summary: We analyze the quality of fit of the stochastic block model (SBM) for 275 empirical networks spanning a wide range of domains and orders of magnitude in size.
We observe that the SBM is capable of providing an accurate description for the majority of networks considered, but falls short of saturating all modeling requirements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We perform a systematic analysis of the quality of fit of the stochastic
block model (SBM) for 275 empirical networks spanning a wide range of domains
and orders of magnitude in size. We employ posterior predictive model checking as
a criterion to assess the quality of fit, which involves comparing networks
generated by the inferred model with the empirical network, according to a set
of network descriptors. We observe that the SBM is capable of providing an
accurate description for the majority of networks considered, but falls short
of saturating all modeling requirements. In particular, networks possessing a
large diameter and slow-mixing random walks tend to be badly described by the
SBM. However, contrary to what is often assumed, networks with a high abundance
of triangles can be well described by the SBM in many cases. We demonstrate
that simple network descriptors can be used to evaluate whether or not the SBM
can provide a sufficiently accurate representation, potentially pointing to
possible model extensions that can systematically improve the expressiveness of
this class of models.
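The posterior predictive checking procedure described in the abstract can be sketched in plain Python. This is a toy illustration, not the paper's pipeline: the block partition and connection probabilities are assumed to be already inferred (in practice they come from posterior inference), and a single descriptor (triangle count) stands in for the paper's full set of network descriptors.

```python
import random
from itertools import combinations

def sample_sbm(block_of, p_in, p_out, rng):
    """Sample an undirected graph from a simple Bernoulli SBM:
    same-block node pairs connect with prob p_in, others with p_out."""
    edges = set()
    for u, v in combinations(range(len(block_of)), 2):
        p = p_in if block_of[u] == block_of[v] else p_out
        if rng.random() < p:
            edges.add((u, v))
    return edges

def triangle_count(n, edges):
    """One network descriptor: the number of triangles in the graph."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Each triangle is seen once per edge, hence the division by 3.
    return sum(len(adj[u] & adj[v]) for u, v in edges) // 3

rng = random.Random(42)
blocks = [0] * 20 + [1] * 20                      # hypothetical two-block partition
empirical = sample_sbm(blocks, 0.4, 0.02, rng)    # stand-in "observed" network

# Posterior predictive check (sketch): sample replicate networks from the
# fitted model and locate the observed descriptor value within the
# distribution of the replicates.
obs = triangle_count(40, empirical)
replicates = [triangle_count(40, sample_sbm(blocks, 0.4, 0.02, rng))
              for _ in range(200)]
p_value = sum(r >= obs for r in replicates) / len(replicates)
```

A descriptor whose observed value falls in the tail of the replicate distribution (an extreme `p_value`) signals that the model fails to reproduce that aspect of the data, which is how the paper diagnoses, e.g., poorly captured diameters or mixing times.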
Related papers
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- Vertical Layering of Quantized Neural Networks for Heterogeneous Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
arXiv Detail & Related papers (2022-12-10T15:57:38Z)
- Controllability of Coarsely Measured Networked Linear Dynamical Systems (Extended Version) [19.303541162361746]
We consider the controllability of large-scale linear networked dynamical systems when complete knowledge of network structure is unavailable.
We provide conditions under which average controllability of the fine-scale system can be well approximated by average controllability of the (synthesized, reduced-order) coarse-scale system.
arXiv Detail & Related papers (2022-06-21T17:50:09Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We propose the Enhanced Probabilistic Dense Correspondence Network, PDC-Net+, capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- The emergence of a concept in shallow neural networks [0.0]
We consider restricted Boltzmann machines (RBMs) trained over an unstructured dataset made of blurred copies of definite but unavailable "archetypes".
We show that there exists a critical sample size beyond which the RBM can learn archetypes.
arXiv Detail & Related papers (2021-09-01T15:56:38Z)
- Nondiagonal Mixture of Dirichlet Network Distributions for Analyzing a Stock Ownership Network [3.0969191504482243]
Block modeling is widely used in studies on complex networks.
We provide an edge exchangeable block model that incorporates basic features and simultaneously infers the latent block structure of a given complex network.
Our model is a Bayesian nonparametric model that flexibly estimates the number of blocks and takes into account the possibility of unseen nodes.
arXiv Detail & Related papers (2020-09-08T05:56:10Z)
- Community models for networks observed through edge nominations [6.442024233731203]
Communities are a common and widely studied structure in networks, typically under the assumption that the network is fully and correctly observed.
We propose a general model for a class of network sampling mechanisms based on recording edges via querying nodes.
We show that community detection can be performed by spectral clustering under this general class of models.
arXiv Detail & Related papers (2020-08-09T04:53:13Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Tractably Modelling Dependence in Networks Beyond Exchangeability [0.0]
We study the estimation, clustering and degree behavior of the network in our setting.
This work explores why, and under which general conditions, non-exchangeable network data can be described by a block model.
arXiv Detail & Related papers (2020-07-28T17:13:59Z)
- Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [55.35176938713946]
We develop a deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network.
We propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a downward generative model.
The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
arXiv Detail & Related papers (2020-06-15T22:22:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.