Statistical inference of assortative community structures
- URL: http://arxiv.org/abs/2006.14493v3
- Date: Mon, 29 Jun 2020 07:07:29 GMT
- Title: Statistical inference of assortative community structures
- Authors: Lizhi Zhang and Tiago P. Peixoto
- Abstract summary: We develop a methodology to infer assortative communities in networks based on a nonparametric Bayesian formulation of the planted partition model.
We show that our method is not subject to a resolution limit, and can uncover an arbitrarily large number of communities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We develop a principled methodology to infer assortative communities in
networks based on a nonparametric Bayesian formulation of the planted partition
model. We show that this approach succeeds in finding statistically significant
assortative modules in networks, unlike alternatives such as modularity
maximization, which systematically overfits in both artificial and
empirical examples. In addition, we show that our method is not subject to a
resolution limit, and can uncover an arbitrarily large number of communities,
as long as there is statistical evidence for them. Our formulation is amenable
to model selection procedures, which allow us to compare it to more general
approaches based on the stochastic block model, and in this way reveal whether
assortativity is in fact the dominating large-scale mixing pattern. We perform
this comparison with several empirical networks, and identify numerous cases
where the network's assortativity is exaggerated by traditional community
detection methods, and show how a more faithful degree of assortativity can
be identified.
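The model selection described above amounts to comparing description lengths (negative joint log-likelihoods) between the planted partition model and the unconstrained SBM, with the smaller value indicating the preferred hypothesis. A minimal sketch, assuming the implementation available in recent releases of the authors' graph-tool library (the PPBlockState class):

```python
# Minimal sketch (assumed API: PPBlockState in a recent graph-tool
# release, which implements this paper's method): fit the planted
# partition model and a general SBM, then compare description lengths.
import graph_tool.all as gt

g = gt.collection.data["football"]   # example network bundled with graph-tool

# Nonparametric Bayesian planted partition (purely assortative) fit.
state_pp = gt.minimize_blockmodel_dl(g, state=gt.PPBlockState)

# General degree-corrected stochastic block model fit.
state_sbm = gt.minimize_blockmodel_dl(g)

# entropy() returns the description length -log P(A, b); the difference
# approximates the log posterior odds between the two hypotheses.
dl_pp, dl_sbm = state_pp.entropy(), state_sbm.entropy()
print(f"planted partition: {dl_pp:.1f} nats, general SBM: {dl_sbm:.1f} nats")
print("assortative structure suffices" if dl_pp < dl_sbm else "general SBM preferred")
```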
Related papers
- Multivariate Stochastic Dominance via Optimal Transport and Applications to Models Benchmarking [21.23500484100963]
We introduce a statistic that assesses almost stochastic dominance under the framework of Optimal Transport with a smooth cost.
We also propose a hypothesis testing framework as well as an efficient implementation using the Sinkhorn algorithm.
We showcase our method in comparing and benchmarking Large Language Models that are evaluated on multiple metrics.
arXiv Detail & Related papers (2024-06-10T16:14:50Z)
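The summary above names the Sinkhorn algorithm as the computational workhorse. As a generic illustration (not the paper's statistic), the core Sinkhorn iteration for entropically regularized optimal transport fits in a few lines of NumPy; the cost matrix, marginals, and regularization strength below are illustrative choices:

```python
# Generic Sinkhorn iteration for entropic optimal transport.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Entropic-OT coupling between histograms a and b for cost matrix C."""
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)            # enforce column marginals
        u = a / (K @ v)              # enforce row marginals
    return u[:, None] * K * v[None, :]

# Toy example: transport between two small point clouds in R^2.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(5, 2)), rng.normal(size=(7, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
C /= C.max()                                         # scale to [0, 1] for stability
P = sinkhorn(np.ones(5) / 5, np.ones(7) / 7, C)
print("regularized transport cost:", (P * C).sum())
```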
- A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization [7.378582040635655]
Current deep learning approaches rely on generative models that yield exact sample likelihoods.
This work introduces a method that lifts this restriction, making it possible to employ highly expressive latent variable models.
We experimentally validate our approach in data-free Combinatorial Optimization and demonstrate that our method achieves a new state-of-the-art on a wide range of benchmark problems.
arXiv Detail & Related papers (2024-06-03T17:55:02Z)
- Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models [85.67870425656368]
We introduce a unified causal model specifically designed for multimodal data.
We show that multimodal contrastive representation learning excels at identifying latent coupled variables.
Experiments demonstrate the robustness of our findings, even when the assumptions are violated.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
- Bayesian community detection for networks with covariates [16.230648949593153]
"Community detection" has arguably received the most attention in the scientific community.
We propose a block model with a co-dependent random partition prior.
Our model can learn the number of communities via posterior inference, without assuming it to be known a priori.
arXiv Detail & Related papers (2022-03-04T01:58:35Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
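For background, a PSD model represents a non-negative function as f(x) = phi(x)^T A phi(x) with A positive semidefinite and phi a kernel feature map. The toy 1D sketch below follows the two-step "model, then sample" recipe from the summary, but substitutes plain rejection sampling for the paper's own sampling algorithm; all constants are illustrative:

```python
# Toy 1D PSD model: f(x) = phi(x)^T A phi(x) >= 0 since A is PSD.
import numpy as np

rng = np.random.default_rng(1)
centers = rng.uniform(-3, 3, size=8)                 # Gaussian kernel centers
B = rng.normal(size=(8, 8))
A = B @ B.T                                          # PSD by construction

def phi(x):
    return np.exp(-0.5 * (x - centers) ** 2)         # feature map

def f(x):
    v = phi(x)
    return v @ A @ v                                 # non-negative by PSD-ness

# Step 1 ("model"): normalize f on a grid to obtain a density f/Z.
grid = np.linspace(-6, 6, 2000)
fx = np.array([f(x) for x in grid])
Z = fx.sum() * (grid[1] - grid[0])                   # Riemann-sum normalizer

# Step 2 ("sample"): rejection sampling with a uniform proposal,
# standing in for the paper's more efficient algorithm.
fmax = fx.max()
samples = []
while len(samples) < 1000:
    x = rng.uniform(-6, 6)
    if rng.uniform(0, fmax) < f(x):
        samples.append(x)
print("normalizer Z:", Z, " sample mean:", np.mean(samples))
```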
- Network Estimation by Mixing: Adaptivity and More [2.3478438171452014]
We propose a mixing strategy that leverages available arbitrary models to improve their individual performances.
The proposed method is computationally efficient and almost tuning-free.
We show that the proposed method performs equally well as the oracle estimate when the true model is included as individual candidates.
arXiv Detail & Related papers (2021-06-05T05:17:04Z)
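A generic illustration of the mixing idea: combine several candidate estimates of the edge-probability matrix with convex weights chosen by held-out error. The candidate models and the weighting rule here are simple stand-ins, not the paper's procedure:

```python
# Sketch: convex mixing of candidate network estimators.
import numpy as np

rng = np.random.default_rng(2)
n = 80
z = rng.integers(0, 2, size=n)                       # two planted blocks
P_true = np.where(z[:, None] == z[None, :], 0.3, 0.05)
A = (rng.uniform(size=(n, n)) < P_true).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # symmetric adjacency

# Candidate edge-probability estimates (fit on the full matrix for
# brevity; a careful treatment would refit without held-out pairs):
candidates = [
    np.full((n, n), A.mean()),                       # Erdos-Renyi estimate
    np.outer(A.sum(1), A.sum(1)) / A.sum(),          # Chung-Lu / degree estimate
]

# Weight candidates by squared error on held-out node pairs
# (softmax weighting as a simple stand-in for the paper's rule).
held = np.triu(rng.uniform(size=(n, n)) < 0.1, 1)    # ~10% of pairs
losses = np.array([((A - P)[held] ** 2).mean() for P in candidates])
w = np.exp(-losses / losses.min())
w /= w.sum()                                         # convex weights
P_mix = sum(wi * P for wi, P in zip(w, candidates))
print("weights:", np.round(w, 3), " mixture MSE:", ((P_mix - P_true) ** 2).mean())
```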
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
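A toy version of this "set of good models" analysis, with score thresholds standing in for a real model class; everything below is synthetic and illustrative:

```python
# Sketch: range of group-level disparity over near-optimal models.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
group = rng.integers(0, 2, size=n)                   # binary group membership
score = 0.3 * group + rng.normal(size=n)             # model score, shifted by group
y = (score + 0.8 * rng.normal(size=n) > 0.2).astype(int)

# Candidate "models" are thresholds on the score; the good set holds
# every model within 1% accuracy of the best one.
thresholds = np.linspace(-2, 2, 401)
acc = np.array([((score > t).astype(int) == y).mean() for t in thresholds])
good = acc >= acc.max() - 0.01

# Disparity: gap in positive prediction rates between the two groups.
def disparity(t):
    pred = score > t
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

d = np.array([disparity(t) for t in thresholds[good]])
print(f"attainable disparity over good models: [{d.min():.3f}, {d.max():.3f}]")
```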
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
The resulting low-quality reconstruction is then improved with another deep generative model trained to capture the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Learning more expressive joint distributions in multimodal variational methods [0.17188280334580194]
We introduce a method that improves the representational capacity of multimodal variational methods using normalizing flows.
We demonstrate that the model improves on state-of-the-art multimodal methods based on variational inference on various computer vision tasks.
We also show that learning more powerful approximate joint distributions improves the quality of the generated samples.
arXiv Detail & Related papers (2020-09-08T11:45:27Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.