Robustness modularity in complex networks
- URL: http://arxiv.org/abs/2110.02297v1
- Date: Tue, 5 Oct 2021 19:00:45 GMT
- Title: Robustness modularity in complex networks
- Authors: Filipi N. Silva and Aiiad Albeshri and Vijey Thayananthan and Wadee Alhalabi and Santo Fortunato
- Abstract summary: We propose a new measure based on the concept of robustness.
Robustness modularity is the probability of finding trivial partitions when the structure of the network is randomly perturbed.
Tests on artificial and real graphs reveal that robustness modularity can be used to assess and compare the strength of the community structure of different networks.
- Score: 1.749935196721634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A basic question in network community detection is how modular a given
network is. This is usually addressed by evaluating the quality of partitions
detected in the network. The Girvan-Newman (GN) modularity function is the
standard way to make this assessment, but it has a number of drawbacks. Most
importantly, it is not clearly interpretable, given that the measure can take
relatively large values on partitions of random networks without communities.
Here we propose a new measure based on the concept of robustness: robustness
modularity is the probability of finding trivial partitions when the structure
of the network is randomly perturbed. This concept can be implemented for any clustering
algorithm capable of telling when a group structure is absent. Tests on
artificial and real graphs reveal that robustness modularity can be used to
assess and compare the strength of the community structure of different
networks. We also introduce two other quality functions: modularity difference,
a suitably normalized version of the GN modularity; information modularity, a
measure of distance based on information compression. Both measures are
strongly correlated with robustness modularity, and are promising options as
well.
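The perturbation idea in the abstract can be sketched in a few lines of code. The following is a minimal, illustrative Python implementation of the concept, not the authors' code: it rewires a fraction p of a network's edges and counts how often a clustering pipeline then reports no significant structure (a trivial partition). The significance check, which compares detected modularity against degree-preserving rewired nulls, is a stand-in for "a clustering algorithm capable of telling when a group structure is absent"; the function names, parameters, thresholds, and the use of networkx are all assumptions of this sketch.

```python
# Illustrative sketch of perturbation-based robustness (NOT the paper's code).
import random
import statistics
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def has_significant_structure(G, null_samples=10, z_thresh=2.0, seed=0):
    """True if detected modularity clearly exceeds that of degree-preserving nulls.
    Stand-in for a detector that can report the absence of communities."""
    q_obs = modularity(G, greedy_modularity_communities(G))
    q_null = []
    for s in range(null_samples):
        H = G.copy()
        # degree-preserving randomization of the edges
        nx.double_edge_swap(H, nswap=2 * H.number_of_edges(),
                            max_tries=200 * H.number_of_edges(), seed=seed + s)
        q_null.append(modularity(H, greedy_modularity_communities(H)))
    mu, sigma = statistics.mean(q_null), statistics.pstdev(q_null)
    return q_obs > mu + z_thresh * max(sigma, 1e-12)

def trivial_partition_probability(G, p, trials=20, seed=0):
    """Fraction of perturbed replicas in which no significant structure is found."""
    rng = random.Random(seed)
    trivial = 0
    for t in range(trials):
        H = G.copy()
        edges = list(H.edges())
        nodes = list(H.nodes())
        # replace a fraction p of the edges with uniformly random ones
        for u, v in rng.sample(edges, int(p * len(edges))):
            H.remove_edge(u, v)
            a, b = rng.sample(nodes, 2)
            while H.has_edge(a, b):
                a, b = rng.sample(nodes, 2)
            H.add_edge(a, b)
        if not has_significant_structure(H, seed=seed + 1000 * (t + 1)):
            trivial += 1
    return trivial / trials
```

Sweeping p over [0, 1] and aggregating these probabilities would yield a robustness-style score; the paper's actual estimator, perturbation scheme, and normalization differ from this sketch.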
Related papers
- Breaking Neural Network Scaling Laws with Modularity [8.482423139660153]
We show how the amount of training data required to generalize varies with the intrinsic dimensionality of a task's input.
We then develop a novel learning rule for modular networks to exploit this advantage.
arXiv Detail & Related papers (2024-09-09T16:43:09Z)
- Modularity in Deep Learning: A Survey [0.0]
We review the notion of modularity in deep learning around three axes: data, task, and model.
Data modularity refers to the observation or creation of data groups for various purposes.
Task modularity refers to the decomposition of tasks into sub-tasks.
Model modularity means that the architecture of a neural network system can be decomposed into identifiable modules.
arXiv Detail & Related papers (2023-10-02T12:41:34Z)
- Independent Modular Networks [3.10678167047537]
Monolithic neural networks dismiss the compositional nature of data generation processes.
We propose a modular network architecture that splits the modules into roles.
We also provide regularizations that improve the resiliency of the modular network to the problem of module collapse.
arXiv Detail & Related papers (2023-06-02T07:29:36Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z) - Is a Modular Architecture Enough? [80.32451720642209]
We provide a thorough assessment of common modular architectures, through the lens of simple and known modular data distributions.
We highlight the benefits of modularity and sparsity and reveal insights on the challenges faced while optimizing modular systems.
arXiv Detail & Related papers (2022-06-06T16:12:06Z) - Clustering units in neural networks: upstream vs downstream information [3.222802562733787]
We study modularity of hidden layer representations of feedforward, fully connected networks.
We find two surprising results: first, dropout dramatically increased modularity, while other forms of weight regularization had more modest effects.
This has important implications for representation-learning, as it suggests that finding modular representations that reflect structure in inputs may be a distinct goal from learning modular representations that reflect structure in outputs.
arXiv Detail & Related papers (2022-03-22T15:35:10Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Neural Function Modules (NFM) aims to introduce the same structural capability into deep learning.
Most of the work in the context of feed-forward networks combining top-down and bottom-up feedback is limited to classification problems.
The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models [61.480085460269514]
We propose a framework for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.
We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.
arXiv Detail & Related papers (2020-09-01T23:45:42Z)
- RE-MIMO: Recurrent and Permutation Equivariant Neural MIMO Detection [85.44877328116881]
We present a novel neural network for symbol detection in wireless communication systems.
It is motivated by several important considerations in wireless communication systems.
We compare its performance against existing methods and the results show the ability of our network to efficiently handle a variable number of transmitters.
arXiv Detail & Related papers (2020-06-30T22:43:01Z)
- Obtaining Faithful Interpretations from Compositional Neural Networks [72.41100663462191]
We evaluate the intermediate outputs of NMNs on NLVR2 and DROP datasets.
We find that the intermediate outputs differ from the expected output, illustrating that the network structure does not provide a faithful explanation of model behaviour.
arXiv Detail & Related papers (2020-05-02T06:50:35Z)
- Pruned Neural Networks are Surprisingly Modular [9.184659875364689]
We introduce a measurable notion of modularity for multi-layer perceptrons.
We investigate the modular structure of neural networks trained on datasets of small images.
arXiv Detail & Related papers (2020-03-10T17:51:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.