Learning low-rank latent mesoscale structures in networks
- URL: http://arxiv.org/abs/2102.06984v5
- Date: Thu, 13 Jul 2023 05:42:06 GMT
- Title: Learning low-rank latent mesoscale structures in networks
- Authors: Hanbaek Lyu, Yacoub H. Kureh, Joshua Vendrow, Mason A. Porter
- Abstract summary: We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
- Score: 1.1470070927586016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is common to use networks to encode the architecture of interactions
between entities in complex systems in the physical, biological, social, and
information sciences. To study the large-scale behavior of complex systems, it
is useful to examine mesoscale structures in networks as building blocks that
influence such behavior. We present a new approach for describing low-rank
mesoscale structures in networks, and we illustrate our approach using several
synthetic network models and empirical friendship, collaboration, and
protein-protein interaction (PPI) networks. We find that these networks
possess a relatively small number of 'latent motifs' that together can
successfully approximate most subgraphs of a network at a fixed mesoscale. We
use an algorithm for 'network dictionary learning' (NDL), which combines a
network-sampling method and nonnegative matrix factorization, to learn the
latent motifs of a given network. The ability to encode a network using a set
of latent motifs has a wide variety of applications to network-analysis tasks,
such as comparison, denoising, and edge inference. Additionally, using a new
network denoising and reconstruction (NDR) algorithm, we demonstrate how to
denoise a corrupted network by using only the latent motifs that one learns
directly from the corrupted network.
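To make the NDL pipeline concrete, here is a minimal sketch in Python, assuming a random-walk patch sampler and scikit-learn's NMF; the sampler, patch size, and number of motifs are illustrative placeholders, not the authors' released implementation.

```python
# Simplified sketch of the NDL idea: sample k-node subgraph adjacency
# "patches" along random walks, then factorize the stack of vectorized
# patches with nonnegative matrix factorization (NMF).
import networkx as nx
import numpy as np
from sklearn.decomposition import NMF

def sample_patches(G, k=6, n_samples=500, seed=0):
    """Collect k x k adjacency patches over nodes visited by random walks."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    patches = []
    for _ in range(n_samples):
        cur = nodes[rng.integers(len(nodes))]
        seen = [cur]
        for _ in range(10 * k):                  # walk until k distinct nodes
            nbrs = list(G.neighbors(cur))
            if not nbrs:
                break
            cur = nbrs[rng.integers(len(nbrs))]
            if cur not in seen:
                seen.append(cur)
            if len(seen) == k:
                break
        if len(seen) == k:
            A = nx.to_numpy_array(G, nodelist=seen)  # k x k adjacency patch
            patches.append(A.flatten())
    return np.array(patches)

G = nx.karate_club_graph()
X = sample_patches(G)                            # rows are vectorized patches
model = NMF(n_components=9, init="random", random_state=0, max_iter=500)
W = model.fit_transform(X)                       # nonnegative patch encodings
latent_motifs = model.components_.reshape(-1, 6, 6)  # candidate latent motifs
```

Each row of `model.components_` is a nonnegative k x k pattern; approximating sampled patches by combinations of a few such patterns is what the abstract means by encoding a network with a small set of latent motifs.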
Related papers
- Leveraging advances in machine learning for the robust classification and interpretation of networks [0.0]
Simulation approaches involve selecting a suitable network generative model such as Erdős-Rényi or small-world.
We utilize advances in interpretable machine learning to classify simulated networks by our generative models based on various network attributes.
arXiv Detail & Related papers (2024-03-20T00:24:23Z)
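A minimal sketch of the workflow in the summary above, assuming two stock generative models (Erdős-Rényi and Watts-Strogatz from networkx) and a decision tree as the interpretable classifier; the network attributes are placeholder choices, not the paper's feature set.

```python
# Toy version: simulate networks from two generative models, describe each
# by simple attributes, and train an interpretable classifier.
import networkx as nx
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def attributes(G):
    degs = [d for _, d in G.degree()]
    return [nx.average_clustering(G), np.mean(degs), np.std(degs)]

X, y = [], []
for seed in range(100):
    X.append(attributes(nx.erdos_renyi_graph(50, 0.1, seed=seed)))
    y.append(0)  # label 0: Erdos-Renyi
    X.append(attributes(nx.watts_strogatz_graph(50, 4, 0.1, seed=seed)))
    y.append(1)  # label 1: small-world

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("feature importances:", clf.feature_importances_)  # interpretable
```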
- Symbolic Regression of Dynamic Network Models [0.0]
We introduce a novel formulation of a network generator and a parameter-free fitness function to evaluate the generated network.
We extend this approach by modifying generator semantics to create and retrieve rules for time-varying networks.
The framework was then used on three empirical datasets: subway networks of major cities, regions of street networks, and semantic co-occurrence networks of the literature in Artificial Intelligence.
arXiv Detail & Related papers (2023-12-15T00:34:45Z)
- Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss [0.0]
We introduce a simple, efficient, and clustering-oriented model based on an unsupervised Graph Attention AutoEncoder for community detection in attributed networks.
The proposed model adeptly learns representations from both the network's topology and attribute information, simultaneously addressing dual objectives: reconstruction and community discovery.
arXiv Detail & Related papers (2023-11-21T20:45:55Z)
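The dual objective above can be sketched as a reconstruction loss plus a K-means-style clustering penalty. The toy below omits the attention mechanism and node attributes entirely and is an assumed simplification, not the paper's model.

```python
# Dual objective sketch: embeddings Z must reconstruct the adjacency matrix
# (inner-product decoder) while staying close to K learned centroids.
import torch

torch.manual_seed(0)
n, d, K = 30, 8, 3
A = (torch.rand(n, n) < 0.1).float()
A = ((A + A.T) > 0).float()                      # symmetric 0/1 adjacency

Z = torch.randn(n, d, requires_grad=True)        # node embeddings
C = torch.randn(K, d, requires_grad=True)        # cluster centroids
opt = torch.optim.Adam([Z, C], lr=0.05)

for step in range(200):
    opt.zero_grad()
    A_hat = torch.sigmoid(Z @ Z.T)               # inner-product decoder
    recon = torch.nn.functional.binary_cross_entropy(A_hat, A)
    kmeans = torch.cdist(Z, C).min(dim=1).values.pow(2).mean()
    loss = recon + 0.1 * kmeans                  # reconstruction + clustering
    loss.backward()
    opt.step()

labels = torch.cdist(Z, C).argmin(dim=1)         # community assignments
print(labels)
```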
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures information flowing across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains poorly understood.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
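A toy illustration of the rank behavior discussed above (my construction, not the paper's analysis): track the numerical rank of a random batch's features as it passes through a randomly initialized MLP; at a fixed tolerance, the rank tends to fall with depth.

```python
# Numerical rank of feature matrices across the layers of a random MLP.
import numpy as np

rng = np.random.default_rng(0)

def numerical_rank(M, tol=1e-3):
    """Count singular values above tol times the largest one."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

H = rng.standard_normal((256, 64))               # a batch of random inputs
print("input rank:", numerical_rank(H))
for layer in range(1, 7):
    W = rng.standard_normal((H.shape[1], 64)) / np.sqrt(H.shape[1])
    H = np.tanh(H @ W)                           # linear map + nonlinearity
    print(f"after layer {layer}: numerical rank = {numerical_rank(H)}")
```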
- Characterizing Learning Dynamics of Deep Neural Networks via Complex Networks [1.0869257688521987]
Complex Network Theory (CNT) represents Deep Neural Networks (DNNs) as directed weighted graphs to study them as dynamical systems.
We introduce metrics for nodes/neurons and layers, namely Nodes Strength and Layers Fluctuation.
Our framework distills trends in the learning dynamics and separates low-accuracy from high-accuracy networks.
arXiv Detail & Related papers (2021-10-06T10:03:32Z)
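One common CNT-style definition of node strength is the total absolute weight incident to a neuron; the sketch below uses that assumed definition on a toy two-layer weight stack, with the standard deviation of strengths as a crude stand-in for a layer-level fluctuation metric (the paper's exact formulas may differ).

```python
# Node strength of hidden neurons: incoming plus outgoing absolute weight mass.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 16))   # layer 1 -> layer 2 weights
W2 = rng.standard_normal((16, 4))   # layer 2 -> layer 3 weights

strength = np.abs(W1).sum(axis=0) + np.abs(W2).sum(axis=1)
print("hidden-node strengths:", np.round(strength, 2))

# A rough layer-level fluctuation proxy: spread of strengths within the layer.
print("layer fluctuation (std of strengths):", float(strength.std()))
```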
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
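As a loose illustration of sparse coding itself (not the paper's structure-learning algorithm, and the entropy argument is the paper's, not demonstrated here), scikit-learn's DictionaryLearning produces codes in which each sample activates only a few dictionary atoms:

```python
# Sparse coding with an overcomplete learned dictionary.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))
dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=100,
                        random_state=0)
codes = dl.fit_transform(X)          # sparse codes, shape (200, 32)
print("avg nonzeros per sample:", float((codes != 0).sum(axis=1).mean()))
```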
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Detecting Communities in Heterogeneous Multi-Relational Networks: A Message Passing based Approach [89.19237792558687]
Community structure is a common characteristic of networks, including social networks, biological networks, and computer and information networks.
We propose an efficient message-passing-based algorithm to simultaneously detect communities for all homogeneous networks.
arXiv Detail & Related papers (2020-04-06T17:36:24Z)
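The paper above targets heterogeneous multi-relational networks; as a homogeneous stand-in, label propagation in networkx shows the message-passing flavor of community detection in a few lines:

```python
# Label propagation: nodes repeatedly adopt the most common label among
# their neighbors, a classic message-passing-style community detector.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()
for i, nodes in enumerate(community.label_propagation_communities(G)):
    print(f"community {i}: {sorted(nodes)}")
```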
- Modeling Dynamic Heterogeneous Network for Link Prediction using Hierarchical Attention with Temporal RNN [16.362525151483084]
We propose a novel dynamic heterogeneous network embedding method, termed DyHATR.
It uses hierarchical attention to learn heterogeneous information and incorporates recurrent neural networks with temporal attention to capture evolutionary patterns.
We benchmark our method on four real-world datasets for the task of link prediction.
arXiv Detail & Related papers (2020-04-01T17:16:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.