Network Comparison with Interpretable Contrastive Network Representation Learning
- URL: http://arxiv.org/abs/2005.12419v2
- Date: Tue, 15 Feb 2022 16:13:54 GMT
- Title: Network Comparison with Interpretable Contrastive Network Representation Learning
- Authors: Takanori Fujiwara, Jian Zhao, Francine Chen, Yaoliang Yu, Kwan-Liu Ma
- Abstract summary: We introduce a new analysis approach called contrastive network representation learning (cNRL).
cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another.
We demonstrate the effectiveness of i-cNRL for network comparison with multiple network models and real-world datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identifying unique characteristics in a network through comparison with
another network is an essential network analysis task. For example, with
networks of protein interactions obtained from normal and cancer tissues, we
can discover unique types of interactions in cancer tissues. This analysis task
could be greatly assisted by contrastive learning, which is an emerging
analysis approach to discover salient patterns in one dataset relative to
another. However, existing contrastive learning methods cannot be directly
applied to networks as they are designed only for high-dimensional data
analysis. To address this problem, we introduce a new analysis approach called
contrastive network representation learning (cNRL). By integrating two machine
learning schemes, network representation learning and contrastive learning,
cNRL enables embedding of network nodes into a low-dimensional representation
that reveals the uniqueness of one network compared to another. Within this
approach, we also design a method, named i-cNRL, which offers interpretability
in the learned results, allowing for understanding which specific patterns are
only found in one network. We demonstrate the effectiveness of i-cNRL for
network comparison with multiple network models and real-world datasets.
Furthermore, we compare i-cNRL and other potential cNRL algorithm designs
through quantitative and qualitative evaluations.
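The core idea of cNRL, as described in the abstract, is to chain two steps: a network representation learning step that turns nodes into feature vectors, and a contrastive step that finds low-dimensional directions emphasizing the target network relative to the background network. The sketch below illustrates only the contrastive step with contrastive PCA on pre-extracted node features; the random matrices, sizes, and the `alpha` parameter are illustrative assumptions, not the authors' i-cNRL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical node-feature matrices, assumed to come from some prior
# network representation learning step (e.g., structural features such
# as degree or PageRank). Rows are nodes, columns are features.
X_target = rng.normal(size=(100, 8))      # e.g., cancer-tissue network
X_background = rng.normal(size=(120, 8))  # e.g., normal-tissue network

def contrastive_pca(target, background, alpha=1.0, n_components=2):
    """Project target-network node features onto directions with high
    variance in the target but low variance in the background."""
    t = target - target.mean(axis=0)
    b = background - background.mean(axis=0)
    cov_t = t.T @ t / (len(t) - 1)
    cov_b = b.T @ b / (len(b) - 1)
    # Contrastive covariance: subtracting the scaled background
    # covariance suppresses patterns shared by both networks.
    sigma = cov_t - alpha * cov_b
    eigvals, eigvecs = np.linalg.eigh(sigma)
    # eigh returns eigenvalues in ascending order; keep the largest.
    components = eigvecs[:, ::-1][:, :n_components]
    return t @ components

embedding = contrastive_pca(X_target, X_background)
print(embedding.shape)  # (100, 2)
```

Nodes that land far from the origin in this embedding exhibit structural patterns present in the target network but not in the background, which is the kind of "uniqueness" the paper's comparison task targets.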
Related papers
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Network Representation Learning: From Preprocessing, Feature Extraction to Node Embedding [9.844802841686105]
Network representation learning (NRL) advances the conventional graph mining of social networks, knowledge graphs, and complex biomedical and physics information networks.
This survey paper reviews the design principles and the different node embedding techniques for network representation learning over homogeneous networks.
arXiv Detail & Related papers (2021-10-14T17:46:37Z)
- Characterizing Learning Dynamics of Deep Neural Networks via Complex Networks [1.0869257688521987]
Complex Network Theory (CNT) represents Deep Neural Networks (DNNs) as directed weighted graphs to study them as dynamical systems.
We introduce metrics for nodes/neurons and layers, namely Nodes Strength and Layers Fluctuation.
Our framework distills trends in the learning dynamics and separates low-accuracy from high-accuracy networks.
arXiv Detail & Related papers (2021-10-06T10:03:32Z)
- Interpretable Network Representation Learning with Principal Component Analysis [1.2183405753834557]
We consider the problem of interpretable network representation learning for samples of network-valued data.
We propose the Principal Component Analysis for Networks (PCAN) algorithm to identify statistically meaningful low-dimensional representations of a network sample.
We introduce a fast sampling-based algorithm, sPCAN, which is significantly more computationally efficient than its counterpart while retaining its advantages of interpretability.
arXiv Detail & Related papers (2021-06-27T13:52:49Z)
- What can linearized neural networks actually say about generalization? [67.83999394554621]
In certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization.
We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks.
Our work provides concrete examples of novel deep learning phenomena which can inspire future theoretical research.
arXiv Detail & Related papers (2021-06-12T13:05:11Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
arXiv Detail & Related papers (2021-02-13T18:54:49Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- A Visual Analytics Framework for Contrastive Network Analysis [29.5857145677982]
We design ContraNA, a visual analytics framework for discovering unique characteristics in networks.
ContraNA generates a low-dimensional embedding that reveals the uniqueness of one network when compared to another.
We demonstrate the usefulness of ContraNA with two case studies using real-world datasets.
arXiv Detail & Related papers (2020-08-01T02:18:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.