A Visual Analytics Framework for Contrastive Network Analysis
- URL: http://arxiv.org/abs/2008.00151v2
- Date: Mon, 17 Aug 2020 01:46:51 GMT
- Title: A Visual Analytics Framework for Contrastive Network Analysis
- Authors: Takanori Fujiwara, Jian Zhao, Francine Chen, Kwan-Liu Ma
- Abstract summary: We design ContraNA, a visual analytics framework for discovering unique characteristics in networks.
ContraNA generates a low-dimensional embedding that reveals the uniqueness of one network when compared to another.
We demonstrate the usefulness of ContraNA with two case studies using real-world datasets.
- Score: 29.5857145677982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common network analysis task is comparison of two networks to identify
unique characteristics in one network with respect to the other. For example,
when comparing protein interaction networks derived from normal and cancer
tissues, one essential task is to discover protein-protein interactions unique
to cancer tissues. However, this task is challenging when the networks contain
complex structural (and semantic) relations. To address this problem, we design
ContraNA, a visual analytics framework leveraging both the power of machine
learning for uncovering unique characteristics in networks and also the
effectiveness of visualization for understanding such uniqueness. The basis of
ContraNA is cNRL, which integrates two machine learning schemes, network
representation learning (NRL) and contrastive learning (CL), to generate a
low-dimensional embedding that reveals the uniqueness of one network when
compared to another. ContraNA provides an interactive visualization interface
to help analyze the uniqueness by relating embedding results and network
structures as well as explaining the learned features by cNRL. We demonstrate
the usefulness of ContraNA with two case studies using real-world datasets. We
also evaluate ContraNA through a controlled user study with 12 participants on network
comparison tasks. The results show that participants were able to both
effectively identify unique characteristics from complex networks and interpret
the results obtained from cNRL.
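The abstract describes cNRL as combining network representation learning with contrastive learning so that the embedding highlights what is unique to a target network relative to a background network. As a minimal sketch of the contrastive step only, the snippet below applies a contrastive-PCA-style projection to two precomputed node-feature matrices; it assumes node-level structural features (e.g., degree, clustering coefficient, PageRank) have already been produced by some NRL step, and `alpha` is a hypothetical contrast parameter rather than a value from the paper.

```python
import numpy as np

def contrastive_embedding(X_target, X_background, alpha=1.0, n_components=2):
    """Project target-network node features onto directions with high variance
    in the target network but low variance in the background network."""
    # Center both feature matrices (rows: nodes, columns: structural features).
    Xt = X_target - X_target.mean(axis=0)
    Xb = X_background - X_background.mean(axis=0)

    # Covariance matrices of target and background features.
    Ct = np.cov(Xt, rowvar=False)
    Cb = np.cov(Xb, rowvar=False)

    # Eigendecompose the contrastive covariance C_t - alpha * C_b.
    eigvals, eigvecs = np.linalg.eigh(Ct - alpha * Cb)

    # Keep the directions with the largest contrastive variance.
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return Xt @ top, top  # low-dimensional embedding and loading vectors

# Toy usage with random stand-ins for per-node feature matrices.
rng = np.random.default_rng(0)
X_target = rng.normal(size=(200, 8))
X_background = rng.normal(size=(150, 8))
embedding, loadings = contrastive_embedding(X_target, X_background, alpha=2.0)
print(embedding.shape)  # (200, 2)
```

Directions with large eigenvalues of `C_t - alpha * C_b` vary strongly in the target network but weakly in the background one, which is the kind of "uniqueness" the embedding in ContraNA is meant to surface and explain visually.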
Related papers
- Object-based Probabilistic Similarity Evidence of Sparse Latent Features
from Fully Convolutional Networks [0.0]
Similarity analysis using neural networks has emerged as a powerful technique for understanding and categorizing complex patterns in various domains.
This research explores the utilization of latent information generated by fully convolutional networks (FCNs) in similarity analysis.
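As a loose, hypothetical illustration of comparing FCN latent features (not the paper's object-based probabilistic evidence model), the sketch below flattens per-input latent feature maps and scores pairwise cosine similarity; the `latents` array stands in for the output of some FCN encoder.

```python
import numpy as np

def cosine_similarity_matrix(latents):
    """Pairwise cosine similarity between flattened latent feature maps."""
    flat = latents.reshape(latents.shape[0], -1).astype(float)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12
    return flat @ flat.T

# Hypothetical latent maps for 5 inputs, each of shape (channels, H, W).
latents = np.random.default_rng(1).normal(size=(5, 16, 8, 8))
print(np.round(cosine_similarity_matrix(latents), 2))
```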
arXiv Detail & Related papers (2023-07-25T16:15:29Z)
- Visual Analytics of Multivariate Networks with Representation Learning and Composite Variable Construction [19.265502727154473]
This paper presents a visual analytics workflow for studying multivariate networks.
It consists of a neural-network-based learning phase to classify the data, a dimensionality reduction and optimization phase, and an interpreting phase conducted by the user.
A key part of our design is a composite variable construction step that remodels nonlinear features obtained by neural networks into linear features that are intuitive to interpret.
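A heavily simplified, hypothetical sketch of the composite-variable idea: fit a linear combination of the original variables to a nonlinear feature produced by a neural network, so the feature can be read off as per-variable weights. The paper's actual construction and optimization phase is more elaborate than this least-squares stand-in.

```python
import numpy as np

def linear_composite(X, nonlinear_feature):
    """Fit weights w so that X @ w approximates a nonlinear feature,
    yielding an interpretable composite variable over the original columns."""
    w, *_ = np.linalg.lstsq(X, nonlinear_feature, rcond=None)
    return w, X @ w

# Hypothetical multivariate node attributes and a nonlinear feature that a
# neural network might have derived from them.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
feature = np.tanh(X[:, 0] + 0.5 * X[:, 1] * X[:, 2])
weights, composite = linear_composite(X, feature)
print(weights)  # per-variable contribution to the composite variable
```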
arXiv Detail & Related papers (2023-03-16T18:31:18Z)
- Privacy-Preserving Representation Learning for Text-Attributed Networks
with Simplicial Complexes [24.82096971322501]
I will study learning network representations with text attributes for simplicial complexes (RT4SC) via simplicial neural networks (SNNs).
I will conduct research on two potential attacks on the representation outputs from SNNs.
I will study a privacy-preserving deterministic differentially private alternating direction method of multipliers to learn secure representation outputs from SNNs.
arXiv Detail & Related papers (2023-02-09T00:32:06Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Learning distinct features helps, provably [98.78384185493624]
We study the diversity of the features learned by a two-layer neural network trained with the least squares loss.
We measure the diversity by the average $L_2$-distance between the hidden-layer features.
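A minimal sketch of that diversity measure, assuming `H` holds one hidden-layer feature vector per row (the exact normalization used in the paper may differ):

```python
import numpy as np

def average_pairwise_l2(H):
    """Average L2 distance over all distinct pairs of rows of H."""
    n = H.shape[0]
    dists = [np.linalg.norm(H[i] - H[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

# Hypothetical hidden-layer features (one row per hidden unit).
H = np.random.default_rng(3).normal(size=(10, 64))
print(average_pairwise_l2(H))
```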
arXiv Detail & Related papers (2021-06-10T19:14:45Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
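To make the instance-pair idea concrete, here is a deliberately crude, hypothetical stand-in: each positive pair contrasts a node's attributes with its own neighborhood, each negative pair with a random node's neighborhood, and plain cosine agreement replaces the trained graph-neural-network discriminator. It only illustrates the sampling and scoring structure, not the proposed model.

```python
import numpy as np

def anomaly_scores(adj, attrs, rounds=10, rng=None):
    """Higher score = node agrees less with its own neighborhood than with
    random neighborhoods (a crude stand-in for the learned contrastive model)."""
    rng = rng or np.random.default_rng(0)
    n = adj.shape[0]

    def agreement(node, center):
        neigh = np.flatnonzero(adj[center])
        if neigh.size == 0:
            return 0.0
        ctx = attrs[neigh].mean(axis=0)
        a, b = attrs[node], ctx
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = np.zeros(n)
    for _ in range(rounds):
        for v in range(n):
            pos = agreement(v, v)                # node vs. its own subgraph
            neg = agreement(v, rng.integers(n))  # node vs. a random subgraph
            scores[v] += (neg - pos) / rounds
    return scores

# Toy attributed network: random symmetric adjacency and node attributes.
rng = np.random.default_rng(4)
adj = (rng.random((30, 30)) < 0.1).astype(int)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0)
attrs = rng.normal(size=(30, 8))
print(np.round(anomaly_scores(adj, attrs, rng=rng), 2))
```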
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
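A rough, hypothetical sketch of the latent-motif idea: sample small induced-subgraph adjacency patches from a network, flatten them, and factorize the patch matrix with nonnegative matrix factorization so that each dictionary atom reads as a k-by-k motif. The paper's walk-based sampling and its denoising procedure are more sophisticated than this.

```python
import numpy as np
from sklearn.decomposition import NMF

def learn_latent_motifs(adj, k=5, n_patches=500, n_motifs=9, seed=0):
    """Sample k-node induced subgraphs, flatten their adjacency patches,
    and learn a nonnegative dictionary whose atoms act as latent motifs."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    patches = []
    for _ in range(n_patches):
        nodes = rng.choice(n, size=k, replace=False)
        patches.append(adj[np.ix_(nodes, nodes)].reshape(-1))
    X = np.array(patches, dtype=float)          # (n_patches, k*k)
    model = NMF(n_components=n_motifs, init="nndsvda", max_iter=500)
    codes = model.fit_transform(X)              # per-patch activations
    motifs = model.components_.reshape(n_motifs, k, k)
    return motifs, codes

# Toy network: a ring lattice where each node connects to its 2 nearest neighbors.
n = 100
adj = np.zeros((n, n))
for i in range(n):
    for d in (1, 2):
        adj[i, (i + d) % n] = adj[(i + d) % n, i] = 1.0
motifs, codes = learn_latent_motifs(adj)
print(motifs.shape)  # (9, 5, 5)
```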
arXiv Detail & Related papers (2021-02-13T18:54:49Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges, which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
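Under simplifying assumptions, the learnable-connectivity idea can be sketched as a set of trainable edge logits over the complete graph of computational blocks, optimized jointly with the block weights. This toy PyTorch module only shows that connectivity becomes differentiable; it is not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    """A chain of linear blocks whose inter-block connections are gated by
    learnable edge parameters."""
    def __init__(self, dim=32, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_blocks))
        # edge_logits[j, i] gates the connection from the i-th earlier output
        # (the input or an earlier block) into block j.
        self.edge_logits = nn.Parameter(torch.zeros(n_blocks, n_blocks))

    def forward(self, x):
        outputs = [x]
        for j, block in enumerate(self.blocks):
            # Sigmoid gates reflect the magnitude of each incoming connection.
            gates = torch.sigmoid(self.edge_logits[j, : len(outputs)])
            agg = sum(g * h for g, h in zip(gates, outputs))
            outputs.append(torch.relu(block(agg)))
        return outputs[-1]

model = LearnableConnectivity()
y = model(torch.randn(8, 32))
loss = y.pow(2).mean()
loss.backward()  # gradients flow into edge_logits, so connectivity is learned
print(model.edge_logits.grad.abs().sum() > 0)
```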
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Network Comparison with Interpretable Contrastive Network Representation
Learning [44.145644586950574]
We introduce a new analysis approach called contrastive network representation learning (cNRL).
cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another.
We demonstrate the effectiveness of i-cNRL for network comparison with multiple network models and real-world datasets.
arXiv Detail & Related papers (2020-05-25T21:46:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.