Interpretable Network Representation Learning with Principal Component
Analysis
- URL: http://arxiv.org/abs/2106.14238v1
- Date: Sun, 27 Jun 2021 13:52:49 GMT
- Authors: James D. Wilson, Jihui Lee
- Abstract summary: We consider the problem of interpretable network representation learning for samples of network-valued data.
We propose the Principal Component Analysis for Networks (PCAN) algorithm to identify statistically meaningful low-dimensional representations of a network sample.
We introduce a fast sampling-based algorithm, sPCAN, which is significantly more computationally efficient than PCAN while retaining its interpretability advantages.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of interpretable network representation learning for
samples of network-valued data. We propose the Principal Component Analysis for
Networks (PCAN) algorithm to identify statistically meaningful low-dimensional
representations of a network sample via subgraph count statistics. The PCAN
procedure provides an interpretable framework for which one can readily
visualize, explore, and formulate predictive models for network samples. We
furthermore introduce a fast sampling-based algorithm, sPCAN, which is
significantly more computationally efficient than its counterpart while
retaining its interpretability advantages. We investigate the relationship between
these two methods and analyze their large-sample properties under the common
regime where the sample of networks is a collection of kernel-based random
graphs. We show that under this regime, the embeddings of the sPCAN method
enjoy a central limit theorem and moreover that the population level embeddings
of PCAN and sPCAN are equivalent. We assess PCAN's ability to visualize,
cluster, and classify observations in network samples arising in nature,
including functional connectivity network samples and dynamic networks
describing the political co-voting habits of the U.S. Senate. Our analyses
reveal that our proposed algorithm provides informative and discriminatory
features describing the networks in each sample. The PCAN and sPCAN methods
build on the current literature of network representation learning and set the
stage for a new line of research in interpretable learning on network-valued
data. Publicly available software for the PCAN and sPCAN methods is available
at https://www.github.com/jihuilee/.
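As a rough illustration of the idea behind PCAN, the sketch below embeds a sample of graphs by computing a few subgraph count statistics per graph and applying PCA to the resulting feature matrix. The specific statistics chosen here (edges, triangles, 2-stars) and the Erdős–Rényi test sample are illustrative assumptions, not the published PCAN algorithm.

```python
import numpy as np

def subgraph_count_features(A):
    """A few small subgraph counts from a symmetric 0/1 adjacency matrix A.
    The statistics below (edges, triangles, 2-stars) are illustrative choices,
    not necessarily those used by PCAN."""
    deg = A.sum(axis=1)
    edges = A.sum() / 2
    triangles = np.trace(A @ A @ A) / 6       # each triangle counted 6 times in tr(A^3)
    two_stars = (deg * (deg - 1) / 2).sum()   # paths of length 2 centered at each node
    return np.array([edges, triangles, two_stars])

def pca_embed(sample, k=2):
    """Embed a sample of graphs via PCA on their subgraph-count vectors."""
    X = np.vstack([subgraph_count_features(A) for A in sample])
    X = X - X.mean(axis=0)                    # center the feature matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                       # scores on the first k components

rng = np.random.default_rng(0)

def er_graph(n, p):
    """Illustrative Erdos-Renyi G(n, p) adjacency matrix."""
    upper = np.triu(rng.random((n, n)) < p, k=1)  # keep strict upper triangle
    A = upper.astype(float)
    return A + A.T

# Two groups of random graphs with different densities; their subgraph-count
# embeddings should separate along the leading principal component.
sample = [er_graph(30, p) for p in [0.1] * 10 + [0.4] * 10]
Z = pca_embed(sample, k=2)
print(Z.shape)  # prints (20, 2)
```

In this toy setting the dense and sparse graphs separate cleanly in the first component, since all three count statistics grow with edge density; PCAN's appeal is that each component remains a linear combination of named subgraph counts, so the axes of the embedding stay interpretable.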
Related papers
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based
Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- k* Distribution: Evaluating the Latent Space of Deep Neural Networks
using Local Neighborhood Analysis [8.701566919381225]
Dimensionality reduction techniques such as t-SNE or UMAP tend to distort the structure of sample distributions within specific classes in the subset of the latent space.
We introduce the k* Distribution methodology, which focuses on capturing the characteristics and structure of sample distributions for individual classes.
Our study reveals three distinct distributions of samples within the learned latent space subset: a) Fractured, b) Overlapped, and c) Clustered.
arXiv Detail & Related papers (2023-12-07T03:42:48Z)
- Dense Hebbian neural networks: a replica symmetric picture of supervised
learning [4.133728123207142]
We consider dense associative neural networks trained with supervision by a teacher.
We investigate their computational capabilities analytically, via statistical-mechanics of spin glasses, and numerically, via Monte Carlo simulations.
arXiv Detail & Related papers (2022-11-25T13:37:47Z)
- Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z)
- ADMM-DAD net: a deep unfolding network for analysis compressed sensing [20.88999913266683]
We propose a new deep unfolding neural network based on the ADMM algorithm for analysis Compressed Sensing.
The proposed network jointly learns a redundant analysis operator for sparsification and reconstructs the signal of interest.
arXiv Detail & Related papers (2021-10-13T18:56:59Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Deep Archimedean Copulas [98.96141706464425]
ACNet is a novel differentiable neural network architecture that enforces structural properties.
We show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
arXiv Detail & Related papers (2020-12-05T22:58:37Z)
- SNoRe: Scalable Unsupervised Learning of Symbolic Node Representations [0.0]
The proposed SNoRe algorithm is capable of learning symbolic, human-understandable representations of individual network nodes.
SNoRe's interpretable features are suitable for direct explanation of individual predictions.
The vectorized implementation of SNoRe scales to large networks, making it suitable for contemporary network learning and analysis tasks.
arXiv Detail & Related papers (2020-09-08T08:13:21Z)
- Region Comparison Network for Interpretable Few-shot Image
Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Network Comparison with Interpretable Contrastive Network Representation
Learning [44.145644586950574]
We introduce a new analysis approach called contrastive network representation learning (cNRL).
cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another.
We demonstrate the effectiveness of i-cNRL for network comparison with multiple network models and real-world datasets.
arXiv Detail & Related papers (2020-05-25T21:46:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.