A Coupled CP Decomposition for Principal Components Analysis of
Symmetric Networks
- URL: http://arxiv.org/abs/2202.04719v1
- Date: Wed, 9 Feb 2022 20:52:19 GMT
- Title: A Coupled CP Decomposition for Principal Components Analysis of
Symmetric Networks
- Authors: Michael Weylandt and George Michailidis
- Abstract summary: We propose a principal components analysis (PCA) framework for sequence network data.
We derive efficient algorithms for computing our proposed "Coupled CP" decomposition.
We demonstrate the effectiveness of our proposal on simulated data and on examples from political science and financial economics.
- Score: 11.988825533369686
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In a number of application domains, one observes a sequence of network data;
for example, repeated measurements of interactions between users on social media
platforms, financial correlation networks over time, or across subjects, as in
multi-subject studies of brain connectivity. One way to analyze such data is by
stacking networks into a third-order array or tensor. We propose a principal
components analysis (PCA) framework for sequence network data, based on a novel
decomposition for semi-symmetric tensors. We derive efficient algorithms for
computing our proposed "Coupled CP" decomposition and establish estimation
consistency of our approach under an analogue of the spiked covariance model
with rates the same as the matrix case up to a logarithmic term. Our framework
inherits many of the strengths of classical PCA and is suitable for a wide
range of unsupervised learning tasks, including identifying principal networks,
isolating meaningful changepoints or outliers across observations, and for
characterizing the "variability network" of the most varying edges. Finally, we
demonstrate the effectiveness of our proposal on simulated data and on examples
from political science and financial economics. The proof techniques used to
establish our main consistency results are surprisingly straight-forward and
may find use in a variety of other matrix and tensor decomposition problems.
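The abstract describes stacking the observed networks into a semi-symmetric third-order tensor and fitting a CP-style decomposition whose loading vectors are shared across slices. As a rough illustration only (this is not the authors' algorithm; the function name coupled_cp, the plain alternating-least-squares updates, and the normalization are all assumptions), the sketch below fits the coupled model T_k ≈ V diag(w_k) V^T to a stack of symmetric networks:

```python
import numpy as np

def coupled_cp(T, rank, n_iter=200, seed=0):
    """Generic ALS sketch of the coupled model T_k ~= V @ diag(W[k]) @ V.T.

    T : (N, n, n) array stacking N symmetric networks on n common nodes.
    Returns V (n, rank), shared loading vectors, and W (N, rank), per-network weights.
    """
    rng = np.random.default_rng(seed)
    N, n, _ = T.shape
    V = rng.standard_normal((n, rank))
    W = rng.standard_normal((N, rank))
    for _ in range(n_iter):
        # Update the shared loadings V: regress the mode-1 unfolding of T on the
        # Khatri-Rao product of W and the current V (a one-sided update that
        # treats the symmetry constraint only implicitly).
        kr = np.einsum('kr,jr->kjr', W, V).reshape(N * n, rank)
        unfold = T.transpose(1, 0, 2).reshape(n, N * n)
        Vt, *_ = np.linalg.lstsq(kr, unfold.T, rcond=None)
        V = Vt.T / np.linalg.norm(Vt.T, axis=0)  # normalize each loading vector
        # Update the per-network weights W: regress each slice T_k on the
        # rank-1 basis matrices v_r v_r^T.
        B = np.einsum('ir,jr->ijr', V, V).reshape(n * n, rank)
        Wt, *_ = np.linalg.lstsq(B, T.reshape(N, n * n).T, rcond=None)
        W = Wt.T
    return V, W

# Toy usage: stack 8 correlation networks on 20 nodes and extract 3 "principal networks".
nets = np.stack([np.corrcoef(np.random.randn(20, 50)) for _ in range(8)])
V, W = coupled_cp(nets, rank=3)
```

In this reading, the columns of V play the role of principal-network loadings and the rows of W give per-observation scores, so downstream tasks such as changepoint or outlier screening can be carried out on W.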
Related papers
- Inferring Dynamic Networks from Marginals with Iterative Proportional Fitting [57.487936697747024]
A common network inference problem, arising from real-world data constraints, is how to infer a dynamic network from its time-aggregated adjacency matrix.
We introduce a principled algorithm that guarantees IPF converges under minimal changes to the network structure (a generic IPF update is sketched after this list).
arXiv Detail & Related papers (2024-02-28T20:24:56Z) - Interacting Particle Systems on Networks: joint inference of the network
and the interaction kernel [8.535430501710712]
We jointly infer the weight matrix of the network and the interaction kernel, which determines the rules of the interactions between agents.
We use two algorithms; one is a new algorithm based on operator regression with alternating least squares.
Both algorithms are scalable, and we provide conditions guaranteeing identifiability and well-posedness.
arXiv Detail & Related papers (2024-02-13T12:29:38Z) - Fundamental limits of community detection from multi-view data:
multi-layer, dynamic and partially labeled block models [7.778975741303385]
We study community detection in multi-view data arising in modern network analysis.
We characterize the mutual information between the data and the latent parameters.
We introduce iterative algorithms based on Approximate Message Passing for community detection.
arXiv Detail & Related papers (2024-01-16T07:13:32Z) - Leveraging a Probabilistic PCA Model to Understand the Multivariate
Statistical Network Monitoring Framework for Network Security Anomaly
Detection [64.1680666036655]
We revisit anomaly detection techniques based on PCA from a probabilistic generative model point of view.
We have evaluated the mathematical model using two different datasets.
arXiv Detail & Related papers (2023-02-02T13:41:18Z) - Dense Hebbian neural networks: a replica symmetric picture of
unsupervised learning [4.133728123207142]
We consider dense associative neural networks trained without supervision.
We investigate their computational capabilities analytically, via a statistical-mechanics approach, and numerically, via Monte Carlo simulations.
arXiv Detail & Related papers (2022-11-25T12:40:06Z) - coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs.
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
arXiv Detail & Related papers (2022-05-31T15:04:43Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Tensor-Train Networks for Learning Predictive Modeling of
Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks with the aim of obtaining a powerful yet compact representation.
An algorithm based on alternating least squares has been proposed for approximating the weights in TT-format with a reduction in computational cost.
arXiv Detail & Related papers (2021-01-22T16:14:38Z) - A Multi-Semantic Metapath Model for Large Scale Heterogeneous Network
Representation Learning [52.83948119677194]
We propose a multi-semantic metapath (MSM) model for large-scale heterogeneous network representation learning.
Specifically, we generate multi-semantic metapath-based random walks to construct the heterogeneous neighborhood to handle the unbalanced distributions.
We conduct systematic evaluations of the proposed framework on two challenging datasets: Amazon and Alibaba.
arXiv Detail & Related papers (2020-07-19T22:50:20Z)
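The entry above on inferring dynamic networks from marginals relies on iterative proportional fitting (IPF). As a minimal, generic sketch of the classical IPF scaling update (not that paper's dynamic-network algorithm; the function name ipf and the toy marginals are illustrative assumptions):

```python
import numpy as np

def ipf(seed, row_targets, col_targets, n_iter=1000, tol=1e-10):
    """Plain iterative proportional fitting (RAS / Sinkhorn scaling).

    Rescales a strictly positive seed matrix until its row and column sums
    match the target arrays; row_targets and col_targets must share the same total.
    """
    X = np.array(seed, dtype=float)
    for _ in range(n_iter):
        X *= (row_targets / X.sum(axis=1))[:, None]   # match row sums
        X *= (col_targets / X.sum(axis=0))[None, :]   # match column sums
        if np.allclose(X.sum(axis=1), row_targets, atol=tol):
            break
    return X

# Toy usage: spread a time-aggregated adjacency matrix's mass to given marginals.
agg = np.ones((4, 4))
print(ipf(agg, row_targets=np.array([1., 2., 3., 4.]),
          col_targets=np.array([4., 3., 2., 1.])))
```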