The impossibility of low rank representations for triangle-rich complex
networks
- URL: http://arxiv.org/abs/2003.12635v1
- Date: Fri, 27 Mar 2020 20:57:56 GMT
- Title: The impossibility of low rank representations for triangle-rich complex
networks
- Authors: C. Seshadhri and Aneesh Sharma and Andrew Stolman and Ashish Goel
- Abstract summary: We argue that such graph embeddings do not capture salient properties of complex networks.
We mathematically prove that any embedding that can successfully create these two properties must have rank nearly linear in the number of vertices.
Among other implications, this establishes that popular embedding techniques such as Singular Value Decomposition and node2vec fail to capture significant structural aspects of real-world complex networks.
- Score: 9.550745725703292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study of complex networks is a significant development in modern science,
and has enriched the social sciences, biology, physics, and computer science.
Models and algorithms for such networks are pervasive in our society, and
impact human behavior via social networks, search engines, and recommender
systems to name a few. A widely used algorithmic technique for modeling such
complex networks is to construct a low-dimensional Euclidean embedding of the
vertices of the network, where proximity of vertices is interpreted as the
likelihood of an edge. Contrary to the common view, we argue that such graph
embeddings do not capture salient properties of complex networks. The two
properties we focus on are low degree and large clustering coefficients, which
have been widely established to be empirically true for real-world networks. We
mathematically prove that any embedding (that uses dot products to measure
similarity) that can successfully create these two properties must have rank
nearly linear in the number of vertices. Among other implications, this
establishes that popular embedding techniques such as Singular Value
Decomposition and node2vec fail to capture significant structural aspects of
real-world complex networks. Furthermore, we empirically study a number of
different embedding techniques based on dot product, and show that they all
fail to capture the triangle structure.
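As a small illustration of the paper's claim (this example is not from the paper itself), the sketch below builds a graph that is maximally triangle-rich with low degree (disjoint triangles), takes its best rank-k approximation, and measures how much triangle mass survives via trace(A^3)/6, which counts triangles exactly for a simple graph and serves as a real-valued proxy for the low-rank surrogate. The function names and the choice of test graph are illustrative assumptions.

```python
import numpy as np

def triangle_graph(n_tri):
    # Disjoint triangles: 3*n_tri nodes, degree 2 everywhere, clustering 1.
    N = 3 * n_tri
    A = np.zeros((N, N))
    block = np.ones((3, 3)) - np.eye(3)
    for t in range(n_tri):
        A[3*t:3*t+3, 3*t:3*t+3] = block
    return A

def triangle_mass_rank_k(A, k):
    # Best rank-k approximation via eigendecomposition (A is symmetric),
    # keeping the k eigenvalues largest in absolute value, then the
    # triangle count proxy trace(A_k^3) / 6.
    w, V = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(w))[:k]
    Ak = (V[:, idx] * w[idx]) @ V[:, idx].T
    return np.trace(Ak @ Ak @ Ak) / 6.0

A = triangle_graph(50)                       # 150 nodes, 50 triangles
full = triangle_mass_rank_k(A, A.shape[0])   # full rank: recovers all 50
low = triangle_mass_rank_k(A, 10)            # rank 10: most triangles lost
```

Consistent with the theorem's spirit, the surviving triangle mass scales with the rank k rather than with the number of triangles, so any rank far below the number of vertices discards most of the triangle structure.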
Related papers
- Sifting out communities in large sparse networks [2.666294200266662]
We introduce an intuitive objective function for quantifying the quality of clustering results in large sparse networks.
We utilize a two-step method for identifying communities which is especially well-suited for this domain.
We identify complex genetic interactions in large-scale networks comprised of tens of thousands of nodes.
arXiv Detail & Related papers (2024-05-01T18:57:41Z)
- Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z)
- Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss [0.0]
We introduce a simple, efficient, and clustering-oriented model based on an unsupervised Graph Attention AutoEncoder for community detection in attributed networks.
The proposed model adeptly learns representations from both the network's topology and attribute information, simultaneously addressing dual objectives: reconstruction and community discovery.
arXiv Detail & Related papers (2023-11-21T20:45:55Z)
- HyperS2V: A Framework for Structural Representation of Nodes in Hyper Networks [8.391883728680439]
Hyper networks possess the ability to depict more complex relationships among nodes and store extensive information.
This research introduces HyperS2V, a node embedding approach that centers on the structural similarity within hyper networks.
arXiv Detail & Related papers (2023-11-07T17:26:31Z)
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
- Data Topology-Dependent Upper Bounds of Neural Network Widths [52.58441144171022]
We first show that a three-layer neural network can be designed to approximate an indicator function over a compact set.
This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure.
We prove the universal approximation property of three-layer ReLU networks using our topological approach.
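In the spirit of that construction (though not the paper's exact architecture), a small ReLU network can represent the indicator of an axis-aligned box, the simplest compact set: one hidden layer of per-coordinate ramps followed by a final ReLU that takes the AND of the coordinates. The function names and the margin parameter `eps` are illustrative assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def ramp(t):
    # Piecewise-linear step built from two ReLUs: 0 for t <= -1, 1 for t >= 0.
    return relu(t + 1.0) - relu(t)

def box_indicator(x, lo, hi, eps=0.05):
    # ReLU network approximating the indicator of prod_i [lo_i, hi_i]:
    # exactly 1 at points inside with margin eps, 0 outside the box.
    x, lo, hi = map(np.asarray, (x, lo, hi))
    # Hidden layer: per-coordinate interval indicators (4 ReLUs per coordinate).
    per_coord = ramp((x - lo) / eps) - ramp((x - hi) / eps)
    # Output layer: AND of all coordinates with one more ReLU.
    return relu(per_coord.sum() - (len(lo) - 1))
```

For example, `box_indicator([0.5, 0.5], [0, 0], [1, 1])` returns 1.0, while a point outside the unit box returns 0.0; the output only ramps between 0 and 1 inside the eps-wide margin around the boundary.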
arXiv Detail & Related papers (2023-05-25T14:17:15Z)
- Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z)
- Low-Rank Representations Towards Classification Problem of Complex Networks [0.0]
Complex networks representing social interactions, brain activities, molecular structures have been studied widely to be able to understand and predict their characteristics as graphs.
Models and algorithms for these networks are used in real-life applications, such as search engines, and recommender systems.
We study the performance of such low-rank representations of real-life networks on a network classification problem.
arXiv Detail & Related papers (2022-10-20T19:56:18Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
Rank of neural networks measures information flowing across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear.
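The basic linear mechanism behind rank diminishing is the inequality rank(AB) <= min(rank(A), rank(B)): once features pass through a narrow layer, no later linear layer can restore the lost rank. The sketch below (an illustration, not from the paper, which studies the harder nonlinear case where activations such as ReLU can partially re-expand rank) tracks the rank of feature matrices through linear layers with a 4-unit bottleneck; the widths and seed are arbitrary assumptions.

```python
import numpy as np

# Features of 200 random inputs through linear layers 64 -> 32 -> 4 -> 32.
# rank(AB) <= min(rank(A), rank(B)), so after the 4-unit bottleneck the
# wider final layer cannot restore the rank.
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 64))
ranks = []
for width in (32, 4, 32):
    H = H @ rng.standard_normal((H.shape[1], width))
    ranks.append(np.linalg.matrix_rank(H))
```

Generically `ranks` comes out as [32, 4, 4]: the rank tracks the narrowest layer seen so far, which is the information-flow reading of rank in the summary above.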
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Node Embeddings and Exact Low-Rank Representations of Complex Networks [30.869784223109832]
Recent work by Seshadhri et al. suggests that such embeddings cannot capture local structure arising in complex networks.
We show that the results of Seshadhri et al. are intimately connected to the model they use rather than the low-dimensional structure of complex networks.
arXiv Detail & Related papers (2020-06-10T01:09:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.