Low-Rank Representations Towards Classification Problem of Complex
Networks
- URL: http://arxiv.org/abs/2210.11561v1
- Date: Thu, 20 Oct 2022 19:56:18 GMT
- Title: Low-Rank Representations Towards Classification Problem of Complex
Networks
- Authors: Murat Çelik, Ali Baran Taşdemir, Lale Özkahya
- Abstract summary: Complex networks representing social interactions, brain activities, and molecular structures have been widely studied to understand and predict their characteristics as graphs.
Models and algorithms for these networks are used in real-life applications, such as search engines and recommender systems.
We study the performance of such low-rank representations of real-life networks on a network classification problem.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex networks representing social interactions, brain activities, and
molecular structures have been widely studied to understand and predict their
characteristics as graphs. Models and algorithms for these networks are used in
real-life applications, such as search engines and recommender systems. In
general, such networks are modelled by constructing a low-dimensional Euclidean
embedding of the vertices of the network, where proximity of the vertices in
the Euclidean space hints at the likelihood of an edge (link). In this work, we
study the performance of such low-rank representations of real-life networks on
a network classification problem.
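As a concrete reference point for the embedding construction described above (the abstract does not fix a particular method, so the spectral choice here is an assumption), the sketch below builds a rank-d representation from a truncated eigendecomposition of the adjacency matrix and scores vertex pairs by inner product:

```python
import numpy as np

# Toy undirected graph: two loose clusters joined by one edge.
A = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Rank-d embedding from the truncated eigendecomposition of A.
d = 2
vals, vecs = np.linalg.eigh(A)                 # eigenvalues in ascending order
top = np.argsort(np.abs(vals))[::-1][:d]       # d largest in magnitude
X = vecs[:, top] * np.sqrt(np.abs(vals[top]))  # one row per vertex

# Proximity (here: inner product) as a proxy for edge likelihood.
scores = X @ X.T
print("score for existing edge (0,1):", scores[0, 1])
print("score for non-edge (0,5):    ", scores[0, 5])
```

Vertex representations of this kind can then be pooled per graph and handed to any off-the-shelf classifier for a network classification task.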
Related papers
- Topological Neural Networks: Mitigating the Bottlenecks of Graph Neural
Networks via Higher-Order Interactions [1.994307489466967]
This work starts with a theoretical framework that reveals the impact of a network's width, depth, and graph topology on over-squashing in message-passing neural networks.
The work then moves to higher-order interactions and multi-relational inductive biases via Topological Neural Networks.
Inspired by Graph Attention Networks, two topological attention networks are proposed: Simplicial and Cell Attention Networks.
arXiv Detail & Related papers (2024-02-10T08:26:06Z)
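The simplicial and cell attention networks above attend over higher-order cells; as a simpler point of reference, here is a minimal sketch of the ordinary GAT-style attention update that inspired them (a single head on a toy graph; all shapes and names are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, f = 5, 4                        # vertices, feature dimension
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)           # self-loops, as in GAT
H = rng.standard_normal((n, f))    # node features
W = rng.standard_normal((f, f))    # shared linear transform
a = rng.standard_normal(2 * f)     # attention vector

Z = H @ W
# Unnormalised attention logits e_ij = LeakyReLU(a^T [z_i || z_j]).
logits = np.array([[np.concatenate([Z[i], Z[j]]) @ a for j in range(n)]
                   for i in range(n)])
logits = np.where(logits > 0, logits, 0.2 * logits)   # LeakyReLU
logits = np.where(A > 0, logits, -np.inf)             # mask non-edges
alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
alpha = alpha / alpha.sum(axis=1, keepdims=True)      # softmax over neighbours
H_out = alpha @ Z                                     # attention-weighted update
print(H_out.shape)  # (5, 4)
```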
- Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z)
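A toy illustration of the encapsulation idea (this is not the paper's algorithm; the data and dimensions below are invented): wrap each class of a 2-D point cloud in its convex polytope and count the faces needed:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
# Two synthetic 2-D classes standing in for a real dataset.
class_a = rng.standard_normal((50, 2)) + np.array([3.0, 0.0])
class_b = rng.standard_normal((50, 2)) - np.array([3.0, 0.0])

for name, pts in [("class A", class_a), ("class B", class_b)]:
    hull = ConvexHull(pts)
    # In 2-D each "face" of the enclosing polytope is an edge.
    print(f"{name}: {len(hull.simplices)} faces enclose {len(pts)} points")
```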
- Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss [0.0]
We introduce a simple, efficient, and clustering-oriented model based on an unsupervised Graph Attention AutoEncoder for community detection in attributed networks.
The proposed model adeptly learns representations from both the network's topology and attribute information, simultaneously addressing dual objectives: reconstruction and community discovery.
arXiv Detail & Related papers (2023-11-21T20:45:55Z)
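The attention encoder itself is omitted here, but the dual objective is easy to sketch: a reconstruction term from an inner-product decoder plus a k-means-style penalty pulling embeddings toward cluster centroids. All shapes and the trade-off weight are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 20, 4, 3
A = (rng.random((n, n)) < 0.2).astype(float)   # adjacency to reconstruct
Z = rng.standard_normal((n, d))                # embeddings from the encoder
C = rng.standard_normal((k, d))                # current cluster centroids

# Reconstruction term: inner-product decoder vs. the adjacency matrix.
A_hat = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))       # sigmoid(Z Z^T)
recon = np.mean((A - A_hat) ** 2)

# K-means term: squared distance of each embedding to its nearest centroid.
d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # (n, k) distances
kmeans = d2.min(axis=1).mean()

lam = 0.1                                      # trade-off weight (assumed)
loss = recon + lam * kmeans
print(f"reconstruction={recon:.3f}  k-means={kmeans:.3f}  total={loss:.3f}")
```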
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend residual neural networks (ResNets) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and the ease of incorporating them when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
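The core move is replacing the Euclidean residual update x + f(x) with exp_x(f(x)), the exponential map at x applied to a tangent vector. A minimal sketch on the unit sphere, where the exponential map has a closed form (the toy residual block f is an assumption):

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: walk from x along tangent v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return x
    return np.cos(norm_v) * x + np.sin(norm_v) * (v / norm_v)

def project_tangent(x, u):
    """Project an ambient vector u onto the tangent space at x."""
    return u - (u @ x) * x

rng = np.random.default_rng(3)
x = rng.standard_normal(3)
x /= np.linalg.norm(x)                 # a point on the sphere
W = rng.standard_normal((3, 3)) * 0.1  # a toy residual block f

# Euclidean ResNet step: x + f(x).  Riemannian step: exp_x(tangent f(x)).
v = project_tangent(x, W @ x)
x_next = sphere_exp(x, v)
print("still on the sphere:", np.isclose(np.linalg.norm(x_next), 1.0))
```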
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
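One way to watch this quantity empirically (a generic probe, not the paper's analysis; widths, depth, and initialisation below are arbitrary) is to push a batch through random ReLU layers and record the rank of the feature matrix at each depth:

```python
import numpy as np

def stable_rank(M):
    """||M||_F^2 / sigma_max^2: a smooth proxy for the rank of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return (s ** 2).sum() / s[0] ** 2

rng = np.random.default_rng(4)
n, d, depth = 200, 64, 8
H = rng.standard_normal((n, d))        # a batch of inputs

# Push the batch through random ReLU layers and watch how much of the
# feature matrix's spectrum survives at each depth.
for layer in range(1, depth + 1):
    W = rng.standard_normal((d, d)) / np.sqrt(d)
    H = np.maximum(H @ W, 0.0)         # ReLU
    print(f"layer {layer}: numerical rank = {np.linalg.matrix_rank(H)}, "
          f"stable rank = {stable_rank(H):.1f}")
```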
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of a neural network's feature space may jointly serve as discriminants of the network's performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
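Both measures can be probed with simple linear algebra. The paper's exact definitions may differ; below, quasi-orthogonality is taken as the mean absolute cosine between feature vectors and intrinsic dimension as the participation ratio of the covariance spectrum, both computed on random features for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
F = rng.standard_normal((100, 64))       # rows: feature vectors of 100 inputs

# Quasi-orthogonality: mean |cosine| between distinct feature vectors
# (near 0 means the features are close to mutually orthogonal).
U = F / np.linalg.norm(F, axis=1, keepdims=True)
cos = U @ U.T
off_diag = cos[~np.eye(len(cos), dtype=bool)]
print("mean |cosine|:", np.abs(off_diag).mean())

# Intrinsic dimension via the participation ratio of the covariance spectrum.
lam = np.linalg.eigvalsh(np.cov(F.T))
pr = lam.sum() ** 2 / (lam ** 2).sum()
print("participation-ratio intrinsic dimension:", pr)
```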
- Network representation learning: A macro and micro view [9.221196170951702]
We conduct a comprehensive review of the current literature on network representation learning.
Existing algorithms can be categorized into three groups: shallow embedding models, heterogeneous network embedding models, and graph neural network based models.
One advantage of the survey is that we systematically study the theoretical foundations underlying the different categories of algorithms.
arXiv Detail & Related papers (2021-11-21T08:58:51Z)
- Towards Understanding Theoretical Advantages of Complex-Reaction Networks [77.34726150561087]
We show that a class of functions can be approximated by a complex-reaction network using a polynomial number of parameters.
For empirical risk minimization, our theoretical result shows that the critical point set of complex-reaction networks is a proper subset of that of real-valued networks.
arXiv Detail & Related papers (2021-08-15T10:13:49Z)
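The complex-reaction networks of the paper have a specific analytic structure; purely as a generic flavour of complex-valued computation (all shapes and the activation choice are assumptions), a two-layer forward pass with complex weights looks like this:

```python
import numpy as np

rng = np.random.default_rng(6)

def crelu(z):
    """A common complex activation: ReLU on real and imaginary parts."""
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

# Two-layer complex-valued network on a complex input.
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
W1 = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / 2
W2 = (rng.standard_normal((1, 8)) + 1j * rng.standard_normal((1, 8))) / 2

h = crelu(W1 @ x)
y = W2 @ h
print("real-valued readout:", y.real[0])
```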
- Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
arXiv Detail & Related papers (2021-02-13T18:54:49Z)
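The paper learns latent motifs by network dictionary learning on sampled subgraphs; as a much-simplified stand-in for that pipeline (planted structure, noise level, rank, and threshold all invented), a low-rank nonnegative factorisation can already denoise a planted two-block adjacency matrix:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
# Planted two-block (mesoscale) structure, then corrupting noise.
n, half = 40, 20
A = np.zeros((n, n))
A[:half, :half] = A[half:, half:] = 1.0
noise = rng.random((n, n)) < 0.1
A_noisy = np.clip(A + noise, 0, 1)

# Rank-2 nonnegative factorisation as a crude low-rank "denoiser".
model = NMF(n_components=2, init="nndsvd", max_iter=500)
W = model.fit_transform(A_noisy)
A_hat = W @ model.components_
print("denoised error:", np.abs(A - (A_hat > 0.5)).mean())
print("noisy error:   ", np.abs(A - A_noisy).mean())
```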
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
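A minimal sketch of the differentiable-connectivity idea (a toy DAG of four computational nodes; the gating and aggregation below are assumptions, not the paper's exact scheme): give every potential connection a learnable weight and let gradients decide which edges matter:

```python
import torch

n_nodes, d = 4, 8                      # computational nodes, feature width
# One learnable scalar per directed edge in the complete graph of nodes.
edge_logits = torch.nn.Parameter(torch.zeros(n_nodes, n_nodes))
transforms = torch.nn.ModuleList(
    [torch.nn.Linear(d, d) for _ in range(n_nodes)])

def forward(x):
    gates = torch.sigmoid(edge_logits)         # edge magnitudes in (0, 1)
    feats = [x]
    for i in range(1, n_nodes):
        # Node i aggregates every earlier node, weighted by its gate.
        agg = sum(gates[j, i] * feats[j] for j in range(i))
        feats.append(torch.relu(transforms[i](agg)))
    return feats[-1]

y = forward(torch.randn(2, d))
print(y.shape)                                  # torch.Size([2, 8])
# The gates receive gradients, so connectivity is learned end to end.
y.sum().backward()
print(edge_logits.grad is not None)             # True
```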
- The impossibility of low rank representations for triangle-rich complex networks [9.550745725703292]
We argue that such graph embeddings do not capture salient properties of complex networks.
We mathematically prove that any embedding that can successfully reproduce both of these properties (low vertex degrees together with an abundance of triangles) must have rank nearly linear in the number of vertices.
Among other implications, this establishes that popular embedding techniques such as Singular Value Decomposition and node2vec fail to capture significant structural aspects of real-world complex networks.
arXiv Detail & Related papers (2020-03-27T20:57:56Z)
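A tiny numerical echo of that theorem (not its proof; the graph and rank are invented): take a triangle-rich, low-degree graph, replace its adjacency matrix with the best low-rank approximation, and watch most of the triangle mass vanish:

```python
import numpy as np

# Triangle-rich, low-degree graph: m disjoint triangles.
m = 10
A = np.zeros((3 * m, 3 * m))
tri = np.ones((3, 3)) - np.eye(3)
for b in range(m):
    A[3*b:3*b+3, 3*b:3*b+3] = tri

def triangle_mass(M):
    """trace(M^3)/6: the exact triangle count when M is a 0/1 adjacency."""
    return np.trace(M @ M @ M) / 6

# Best rank-d approximation of A (via eigendecomposition; A is symmetric).
d = 3
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[::-1][:d]
A_d = (vecs[:, top] * vals[top]) @ vecs[:, top].T

print("triangles in A:         ", triangle_mass(A))    # 10.0
print("triangle mass at rank 3:", triangle_mass(A_d))  # ~4.0
```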
This list is automatically generated from the titles and abstracts of the papers on this site.