Efficient multi-relational network representation using primes
- URL: http://arxiv.org/abs/2209.06575v2
- Date: Wed, 17 May 2023 13:17:04 GMT
- Title: Efficient multi-relational network representation using primes
- Authors: Konstantinos Bougiatiotis, Georgios Paliouras
- Abstract summary: Multi-relational networks capture complex data relationships and have a variety of applications.
This paper introduces the concept of Prime Adjacency Matrices (PAMs), which utilize prime numbers to represent the relations of the network.
We illustrate the benefits of using the proposed approach through various simple and complex network analysis tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a novel representation of complex multi-relational
networks, which is compact and allows very efficient network analysis.
Multi-relational networks capture complex data relationships and have a variety
of applications, ranging from the biomedical to the financial and social domains. As they
are applied to ever-larger quantities of data, it is crucial to find efficient
ways to represent and analyse such networks. This paper introduces the concept
of Prime Adjacency Matrices (PAMs), which utilize prime numbers to represent
the relations of the network. Due to the fundamental theorem of arithmetic,
this allows for a lossless, compact representation of a complete
multi-relational graph, using a single adjacency matrix. Moreover, this
representation enables the fast computation of multi-hop adjacency matrices,
which can be useful for a variety of downstream tasks. We illustrate the
benefits of using the proposed approach through various simple and complex
network analysis tasks.
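The core idea can be sketched in a few lines. This is an illustrative toy example, not the authors' reference implementation: each relation type is assigned a distinct prime, and an edge carrying relation r multiplies that prime into the corresponding entry of a single matrix. Because factorization into primes is unique (the fundamental theorem of arithmetic), the relation set of every edge can be recovered losslessly from that one matrix.

```python
import numpy as np

# Toy multi-relational graph with 3 nodes and 2 relation types.
# Each relation r is mapped to a distinct prime; an edge (head, tail, r)
# multiplies that prime into entry (head, tail) of a single
# Prime Adjacency Matrix (PAM).
rel_prime = {0: 2, 1: 3}                   # relation -> prime
edges = [(0, 1, 0), (0, 1, 1), (1, 2, 1)]  # (head, tail, relation)

n = 3
pam = np.zeros((n, n), dtype=np.int64)
for i, j, r in edges:
    pam[i, j] = (pam[i, j] or 1) * rel_prime[r]

def relations_of(value, rel_prime):
    """Recover the relation set of an edge by trial division.

    Assumes each relation appears at most once per edge, so divisibility
    by a relation's prime is enough; unique factorization makes the
    single-matrix representation lossless."""
    return sorted(r for r, p in rel_prime.items() if value % p == 0)

print(pam[0, 1])                           # 6 = 2 * 3
print(relations_of(pam[0, 1], rel_prime))  # [0, 1]
```

Multi-hop structure follows from ordinary matrix powers: each term in an entry of `pam @ pam` is a product of the primes along a 2-hop path, so the relation chain of each path is encoded in its prime factors.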
Related papers
- From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis [5.008498268411793]
Multi-relational networks capture intricate relationships in data and have diverse applications across fields such as biomedical, financial, and social sciences.
This work extends the Prime Adjacency Matrices framework, which employs prime numbers to represent distinct relations within a network uniquely.
arXiv Detail & Related papers (2024-11-17T18:43:01Z) - Relational Composition in Neural Networks: A Survey and Call to Action [54.47858085003077]
Many neural networks appear to represent data as linear combinations of "feature vectors".
We argue that this success is incomplete without an understanding of relational composition.
arXiv Detail & Related papers (2024-07-19T20:50:57Z) - Image segmentation with traveling waves in an exactly solvable recurrent
neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z) - Modular Blended Attention Network for Video Question Answering [1.131316248570352]
We present an approach that facilitates question answering with a reusable and composable neural unit.
We have conducted experiments on three commonly used datasets.
arXiv Detail & Related papers (2023-11-02T14:22:17Z) - Exploring ordered patterns in the adjacency matrix for improving machine
learning on complex networks [0.0]
The proposed methodology employs a sorting algorithm to rearrange the elements of the adjacency matrix of a complex graph in a specific order.
The resulting sorted adjacency matrix is then used as input for feature extraction and machine learning algorithms to classify the networks.
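The pipeline described above can be sketched as follows. The exact ordering criterion is not specified in the summary, so this hypothetical sketch uses node degree as one plausible instantiation: nodes are reordered so that structurally similar networks yield similar matrices, and the reordered matrix is flattened into a feature vector for a classifier.

```python
import numpy as np

def sorted_adjacency_features(adj):
    """Reorder nodes (here: by decreasing degree, an assumed criterion),
    then flatten the permuted adjacency matrix into a feature vector."""
    degrees = adj.sum(axis=1)
    order = np.argsort(-degrees)           # highest-degree nodes first
    reordered = adj[np.ix_(order, order)]  # permute rows and columns together
    return reordered.flatten()

# Path graph on 3 nodes: the middle node has the highest degree.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
features = sorted_adjacency_features(adj)
print(features)  # [0 1 1 1 0 0 1 0 0]
```

Permuting rows and columns with the same ordering preserves the graph structure, so the features depend only on the network's topology, not on the arbitrary node labelling of the input.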
arXiv Detail & Related papers (2023-01-20T00:01:23Z) - The Multiple Subnetwork Hypothesis: Enabling Multidomain Learning by
Isolating Task-Specific Subnetworks in Feedforward Neural Networks [0.0]
We identify a methodology and network representational structure which allows a pruned network to employ previously unused weights to learn subsequent tasks.
We show that networks trained using our approaches are able to learn multiple tasks, which may be related or unrelated, in parallel or in sequence without sacrificing performance on any task or exhibiting catastrophic forgetting.
arXiv Detail & Related papers (2022-07-18T15:07:13Z) - Learning to Coordinate via Multiple Graph Neural Networks [16.226702761758595]
MGAN is a new algorithm that combines graph convolutional networks and value-decomposition methods.
We demonstrate the representation-learning ability of the graph network by visualizing its output.
arXiv Detail & Related papers (2021-04-08T04:33:00Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Graph-Based Neural Network Models with Multiple Self-Supervised
Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - Recursive Multi-model Complementary Deep Fusion forRobust Salient Object
Detection via Parallel Sub Networks [62.26677215668959]
Fully convolutional networks have shown outstanding performance in the salient object detection (SOD) field.
This paper proposes a "wider" network architecture, which consists of parallel sub-networks with entirely different architectures.
Experiments on several famous benchmarks clearly demonstrate the superior performance, good generalization, and powerful learning ability of the proposed wider framework.
arXiv Detail & Related papers (2020-08-07T10:39:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.