HyperS2V: A Framework for Structural Representation of Nodes in Hyper Networks
- URL: http://arxiv.org/abs/2311.04149v1
- Date: Tue, 7 Nov 2023 17:26:31 GMT
- Title: HyperS2V: A Framework for Structural Representation of Nodes in Hyper Networks
- Authors: Shu Liu, Cameron Lai, Fujio Toriumi
- Abstract summary: Hyper networks possess the ability to depict more complex relationships among nodes and store extensive information.
This research introduces HyperS2V, a node embedding approach that centers on the structural similarity within hyper networks.
- Score: 8.391883728680439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In contrast to regular (simple) networks, hyper networks possess the ability
to depict more complex relationships among nodes and store extensive
information. Such networks are commonly found in real-world applications, such
as in social interactions. Learning embedded representations for nodes involves
a process that translates network structures into more simplified spaces,
thereby enabling the application of machine learning approaches designed for
vector data to be extended to network data. Nevertheless, there remains a need
to delve into methods for learning embedded representations that prioritize
structural aspects. This research introduces HyperS2V, a node embedding
approach that centers on the structural similarity within hyper networks.
Initially, we establish the concept of hyper-degrees to capture the structural
properties of nodes within hyper networks. Subsequently, a novel function is
formulated to measure the structural similarity between different hyper-degree
values. Lastly, we generate structural embeddings utilizing a multi-scale
random walk framework. Moreover, a series of experiments, both intrinsic and
extrinsic, are performed on both toy and real networks. The results underscore
the superior performance of HyperS2V in terms of both interpretability and
applicability to downstream tasks.
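The abstract outlines three steps: define hyper-degrees, measure structural similarity between hyper-degree values, and embed via multi-scale random walks. As a minimal illustrative sketch of the first two steps only, one plausible reading is that a node's hyper-degree records the sizes of its incident hyperedges; the function names, the multiset-of-sizes definition, and the inverse-distance similarity below are assumptions for illustration, not the paper's actual formulas.

```python
def hyper_degree(node, hyperedges):
    """Hypothetical hyper-degree: the sorted multiset of sizes of the
    hyperedges containing the node (an assumed definition; the paper's
    exact construction is not given in the abstract)."""
    return sorted(len(edge) for edge in hyperedges if node in edge)

def similarity(d1, d2):
    """Toy structural similarity between two hyper-degree values:
    inverse of an L1 distance between the zero-padded sorted size
    sequences (illustrative only)."""
    a, b = list(d1), list(d2)
    n = max(len(a), len(b))
    a += [0] * (n - len(a))  # pad the shorter sequence with zeros
    b += [0] * (n - len(b))
    dist = sum(abs(x - y) for x, y in zip(a, b))
    return 1.0 / (1.0 + dist)

# A toy hyper network: hyperedges may connect more than two nodes.
hyperedges = [frozenset({1, 2, 3}), frozenset({2, 3, 4, 5}), frozenset({5, 6})]

d1 = hyper_degree(1, hyperedges)  # [3]: one incident edge of size 3
d2 = hyper_degree(2, hyperedges)  # [3, 4]: two incident edges
d6 = hyper_degree(6, hyperedges)  # [2]: one incident edge of size 2

print(similarity(d1, d6))  # 0.5: both nodes sit in a single hyperedge
print(similarity(d1, d2))  # 0.2: differing edge counts lower similarity
```

In the full method, such pairwise similarities would feed a multi-scale random-walk framework (in the spirit of struc2vec) to produce the final embeddings; that step is omitted here.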
Related papers
- (PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork [60.889175951038496]
Large-scale neural networks have demonstrated remarkable performance in different domains like vision and language processing.
One of the key questions of structural pruning is how to estimate the channel significance.
We propose a novel algorithmic framework, namely PASS.
It is a tailored hyper-network to take both visual prompts and network weight statistics as input, and output layer-wise channel sparsity in a recurrent manner.
arXiv Detail & Related papers (2024-07-24T16:47:45Z)
- Bayesian Detection of Mesoscale Structures in Pathway Data on Graphs [0.0]
Mesoscale structures are an integral part of the abstraction and analysis of complex systems.
They can represent communities in social or citation networks, roles in corporate interactions, or core-periphery structures in transportation networks.
We derive a Bayesian approach that simultaneously models the optimal partitioning of nodes in groups and the optimal higher-order network dynamics.
arXiv Detail & Related papers (2023-01-16T12:45:33Z)
- Parallel Machine Learning for Forecasting the Dynamics of Complex Networks [0.0]
We present a machine learning scheme for forecasting the dynamics of large complex networks.
We use a parallel architecture that mimics the topology of the network of interest.
arXiv Detail & Related papers (2021-08-27T06:06:41Z)
- Network Embedding via Deep Prediction Model [25.727377978617465]
This paper proposes a network embedding framework to capture the transfer behaviors on structured networks via deep prediction models.
A network structure embedding layer is added into conventional deep prediction models, including Long Short-Term Memory Network and Recurrent Neural Network.
Experimental studies are conducted on various datasets including social networks, citation networks, biomedical network, collaboration network and language network.
arXiv Detail & Related papers (2021-04-27T16:56:00Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- DINE: A Framework for Deep Incomplete Network Embedding [33.97952453310253]
We propose a Deep Incomplete Network Embedding method, namely DINE.
We first complete the missing part including both nodes and edges in a partially observable network by using the expectation-maximization framework.
We evaluate DINE over three networks on multi-label classification and link prediction tasks.
arXiv Detail & Related papers (2020-08-09T04:59:35Z)
- Recursive Multi-model Complementary Deep Fusion for Robust Salient Object Detection via Parallel Sub Networks [62.26677215668959]
Fully convolutional networks have shown outstanding performance in the salient object detection (SOD) field.
This paper proposes a "wider" network architecture which consists of parallel sub networks with totally different network architectures.
Experiments on several famous benchmarks clearly demonstrate the superior performance, good generalization, and powerful learning ability of the proposed wider framework.
arXiv Detail & Related papers (2020-08-07T10:39:11Z) - The impossibility of low rank representations for triangle-rich complex
networks [9.550745725703292]
We argue that such graph embeddings do not capture salient properties of complex networks.
We mathematically prove that any embedding that successfully reproduces these two properties must have rank nearly linear in the number of vertices.
Among other implications, this establishes that popular embedding techniques such as Singular Value Decomposition and node2vec fail to capture significant structural aspects of real-world complex networks.
arXiv Detail & Related papers (2020-03-27T20:57:56Z)
- On Infinite-Width Hypernetworks [101.03630454105621]
We show that hypernetworks are not guaranteed to converge to a global minimum under gradient descent.
We identify the functional priors of these architectures by deriving their corresponding GP and NTK kernels.
As part of this study, we make a mathematical contribution by deriving tight bounds on high order Taylor terms of standard fully connected ReLU networks.
arXiv Detail & Related papers (2020-03-27T00:50:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.