A Prufer-Sequence Based Representation of Large Graphs for Structural Encoding of Logic Networks
- URL: http://arxiv.org/abs/2209.01596v1
- Date: Sun, 4 Sep 2022 11:24:19 GMT
- Title: A Prufer-Sequence Based Representation of Large Graphs for Structural Encoding of Logic Networks
- Authors: Manjari Pradhan and Bhargab B. Bhattacharya
- Abstract summary: In this paper, we are primarily concerned with the inference that the structure of a graph influences the properties of the real-life system it represents. A model of such structural influence would be useful for inferring properties of large and complex systems, such as VLSI circuits.
- Score: 0.30458514384586405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The pervasiveness of graphs in today's real-life systems is quite evident:
a system either exists explicitly as a graph or can readily be modelled as one.
Such a graphical structure is thus a storehouse of rich information, with
different implications depending on whether we are interested in a single node
or in the graph as a whole. In this paper, we are primarily concerned with the
latter, that is, with the inference that the structure of a graph influences
the properties of the real-life system it represents. A model of such
structural influence would be useful for inferring properties of large and
complex systems, such as VLSI circuits, from their structural features.
However, before any machine learning (ML) technique can be applied to model
such a relationship, an effective representation of the graph is imperative.
In this paper, we propose a graph representation that is lossless, linear-sized
in the number of vertices, and one-dimensional. Our representation is based on
Prufer encoding for trees. Moreover, our method rests on a novel technique,
called $\mathcal{GT}$-enhancement, whereby we first transform the graph so
that it can be represented by a single tree. The encoding also provides scope
to include additional graph properties and to improve the interpretability of
the code.
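As a concrete reference point for the encoding the abstract builds on, the sketch below implements classical Prufer encoding for a labeled tree. It illustrates only the tree-level primitive and its linear, one-dimensional output; the $\mathcal{GT}$-enhancement step that makes general graphs tree-representable is the paper's contribution and is not reproduced here.

```python
from collections import defaultdict

def prufer_encode(n, edges):
    """Return the Prufer sequence of a labeled tree on vertices 1..n.

    The output has length n - 2, so the representation is linear in
    the number of vertices and purely one-dimensional.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    sequence = []
    for _ in range(n - 2):
        # Remove the leaf with the smallest label; record its neighbour.
        leaf = min(v for v in adj if len(adj[v]) == 1)
        neighbour = next(iter(adj[leaf]))
        sequence.append(neighbour)
        adj[neighbour].discard(leaf)
        del adj[leaf]
    return sequence

# The path 1-2-3-4 encodes to [2, 3]; the star centred at 1 to [1, 1].
assert prufer_encode(4, [(1, 2), (2, 3), (3, 4)]) == [2, 3]
assert prufer_encode(4, [(1, 2), (1, 3), (1, 4)]) == [1, 1]
```

Because labeled trees and Prufer sequences are in bijection, the tree-level encoding is lossless, which is the property the proposed graph representation inherits.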
Related papers
- Explainable Graph Representation Learning via Graph Pattern Analysis [33.539251667469294]
We introduce a framework for learning and explaining graph representations through graph pattern analysis, and show how to apply it to real-world data.
arXiv Detail & Related papers (2025-12-04T07:25:01Z)
- Data-Adaptive Graph Framelets with Generalized Vanishing Moments for Graph Machine Learning [4.498623132806496]
We construct tight framelet systems on graphs with localized supports based on partition trees, and exploit their generality for heterophilous graph learning. We derive a specific system of graph framelets and propose a method to select framelets as features for neural network input.
arXiv Detail & Related papers (2023-09-07T07:49:43Z)
- SynGraphy: Succinct Summarisation of Large Networks via Small Synthetic Representative Graphs [4.550112751061436]
We describe SynGraphy, a method for visually summarising the structure of large network datasets.
It works by drawing smaller graphs generated to have similar structural properties to the input graphs.
arXiv Detail & Related papers (2023-02-15T16:00:15Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
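For intuition about what an "augmentation bank" supplies to a contrastive objective, here is a deliberately simple member of such a bank, random edge dropping. The operations proposed in the paper above are spectrally motivated; this naive sketch only shows how two augmented views feed a contrastive pair.

```python
import random

def drop_edges(edges, p=0.2, seed=0):
    """Return an augmented view of a graph: each edge survives with prob. 1 - p."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]

# Two independent augmentations of the same graph form a positive pair
# for the contrastive objective.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
view_a = drop_edges(edges, p=0.2, seed=1)
view_b = drop_edges(edges, p=0.2, seed=2)
```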
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
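A minimal, hypothetical illustration of "unroll and truncate proximal gradient iterations": the loop below runs a fixed number of ISTA-style steps with an l1 proximal operator on a toy denoising objective. In an actual GDN the data-fit term models the graph convolutional mixture and the step sizes and thresholds are learned per layer; both are simplified away here.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm; promotes sparse edge patterns."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_prox_grad(observed, num_layers=5, step=0.1, thresh=0.05):
    """Run a fixed (truncated) number of ISTA-style proximal gradient steps.

    Toy objective: min_A 0.5 * ||A - observed||^2 + lam * ||A||_1.
    In a GDN-style network, `step` and `thresh` would be learnable
    per-layer parameters; here they are fixed constants.
    """
    A = np.zeros_like(observed)
    for _ in range(num_layers):      # truncation depth = number of layers
        grad = A - observed          # gradient of the quadratic term
        A = soft_threshold(A - step * grad, thresh)
    return A

noisy = np.array([[0.00, 0.90, -0.03],
                  [0.90, 0.00,  0.10],
                  [-0.03, 0.10, 0.00]])
print(unrolled_prox_grad(noisy))     # small entries are shrunk to zero
```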
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
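To make "kernel functions that compute an inner product on graphs" concrete, the sketch below implements the simplest such kernel, a vertex-label histogram kernel. It is a generic textbook example, not the architecture of the paper above; richer kernels (Weisfeiler-Lehman, shortest-path) refine the same idea.

```python
from collections import Counter

def vertex_histogram_kernel(labels_g1, labels_g2):
    """Inner product of vertex-label histograms: the simplest graph kernel."""
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[k] * h2[k] for k in h1.keys() & h2.keys())

# Two small labelled graphs compared purely through their label counts.
print(vertex_histogram_kernel(["and", "or", "or"], ["or", "not", "or"]))  # 4
```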
- Generating a Doppelganger Graph: Resembling but Distinct [5.618335078130568]
We propose an approach to generating a doppelganger graph that resembles a given one in many graph properties.
The approach is an orchestration of graph representation learning, generative adversarial networks, and graph realization algorithms.
arXiv Detail & Related papers (2021-01-23T22:08:27Z)
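Of the three ingredients listed above, the graph realization stage is the easiest to sketch. The classical Havel-Hakimi routine below realizes a target degree sequence as a simple graph, the kind of primitive such a pipeline can end with; the paper's full orchestration with representation learning and GANs is not reproduced here.

```python
def havel_hakimi(degrees):
    """Realize a degree sequence as a simple graph's edge list (None if not graphical)."""
    remaining = dict(enumerate(degrees))
    edges = []
    while any(remaining.values()):
        v = max(remaining, key=remaining.get)   # vertex with largest residual degree
        d = remaining.pop(v)
        # Connect v to the d vertices with the next-highest residual degrees.
        targets = sorted(remaining, key=remaining.get, reverse=True)[:d]
        if len(targets) < d:
            return None                         # sequence is not graphical
        for u in targets:
            edges.append((v, u))
            remaining[u] -= 1
            if remaining[u] < 0:
                return None                     # sequence is not graphical
        remaining[v] = 0                        # v is fully wired up
    return edges

print(havel_hakimi([2, 2, 2]))  # a triangle: [(0, 1), (0, 2), (1, 2)]
```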
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Unsupervised Graph Representation by Periphery and Hierarchical Information Maximization [18.7475578342125]
The advent of graph neural networks has improved the state of the art for both node-level and whole-graph representations in a vector space.
For whole-graph representation, most existing graph neural networks are trained with a supervised graph classification loss.
In this paper, we propose an unsupervised graph neural network that generates a vector representation of an entire graph.
arXiv Detail & Related papers (2020-06-08T15:50:40Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
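As background for the two methods above, here is a minimal forward pass of the plain graph autoencoder (GAE) baseline they extend: a one-layer graph convolution encoder and an inner-product decoder. The adaptive-graph-learning and variational components of BAGE/VBAGE are not sketched.

```python
import numpy as np

def gae_forward(A, X, W):
    """Forward pass of a minimal GAE: GCN-style encoder, inner-product decoder."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)
    A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt   # symmetric normalisation
    Z = np.maximum(A_hat @ X @ W, 0.0)          # node embeddings (ReLU)
    A_rec = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))    # reconstructed edge probabilities
    return Z, A_rec

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)                                     # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 2))  # untrained weights
Z, A_rec = gae_forward(A, X, W)
```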
- Graph Signal Processing -- Part III: Machine Learning on Graphs, from Graph Topology to Applications [19.29066508374268]
Part III of this monograph starts by addressing ways to learn graph topology.
A particular emphasis is on graph topology definition based on the correlation and precision matrices of the observed data.
For learning sparse graphs, the least absolute shrinkage and selection operator (LASSO) is employed.
arXiv Detail & Related papers (2020-01-02T13:14:27Z)
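The LASSO-based estimation of sparse graph topology described above is available off the shelf; a minimal sketch using scikit-learn's GraphicalLasso follows. The final thresholding of the precision matrix into an adjacency pattern is our illustrative choice, not something the monograph prescribes.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # 200 observations over 5 graph nodes
X[:, 1] += 0.8 * X[:, 0]             # plant one strong dependency

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_         # sparse inverse covariance
# Read off graph topology: nonzero off-diagonal precision entries.
adjacency = (np.abs(precision) > 1e-3) & ~np.eye(5, dtype=bool)
print(adjacency.astype(int))
```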
- Deep Learning for Learning Graph Representations [58.649784596090385]
Mining graph data has become a popular research topic in computer science.
The huge amount of network data has posed great challenges for efficient analysis.
This motivates the advent of graph representation, which maps the graph into a low-dimensional vector space.
arXiv Detail & Related papers (2020-01-02T02:13:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.