Partition and Code: learning how to compress graphs
- URL: http://arxiv.org/abs/2107.01952v1
- Date: Mon, 5 Jul 2021 11:41:16 GMT
- Title: Partition and Code: learning how to compress graphs
- Authors: Giorgos Bouritsas, Andreas Loukas, Nikolaos Karalias, Michael M.
Bronstein
- Abstract summary: The "Partition and Code" framework entails three steps: first, a partitioning algorithm decomposes the graph into elementary structures; then these are mapped to the elements of a small dictionary on which we learn a probability distribution; and finally, an entropy encoder translates the representation into bits.
Our algorithms are quantitatively evaluated on diverse real-world networks, obtaining significant performance improvements over different families of non-parametric and parametric graph compressors.
- Score: 50.29024357495154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Can we use machine learning to compress graph data? The absence of ordering
in graphs poses a significant challenge to conventional compression algorithms,
limiting their attainable gains as well as their ability to discover relevant
patterns. On the other hand, most graph compression approaches rely on
domain-dependent handcrafted representations and cannot adapt to different
underlying graph distributions. This work aims to establish the necessary
principles a lossless graph compression method should follow to approach the
entropy storage lower bound. Instead of making rigid assumptions about the
graph distribution, we formulate the compressor as a probabilistic model that
can be learned from data and generalise to unseen instances. Our "Partition and
Code" framework entails three steps: first, a partitioning algorithm decomposes
the graph into elementary structures, then these are mapped to the elements of
a small dictionary on which we learn a probability distribution, and finally,
an entropy encoder translates the representation into bits. All three steps are
parametric and can be trained with gradient descent. We theoretically compare
the compression quality of several graph encodings and prove, under mild
conditions, a total ordering of their expected description lengths. Moreover,
we show that, under the same conditions, PnC achieves compression gains w.r.t.
the baselines that grow either linearly or quadratically with the number of
vertices. Our algorithms are quantitatively evaluated on diverse real-world
networks obtaining significant performance improvements with respect to
different families of non-parametric and parametric graph compressors.
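To make the pipeline concrete, below is a minimal, non-learned sketch of the Partition and Code idea. Everything specific here is a stand-in of mine: the fixed-size block partitioner, the sorted-vertex canonical form used as a dictionary key, and the empirical-frequency "distribution". In the paper all three steps are parametric and trained with gradient descent, and the cut edges between blocks (which PnC also has to encode) are ignored here.

```python
import math
from collections import Counter

# Toy graph: two triangles and a path, as an undirected edge set.
edges = {(0, 1), (1, 2), (0, 2),   # triangle
         (3, 4), (4, 5), (3, 5),   # triangle
         (6, 7), (7, 8)}           # path
n = 9

def partition(n, block=3):
    # Stand-in partitioner: fixed-size vertex blocks.
    # PnC instead *learns* the decomposition.
    return [list(range(i, min(i + block, n))) for i in range(0, n, block)]

def dictionary_key(block_vertices, edges):
    # Map a substructure to a dictionary element: its edge pattern
    # under the relabelling induced by sorting the vertices
    # (a crude canonical form; PnC uses a learned dictionary).
    idx = {v: i for i, v in enumerate(sorted(block_vertices))}
    pattern = frozenset((idx[u], idx[v]) for (u, v) in edges
                        if u in idx and v in idx)
    return (len(block_vertices), pattern)

blocks = partition(n)
atoms = [dictionary_key(b, edges) for b in blocks]

# Stand-in "learned" distribution over dictionary elements:
# empirical frequencies (the paper fits a parametric model).
counts = Counter(atoms)
probs = {a: c / len(atoms) for a, c in counts.items()}

# Ideal entropy coder: each element costs -log2 p(a) bits, so the
# total approaches the entropy lower bound the abstract refers to.
bits = sum(-math.log2(probs[a]) for a in atoms)
print(f"{len(atoms)} substructures -> ~{bits:.2f} bits (cut edges ignored)")
```

Running it prints `3 substructures -> ~2.75 bits (cut edges ignored)`: the repeated triangle receives a high dictionary probability and hence a short code, which is the mechanism behind the compression gains described above.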
Related papers
- Differentiable Proximal Graph Matching [40.41380102260085]
We introduce an algorithm for graph matching based on the proximal operator, referred to as differentiable proximal graph matching (DPGM).
The whole algorithm can be viewed as a differentiable map from the graph affinity matrix to the predicted node correspondence.
Numerical experiments show that DPGM outperforms existing graph matching algorithms on diverse datasets.
arXiv Detail & Related papers (2024-05-26T08:17:13Z) - Compression of Structured Data with Autoencoders: Provable Benefit of
Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - Graph Pooling with Maximum-Weight $k$-Independent Sets [12.251091325930837]
We introduce a graph coarsening mechanism based on the graph-theoretic concept of maximum-weight $k$-independent sets.
We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs.
arXiv Detail & Related papers (2022-08-06T14:12:47Z) - Bringing Your Own View: Graph Contrastive Learning without Prefabricated
Data Augmentations [94.41860307845812]
Self-supervision has recently been surging at a new frontier: graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z) - Intrusion-Free Graph Mixup [33.07540841212878]
We present a simple yet effective regularization technique to improve the generalization of Graph Neural Networks (GNNs).
We leverage recent advances in the Mixup regularizer for vision and text, where random sample pairs and their labels are interpolated to create synthetic samples for training.
Our method can effectively regularize the graph classification learning, resulting in superior predictive accuracy over popular graph augmentation baselines.
arXiv Detail & Related papers (2021-10-18T14:16:00Z) - Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
A graphon is a nonparametric model that generates graphs of arbitrary size and can be induced from graphs easily (a toy sampling sketch appears after this list).
We propose a novel framework called "graphon autoencoder" to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
arXiv Detail & Related papers (2021-05-29T08:11:40Z) - Graph Coarsening with Neural Networks [8.407217618651536]
We propose a framework for measuring the quality of a coarsening algorithm and show that, depending on the goal, we need to carefully choose the Laplace operator on the coarse graph.
Motivated by the observation that the current choice of edge weight for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way.
arXiv Detail & Related papers (2021-02-02T06:50:07Z) - Learning Graphons via Structured Gromov-Wasserstein Barycenters [143.42601038462965]
We propose a novel and principled method to learn a nonparametric graph model called graphon.
The proposed approach overcomes drawbacks of prior state-of-the-art methods, and outperforms them on both synthetic and real-world data.
arXiv Detail & Related papers (2020-12-10T13:04:29Z)
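As background for the two graphon entries above, here is a short sketch of the standard graphon sampling procedure. The block-model-style W below is a hypothetical example of mine, not a model learned by either paper.

```python
import random

def sample_from_graphon(W, n, seed=0):
    """Sample an n-vertex graph from a graphon W: [0,1]^2 -> [0,1].
    Each vertex i draws a latent u_i ~ Uniform[0,1]; edge (i, j) is
    included independently with probability W(u_i, u_j)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < W(u[i], u[j])}

# Hypothetical two-block graphon: dense within blocks, sparse across.
W = lambda x, y: 0.8 if (x < 0.5) == (y < 0.5) else 0.1
graph = sample_from_graphon(W, n=20)
print(f"sampled {len(graph)} edges on 20 vertices")
```

Because W is defined on [0,1]^2 rather than on a fixed vertex set, the same model yields graphs of any size, which is the "arbitrary size" property the summary mentions.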
This list is automatically generated from the titles and abstracts of the papers on this site.