Random Edge Coding: One-Shot Bits-Back Coding of Large Labeled Graphs
- URL: http://arxiv.org/abs/2305.09705v1
- Date: Tue, 16 May 2023 12:23:18 GMT
- Title: Random Edge Coding: One-Shot Bits-Back Coding of Large Labeled Graphs
- Authors: Daniel Severo, James Townsend, Ashish Khisti, Alireza Makhzani
- Abstract summary: We present a one-shot method for compressing large labeled graphs called Random Edge Coding.
Experiments indicate Random Edge Coding can achieve competitive compression performance on real-world network datasets.
- Score: 24.761152163389735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a one-shot method for compressing large labeled graphs called
Random Edge Coding. When paired with a parameter-free model based on Pólya's
Urn, the worst-case computational and memory complexities scale quasi-linearly
and linearly with the number of observed edges, making it efficient on sparse
graphs, and the method requires only integer arithmetic. Key to our method is bits-back
coding, which is used to sample edges and vertices without replacement from the
edge-list in a way that preserves the structure of the graph. Optimality is
proven under a class of random graph models that are invariant to permutations
of the edges and of vertices within an edge. Experiments indicate Random Edge
Coding can achieve competitive compression performance on real-world network
datasets and scales to graphs with millions of nodes and edges.
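The abstract names two ingredients, a parameter-free Pólya-urn edge model and bits-back coding of the edge list's ordering, without spelling them out. A minimal sketch of how those pieces could fit together is given below; the simplified urn probabilities, the function names, and the accounting are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def polya_urn_bits(edge_list, num_vertices):
    """Illustrative code length (in bits) of an edge list under a toy Polya-urn
    vertex model: each endpoint is drawn with probability proportional to
    (current degree + 1). A stand-in for the parameter-free model named in the
    abstract, not the paper's exact model."""
    degree = Counter()
    total_bits = 0.0
    mass = num_vertices            # sum over vertices of (degree + 1)
    for u, v in edge_list:
        for w in (u, v):
            p = (degree[w] + 1) / mass
            total_bits += -math.log2(p)
            degree[w] += 1
            mass += 1
    return total_bits

def bits_back_saving(num_edges):
    """Bits recovered by not encoding the order of the edges or the order of the
    two vertices within each edge: log2(m!) + m bits for a simple undirected
    graph with m distinct edges."""
    return math.lgamma(num_edges + 1) / math.log(2) + num_edges

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
sequential = polya_urn_bits(edges, num_vertices=4)
net = sequential - bits_back_saving(len(edges))
print(f"sequential cost: {sequential:.2f} bits, after bits-back: {net:.2f} bits")
```

The subtracted term reflects the permutation invariances stated in the abstract: a labeled graph does not distinguish the order of its m edges or of the two endpoints within each edge, which is where the log2(m!) + m saving comes from.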
Related papers
- RandAlign: A Parameter-Free Method for Regularizing Graph Convolutional Networks [13.83680253264399]
We propose RandAlign, a regularization method for graph convolutional networks.
The idea of RandAlign is to randomly align the learned embedding for each node with that of the previous layer.
We experimentally evaluate RandAlign on different graph domain tasks on seven benchmark datasets.
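Only the idea is stated above; as a rough illustration, the random alignment could be read as a per-node random interpolation between the current and previous layer embeddings. The mixing rule below is an assumption for illustration, not the paper's exact operation.

```python
import numpy as np

def rand_align(h_curr, h_prev, rng):
    """Toy random alignment: mix each node's current-layer embedding with its
    previous-layer embedding using a random per-node coefficient. The exact
    alignment rule is assumed for illustration only."""
    alpha = rng.uniform(0.0, 1.0, size=(h_curr.shape[0], 1))  # per-node mix
    return alpha * h_curr + (1.0 - alpha) * h_prev

rng = np.random.default_rng(0)
h_prev = rng.normal(size=(5, 8))   # embeddings from layer l-1 (5 nodes, dim 8)
h_curr = rng.normal(size=(5, 8))   # embeddings from layer l
print(rand_align(h_curr, h_prev, rng).shape)  # (5, 8)
```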
arXiv Detail & Related papers (2024-04-15T13:28:13Z) - Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z) - Edge-Parallel Graph Encoder Embedding [0.0]
One-Hot Graph Encoder Embedding (GEE) uses a single, linear pass over edges and produces an embedding that converges asymptotically to the spectral embedding.
We present a parallel program in the Ligra graph engine that maps functions over the edges of the graph and uses lock-free atomic instructions to prevent data races.
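A serial Python sketch of the embedding pass described above (one linear sweep over the edge list, accumulating normalized neighbour-label counts per node) is shown below; the normalization and update rule are assumptions, and the lock-free parallel Ligra version is not reproduced.

```python
import numpy as np

def graph_encoder_embedding(edges, labels, num_classes):
    """Serial sketch of a one-hot graph encoder embedding: one pass over the
    edge list, adding, for each node, a normalized count of its neighbours'
    labels. Details are illustrative, not the paper's implementation."""
    labels = np.asarray(labels)
    n = labels.shape[0]
    class_sizes = np.bincount(labels, minlength=num_classes).astype(float)
    Z = np.zeros((n, num_classes))
    for u, v in edges:                     # single linear pass over edges
        Z[u, labels[v]] += 1.0 / class_sizes[labels[v]]
        Z[v, labels[u]] += 1.0 / class_sizes[labels[u]]
    return Z

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(graph_encoder_embedding(edges, labels=[0, 1, 0, 1], num_classes=2))
```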
arXiv Detail & Related papers (2024-02-06T21:04:57Z) - Bring Your Own View: Graph Neural Networks for Link Prediction with
Personalized Subgraph Selection [57.34881616131377]
We introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges.
PS2 is instantiated as a bi-level optimization problem that can be solved efficiently in a differentiable manner.
We suggest a brand-new angle on GNNLP training: first identify the optimal subgraphs for edges, and then train the inference model using the sampled subgraphs.
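As a hedged illustration of the "subgraphs for edges" step, the helper below extracts a fixed k-hop enclosing subgraph around a candidate edge; PS2's learned, per-edge selection is not reproduced, and both the networkx dependency and the fixed-hop rule are assumptions made only for the sketch.

```python
import networkx as nx

def enclosing_subgraph(G, u, v, hops=2):
    """Placeholder for the subgraph-selection step in link-prediction GNNs:
    take the k-hop enclosing subgraph around a candidate edge (u, v). The
    personalized, bi-level selection of PS2 is not implemented here."""
    nodes = set(nx.single_source_shortest_path_length(G, u, cutoff=hops))
    nodes |= set(nx.single_source_shortest_path_length(G, v, cutoff=hops))
    return G.subgraph(nodes).copy()

G = nx.karate_club_graph()
sub = enclosing_subgraph(G, 0, 33, hops=1)
print(sub.number_of_nodes(), sub.number_of_edges())
```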
arXiv Detail & Related papers (2022-12-23T17:30:19Z) - Kernelized multi-graph matching [0.0]
We introduce a novel kernelized multigraph matching technique that handles attributes on both the nodes and the edges of the graphs.
We propose several projectors leading to improved stability of the results.
arXiv Detail & Related papers (2022-10-11T07:22:47Z) - SoftEdge: Regularizing Graph Classification with Random Soft Edges [18.165965620873745]
Graph data augmentation plays a vital role in regularizing Graph Neural Networks (GNNs).
Simple edge and node manipulations can create graphs with identical or indistinguishable structures to message-passing GNNs but with conflicting labels.
We propose SoftEdge, which assigns random weights to a portion of the edges of a given graph to construct dynamic neighborhoods over the graph.
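A toy version of the idea, assigning random weights to a sampled subset of edges while keeping the topology fixed, might look like the following; the sampling scheme and the ratio are illustrative assumptions rather than the paper's recipe.

```python
import numpy as np

def soft_edge(edge_index, soft_ratio, rng):
    """Toy random soft edges: pick a fraction of edges and replace their
    implicit weight of 1.0 with a random weight in (0, 1), leaving the graph
    topology unchanged. Assumed details, for illustration only."""
    num_edges = edge_index.shape[0]
    weights = np.ones(num_edges)
    k = int(soft_ratio * num_edges)
    chosen = rng.choice(num_edges, size=k, replace=False)
    weights[chosen] = rng.uniform(0.0, 1.0, size=k)
    return weights

rng = np.random.default_rng(0)
edge_index = np.array([(0, 1), (1, 2), (2, 0), (2, 3)])
print(soft_edge(edge_index, soft_ratio=0.5, rng=rng))
```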
arXiv Detail & Related papers (2022-04-21T20:12:36Z) - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that a triangular convexity decomposition is guaranteed by a transform of the corresponding matrix to its upper-triangular part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z) - Sketch-Based Anomaly Detection in Streaming Graphs [89.52200264469364]
Given a stream of graph edges from a dynamic graph, how can we assign anomaly scores to edges and subgraphs in an online manner?
Our method is the first streaming approach that incorporates dense subgraph search to detect graph anomalies in constant memory and time.
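The paper's own algorithm is not described in this blurb; the snippet below only illustrates the generic sketching ingredient, a constant-memory count-min sketch over streaming edges with a simplistic score, and should not be read as the method above.

```python
import numpy as np

class EdgeCountSketch:
    """Generic count-min sketch over (u, v) edge keys, kept at constant memory.
    The score (current count relative to stream length) is a simplified
    illustration, not the paper's anomaly score."""
    def __init__(self, depth=4, width=512, seed=0):
        self.counts = np.zeros((depth, width), dtype=np.int64)
        rng = np.random.default_rng(seed)
        self.salts = rng.integers(1, 2**31 - 1, size=depth)
        self.width = width
        self.t = 0  # edges seen so far

    def update_and_score(self, u, v):
        self.t += 1
        est = np.iinfo(np.int64).max
        for row, salt in enumerate(self.salts):
            col = hash((u, v, int(salt))) % self.width
            self.counts[row, col] += 1
            est = min(est, self.counts[row, col])
        return est / self.t  # high for edges that repeat unusually often

sketch = EdgeCountSketch()
for edge in [(0, 1), (0, 1), (2, 3), (0, 1)]:
    print(edge, round(sketch.update_and_score(*edge), 3))
```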
arXiv Detail & Related papers (2021-06-08T16:10:36Z) - Online Dense Subgraph Discovery via Blurred-Graph Feedback [87.9850024070244]
We introduce a novel learning problem for dense subgraph discovery.
We first propose a polynomial-time algorithm that obtains a nearly-optimal solution with high probability.
We then design a more scalable algorithm with a theoretical guarantee.
arXiv Detail & Related papers (2020-06-24T11:37:33Z) - Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
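As a loose illustration of that selection step, the snippet below picks the unlabelled nodes that a labelled-vs-unlabelled classifier scores as most unlike the labelled set; the GCN that would produce those scores is omitted, and the acquisition rule is an assumption for the sketch.

```python
import numpy as np

def select_for_labeling(confidence_unlabelled, labelled_mask, budget):
    """Toy acquisition step: given per-node scores from a labelled-vs-unlabelled
    classifier (higher = more confidently 'unlabelled', i.e. least like the
    labelled set), pick the top-`budget` unlabelled nodes for annotation."""
    scores = np.where(labelled_mask, -np.inf, confidence_unlabelled)
    return np.argsort(scores)[::-1][:budget]

conf = np.array([0.1, 0.9, 0.4, 0.8, 0.2])
labelled = np.array([True, False, False, False, True])
print(select_for_labeling(conf, labelled, budget=2))  # [1 3]
```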
arXiv Detail & Related papers (2020-06-18T00:55:10Z)