Searching for Stage-wise Neural Graphs In the Limit
- URL: http://arxiv.org/abs/1912.12860v1
- Date: Mon, 30 Dec 2019 09:17:23 GMT
- Title: Searching for Stage-wise Neural Graphs In the Limit
- Authors: Xin Zhou, Dejing Dou, Boyang Li
- Abstract summary: A graphon is the limit of a Cauchy sequence of graphs and a scale-free probabilistic distribution.
We develop theoretically motivated techniques that search for and scale up small-capacity stage-wise graphs found on small datasets to large-capacity graphs that can handle ImageNet.
- Score: 38.929579739825066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Search space is a key consideration for neural architecture search. Recently,
Xie et al. (2019) found that randomly generated networks from the same
distribution perform similarly, which suggests we should search for random
graph distributions instead of graphs. We propose graphon as a new search
space. A graphon is the limit of a Cauchy sequence of graphs and a scale-free
probabilistic distribution, from which graphs with different numbers of nodes can
be drawn. By utilizing properties of the graphon space and the associated
cut-distance metric, we develop theoretically motivated techniques that search
for and scale up small-capacity stage-wise graphs found on small datasets to
large-capacity graphs that can handle ImageNet. The scaled stage-wise graphs
outperform DenseNet and randomly wired Watts-Strogatz networks, indicating the
benefits of graphon theory in NAS applications.
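To make the sampling primitive concrete, here is a minimal sketch of drawing graphs of different sizes from a graphon: latent positions u_i are sampled uniformly from [0, 1], and each edge (i, j) is included independently with probability W(u_i, u_j). The step-function graphon below is an illustrative stand-in, not the stage-wise graphon the paper searches for.

```python
import numpy as np

def sample_graph_from_graphon(W, n, seed=None):
    """Draw an n-node undirected graph from a graphon W: [0, 1]^2 -> [0, 1].

    Latent positions u_i ~ Uniform[0, 1]; edge (i, j) is included
    independently with probability W(u_i, u_j).
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                    # latent node positions
    P = W(u[:, None], u[None, :])              # edge-probability matrix
    A = np.triu(rng.uniform(size=(n, n)) < P, k=1).astype(int)
    return A + A.T                             # symmetrize, zero diagonal

# Toy step-function (two-block) graphon -- an illustrative stand-in.
def toy_graphon(x, y):
    return np.where((x < 0.5) == (y < 0.5), 0.8, 0.1)

small = sample_graph_from_graphon(toy_graphon, 16, seed=0)   # search scale
large = sample_graph_from_graphon(toy_graphon, 256, seed=0)  # deploy scale
print(small.sum() // 2, large.sum() // 2)                    # edge counts
```

Because both graphs come from the same W, the sketch mirrors the paper's small-to-large scaling: search at the small size, then redraw at the target size.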
Related papers
- Graphon Mixtures [6.822247359790484]
Social networks have a small number of large hubs, and a large number of small dense communities. We propose a generative model that captures both hub and dense structures. We show theoretically that we can estimate the normalized degree of the hubs, as well as estimate the graphon corresponding to sparse components of graph mixtures.
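As a rough illustration of the mixture idea (not the paper's estimator), the sketch below draws each graph from one of two invented components: a hub-like graphon and a dense two-community graphon. Both component functions and the mixture weight are assumptions made for illustration.

```python
import numpy as np

def hub_graphon(x, y):
    # Hub-like component: nodes with small latent position connect broadly.
    # Invented for illustration; not the paper's model.
    return np.minimum(1.0, 0.05 / np.maximum(np.minimum(x, y), 0.01))

def community_graphon(x, y):
    # Dense-community component: two tight blocks, sparse between them.
    return np.where((x < 0.5) == (y < 0.5), 0.9, 0.05)

def sample_from_mixture(n, hub_weight=0.3, seed=None):
    """Draw one graph from a two-component graphon mixture."""
    rng = np.random.default_rng(seed)
    W = hub_graphon if rng.uniform() < hub_weight else community_graphon
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])
    A = np.triu(rng.uniform(size=(n, n)) < P, k=1).astype(int)
    return A + A.T

graphs = [sample_from_mixture(50, seed=s) for s in range(5)]
print([g.sum() // 2 for g in graphs])    # edge counts vary by component
```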
arXiv Detail & Related papers (2025-05-20T03:19:29Z)
- Graphon Pooling for Reducing Dimensionality of Signals and Convolutional Operators on Graphs [131.53471236405628]
We present three methods that exploit the induced graphon representation of graphs and graph signals on partitions of [0, 1]^2 in the graphon space.
We prove that those low dimensional representations constitute a convergent sequence of graphs and graph signals.
We observe that graphon pooling performs significantly better than other approaches proposed in the literature when dimensionality reduction ratios between layers are large.
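A minimal sketch of partition-based pooling, assuming a regular partition of [0, 1] into m intervals: adjacency blocks and signal values are averaged within groups. The exact operators in the paper may differ.

```python
import numpy as np

def graphon_pool(A, x, m):
    """Average the induced graphon (adjacency) and node signal over a
    regular partition of [0, 1] into m intervals. A sketch of
    partition-based pooling, not the paper's exact operator.
    """
    n = A.shape[0]
    groups = (np.arange(n) * m) // n           # node i -> interval group
    S = np.eye(m)[groups]                      # (n, m) membership matrix
    sizes = S.sum(axis=0)
    A_pooled = (S.T @ A @ S) / np.outer(sizes, sizes)   # block averages
    x_pooled = (S.T @ x) / sizes                        # signal averages
    return A_pooled, x_pooled

rng = np.random.default_rng(0)
A = np.triu(rng.integers(0, 2, size=(12, 12)), 1)
A = (A + A.T).astype(float)
A_p, x_p = graphon_pool(A, rng.normal(size=12), m=3)
print(A_p.shape, x_p.shape)    # (3, 3) (3,)
```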
arXiv Detail & Related papers (2022-12-15T22:11:34Z)
- Joint Network Topology Inference via a Shared Graphon Model [24.077455621015552]
We consider the problem of estimating the topology of multiple networks from nodal observations.
We adopt a graphon as our random graph model, which is a nonparametric model from which graphs of potentially different sizes can be drawn.
arXiv Detail & Related papers (2022-09-17T02:38:58Z)
- Graphon-aided Joint Estimation of Multiple Graphs [24.077455621015552]
We consider the problem of estimating the topology of multiple networks from nodal observations.
We adopt a graphon as our random graph model, which is a nonparametric model from which graphs of potentially different sizes can be drawn.
arXiv Detail & Related papers (2022-02-11T15:20:44Z)
- Neighborhood Random Walk Graph Sampling for Regularized Bayesian Graph Convolutional Neural Networks [0.6236890292833384]
In this paper, we propose a novel algorithm called Bayesian Graph Convolutional Network using Neighborhood Random Walk Sampling (BGCN-NRWS).
BGCN-NRWS uses a Markov Chain Monte Carlo (MCMC) graph sampling algorithm that exploits the graph structure, reduces overfitting through a variational inference layer, and yields consistently competitive results on semi-supervised node classification compared to the state of the art.
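As a generic sketch of walk-based graph sampling (the summary above does not spell out the BGCN-NRWS sampler), the following collects a connected node sample by a random walk with restarts over an adjacency list; the restart probability is an assumed parameter.

```python
import numpy as np

def random_walk_sample(adj, start, num_nodes, restart=0.15, seed=None):
    """Collect a connected node sample via a random walk with restarts --
    a generic sketch of walk-based sampling, not the exact BGCN-NRWS
    procedure.
    """
    rng = np.random.default_rng(seed)
    current, visited = start, {start}
    while len(visited) < num_nodes:
        if rng.uniform() < restart or not adj[current]:
            current = start                            # jump back to seed
        else:
            current = adj[current][rng.integers(len(adj[current]))]
        visited.add(current)
    return sorted(visited)

# Toy graph: a 6-cycle as an adjacency list.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(random_walk_sample(adj, start=0, num_nodes=4, seed=1))
```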
arXiv Detail & Related papers (2021-12-14T20:58:27Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
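A schematic of the grow-during-training recipe, with a toy two-tap linear graph filter standing in for a full GNN; the graphon, degree normalization, schedule, and learning rate are all illustrative assumptions.

```python
import numpy as np

def sample_graph(W, n, rng):
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])
    A = np.triu(rng.uniform(size=(n, n)) < P, k=1).astype(float)
    A = A + A.T
    return A / max(A.sum(axis=1).mean(), 1.0)  # degree-normalize for transfer

W = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))  # illustrative graphon
rng = np.random.default_rng(0)
h = np.zeros(2)                    # learnable filter taps: y = h0*x + h1*A@x
true_h = np.array([0.5, 2.0])      # synthetic "teacher" filter

for n in [32, 64, 128, 256]:       # successively grow the sampled graphs
    for _ in range(200):
        A = sample_graph(W, n, rng)
        x = rng.normal(size=n)
        feats = np.stack([x, A @ x])              # (2, n) filter features
        resid = h @ feats - true_h @ feats        # prediction error
        h -= 0.1 * 2.0 * feats @ resid / n        # gradient step on MSE
    print(n, h)                    # taps approach true_h as graphs grow
```

Because each stage samples from the same graphon, the filter learned on small graphs transfers as the graph size grows, which is the intuition behind the training schedule.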
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
A graphon is a nonparametric model that generates graphs of arbitrary size and can easily be induced from graphs.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
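A linear stand-in for the encoder/decoder pair: encode a graph as the top-k spectral factors of its induced graphon (the adjacency viewed as a step function) and decode by linear factorization, clipping to valid edge probabilities. The paper's encoder is a learned model; this sketch only mirrors the linear factorization decoder.

```python
import numpy as np

def encode(A, k):
    """Encode a graph as its top-k spectral latent factors, treating the
    adjacency as a step-function (induced) graphon.
    """
    n = A.shape[0]
    evals, evecs = np.linalg.eigh(A / n)       # induced-graphon operator
    top = np.argsort(-np.abs(evals))[:k]
    return evals[top], evecs[:, top]

def decode(evals, evecs):
    """Linear graphon factorization decoder: rebuild the induced graphon
    from the latent factors and clip to valid edge probabilities.
    """
    n = evecs.shape[0]
    W = n * (evecs * evals) @ evecs.T          # rank-k reconstruction
    return np.clip(W, 0.0, 1.0)

rng = np.random.default_rng(0)
A = np.triu(rng.integers(0, 2, size=(20, 20)), 1).astype(float)
A = A + A.T
lam, phi = encode(A, k=4)
W_hat = decode(lam, phi)
print(np.abs(W_hat - A).mean())    # reconstruction error of induced graphon
```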
arXiv Detail & Related papers (2021-05-29T08:11:40Z)
- Graph and graphon neural network stability [122.06927400759021]
Graph neural networks (GNNs) are learning architectures that rely on knowledge of the graph structure to generate meaningful representations of network data.
We analyze GNN stability using kernel objects called graphons.
arXiv Detail & Related papers (2020-10-23T16:55:56Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy achieves state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version [27.544219236164764]
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
arXiv Detail & Related papers (2020-02-13T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.