On Infinite-Width Hypernetworks
- URL: http://arxiv.org/abs/2003.12193v7
- Date: Mon, 22 Feb 2021 23:10:56 GMT
- Title: On Infinite-Width Hypernetworks
- Authors: Etai Littwin, Tomer Galanti, Lior Wolf, Greg Yang
- Abstract summary: We show that infinitely wide hypernetworks do not guarantee convergence to a global minimum under gradient descent.
We identify the functional priors of these architectures by deriving their corresponding GP and NTK kernels.
As part of this study, we make a mathematical contribution by deriving tight bounds on high order Taylor terms of standard fully connected ReLU networks.
- Score: 101.03630454105621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: {\em Hypernetworks} are architectures that produce the weights of a
task-specific {\em primary network}. A notable application of hypernetworks in
the recent literature involves learning to output functional representations.
In these scenarios, the hypernetwork learns a representation corresponding to
the weights of a shallow MLP, which typically encodes shape or image
information. While such representations have seen considerable success in
practice, they lack the theoretical guarantees that standard architectures enjoy
in the wide regime. In this work, we study wide over-parameterized
hypernetworks. We show that unlike typical architectures, infinitely wide
hypernetworks do not guarantee convergence to a global minimum under gradient
descent. We further show that convexity can be achieved by increasing the
dimensionality of the hypernetwork's output, to represent wide MLPs. In the
dually infinite-width regime, we identify the functional priors of these
architectures by deriving their corresponding GP and NTK kernels, the latter of
which we refer to as the {\em hyperkernel}. As part of this study, we make a
mathematical contribution by deriving tight bounds on high order Taylor
expansion terms of standard fully connected ReLU networks.
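To make the setup concrete, the following minimal sketch shows a hypernetwork that maps a task embedding to the weights of a small primary MLP, which is then evaluated at input coordinates. It is written in PyTorch; the module sizes, the two-layer primary network, and the coordinate-based usage are illustrative assumptions, not the exact architectures analyzed in the paper.

```python
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    """Maps a task embedding z to the flattened weights of a small primary MLP.

    Minimal illustrative sketch: widths and layer counts are arbitrary choices,
    not the architectures analyzed in the paper.
    """
    def __init__(self, z_dim=64, hidden=256, in_dim=2, prim_hidden=32, out_dim=1):
        super().__init__()
        # Shapes of the primary network's parameters: [W1, b1, W2, b2].
        self.shapes = [(prim_hidden, in_dim), (prim_hidden,),
                       (out_dim, prim_hidden), (out_dim,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, z):
        flat = self.net(z)                      # all primary weights, flattened
        params, i = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            params.append(flat[i:i + n].view(s))
            i += n
        return params

def primary_mlp(x, params):
    """Evaluates the primary network g(x; theta) with hypernetwork-produced weights."""
    W1, b1, W2, b2 = params
    h = torch.relu(x @ W1.t() + b1)
    return h @ W2.t() + b2

# Usage: one functional (e.g. image or shape) representation per embedding z.
hyper = HyperNetwork()
z = torch.randn(64)                             # task / shape embedding
coords = torch.rand(128, 2)                     # e.g. 2-D coordinates
values = primary_mlp(coords, hyper(z))          # (128, 1) predicted signal
```

In the paper's wide regime, both the hypernetwork's hidden layer and the primary network's hidden layer would be taken large; they are kept small here for readability.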
Related papers
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings on benchmarks can easily change when the networks are simply trained better.
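As a rough illustration of tying initialization and learning rate to the architecture, the sketch below uses a 1/fan-in weight variance and a 1/width learning-rate rule; these are generic conventions chosen for illustration, not the prescriptions derived in that paper.

```python
import math
import torch
import torch.nn as nn

def build_mlp(widths):
    """Fully connected ReLU network with fan-in-scaled Gaussian initialization.

    The 1/fan_in weight variance and the 1/width learning rate below are generic
    conventions used only to illustrate architecture-dependent hyperparameters;
    they are not the prescriptions derived in that paper.
    """
    layers = []
    for fan_in, fan_out in zip(widths[:-1], widths[1:]):
        lin = nn.Linear(fan_in, fan_out)
        nn.init.normal_(lin.weight, std=1.0 / math.sqrt(fan_in))  # depends on layer width
        nn.init.zeros_(lin.bias)
        layers += [lin, nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # drop the trailing ReLU

widths = [784, 512, 512, 10]
model = build_mlp(widths)
lr = 0.1 / max(widths[1:-1])            # illustrative width-dependent learning rate
opt = torch.optim.SGD(model.parameters(), lr=lr)
```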
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- Hypergraph Transformer for Semi-Supervised Classification [50.92027313775934]
We propose a novel hypergraph learning framework, the HyperGraph Transformer (HyperGT).
HyperGT uses a Transformer-based neural network architecture to effectively consider global correlations among all nodes and hyperedges.
It achieves comprehensive hypergraph representation learning by effectively incorporating global interactions while preserving local connectivity patterns.
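A hedged sketch of the high-level idea of attending jointly over node and hyperedge tokens with a standard Transformer encoder; the tokenization, positional handling, and any structural components used by HyperGT are not reproduced here.

```python
import torch
import torch.nn as nn

class JointHypergraphAttention(nn.Module):
    """Self-attention over a concatenated sequence of node and hyperedge tokens.

    Illustrative only: HyperGT's exact tokenization, positional encodings, and
    structural components are not reproduced here.
    """
    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, node_feats, hyperedge_feats):
        # node_feats: (N, dim), hyperedge_feats: (M, dim)
        tokens = torch.cat([node_feats, hyperedge_feats], dim=0).unsqueeze(0)
        out = self.encoder(tokens).squeeze(0)    # every token attends to all others
        n = node_feats.shape[0]
        return out[:n], out[n:]                  # updated node / hyperedge representations

nodes, edges = torch.randn(10, 64), torch.randn(4, 64)
node_out, edge_out = JointHypergraphAttention()(nodes, edges)
```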
arXiv Detail & Related papers (2023-12-18T17:50:52Z)
- HyperS2V: A Framework for Structural Representation of Nodes in Hyper Networks [8.391883728680439]
Hyper networks can represent more complex relationships among nodes and store more extensive information.
This research introduces HyperS2V, a node embedding approach that centers on the structural similarity within hyper networks.
arXiv Detail & Related papers (2023-11-07T17:26:31Z)
- Topology-guided Hypergraph Transformer Network: Unveiling Structural Insights for Improved Representation [1.1606619391009658]
We propose a Topology-guided Hypergraph Transformer Network (THTN).
In this model, we first formulate a hypergraph from a graph while retaining its structural essence to learn higher-order relations within the graph.
We present a structure-aware self-attention mechanism that discovers the important nodes and hyperedges from both semantic and structural viewpoints.
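One common way to formulate a hypergraph from a graph is to turn each node's closed neighborhood into a hyperedge; the sketch below uses that generic construction purely for illustration and is not necessarily THTN's construction.

```python
from collections import defaultdict

def neighborhood_hypergraph(edges, num_nodes):
    """Builds one hyperedge per node: the node together with its graph neighbors.

    A generic lifting from graphs to hypergraphs, shown only to illustrate the
    idea of formulating a hypergraph from a graph; not necessarily THTN's choice.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return [frozenset({v} | adj[v]) for v in range(num_nodes)]

# Usage: the 4-node path 0-1-2-3 yields hyperedges {0,1}, {0,1,2}, {1,2,3}, {2,3}.
hyperedges = neighborhood_hypergraph([(0, 1), (1, 2), (2, 3)], num_nodes=4)
```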
arXiv Detail & Related papers (2023-10-14T20:08:54Z)
- Network Alignment with Transferable Graph Autoencoders [79.89704126746204]
We propose a novel graph autoencoder architecture designed to extract powerful and robust node embeddings.
We prove that the generated embeddings are associated with the eigenvalues and eigenvectors of the graphs.
Our proposed framework also leverages transfer learning and data augmentation to achieve efficient network alignment at a very large scale without retraining.
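Because the summary ties the embeddings to graph spectra, here is a hedged sketch of spectrally flavored node embeddings and a naive nearest-neighbor alignment step; the trained autoencoder, transfer learning, and large-scale alignment procedure of that paper are not reproduced.

```python
import torch

def spectral_embedding(adj, k=8):
    """Node embeddings from the top-k eigenpairs of a symmetric adjacency matrix.

    Illustrative stand-in tying embeddings to eigenvalues/eigenvectors; the
    paper's embeddings come from a trained graph autoencoder, not from a direct
    eigendecomposition.
    """
    evals, evecs = torch.linalg.eigh(adj)        # eigenvalues in ascending order
    return evecs[:, -k:] * evals[-k:]            # eigenvectors scaled by eigenvalues

def align(emb_a, emb_b):
    """Proposes, for each node of graph A, its nearest neighbor in graph B."""
    return torch.cdist(emb_a, emb_b).argmin(dim=1)

n = 20
a = torch.rand(n, n)
adj_a = ((a + a.t()) > 1.0).float()              # random symmetric adjacency
perm = torch.randperm(n)
adj_b = adj_a[perm][:, perm]                     # graph B = relabeled copy of graph A
candidate_matching = align(spectral_embedding(adj_a), spectral_embedding(adj_b))
```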
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- From Hypergraph Energy Functions to Hypergraph Neural Networks [94.88564151540459]
We present an expressive family of parameterized, hypergraph-regularized energy functions.
We then demonstrate how minimizers of these energies effectively serve as node embeddings.
We draw parallels between the proposed bilevel hypergraph optimization, and existing GNN architectures in common use.
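A minimal sketch of the "embeddings as energy minimizers" idea: gradient descent on a toy hypergraph-regularized energy. The energy family and the bilevel training loop in that paper are richer than this.

```python
import torch

def hyperedge_variance(X, hyperedges):
    """Penalizes disagreement of embeddings within each hyperedge."""
    return sum(((X[list(e)] - X[list(e)].mean(dim=0)) ** 2).sum() for e in hyperedges)

def minimize_energy(features, hyperedges, lam=1.0, steps=200, lr=0.1):
    """Returns an approximate minimizer of a toy hypergraph-regularized energy
    E(X) = ||X - features||^2 + lam * sum_e (within-hyperedge variance),
    used as node embeddings. Written for illustration only; the paper studies a
    richer parameterized energy family and a bilevel training procedure.
    """
    X = features.clone().requires_grad_(True)
    opt = torch.optim.SGD([X], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        energy = ((X - features) ** 2).sum() + lam * hyperedge_variance(X, hyperedges)
        energy.backward()
        opt.step()
    return X.detach()

feats = torch.randn(5, 3)
embeddings = minimize_energy(feats, hyperedges=[[0, 1, 2], [2, 3, 4]])
```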
arXiv Detail & Related papers (2023-06-16T04:40:59Z)
- Globally Gated Deep Linear Networks [3.04585143845864]
We introduce Globally Gated Deep Linear Networks (GGDLNs) where gating units are shared among all processing units in each layer.
We derive exact equations for the generalization properties in these networks in the finite-width thermodynamic limit.
Our work is the first exact theoretical solution of learning in a family of nonlinear networks with finite width.
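A hedged sketch of the gating structure described above: linear processing units modulated by gates that are shared across all units in a layer. The layer sizes and the exact form of the gates are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GloballyGatedLinearLayer(nn.Module):
    """Linear processing units modulated by gating units shared across the layer.

    Illustrative sketch of the shared-gating idea; the exact parameterization and
    the finite-width theory in the paper are not reproduced.
    """
    def __init__(self, d_in, d_out, n_gates=4):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out, bias=False)   # linear processing units
        self.gate = nn.Linear(d_in, n_gates, bias=False)   # gating units (shared)
        self.mix = nn.Parameter(torch.randn(n_gates, d_out) / n_gates ** 0.5)

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))            # (batch, n_gates), shared by all units
        return self.linear(x) * (g @ self.mix)     # elementwise gating of linear outputs

net = nn.Sequential(GloballyGatedLinearLayer(16, 32), GloballyGatedLinearLayer(32, 8))
out = net(torch.randn(5, 16))
```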
arXiv Detail & Related papers (2022-10-31T16:21:56Z)
- Mastering Spatial Graph Prediction of Road Networks [18.321172168775472]
We propose a graph-based framework that simulates the addition of sequences of graph edges.
In particular, given a partially generated graph associated with a satellite image, an RL agent nominates modifications that maximize a cumulative reward.
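A toy sketch of the sequential edge-addition loop described above; the placeholder policy and reward function below are stand-ins, not the paper's environment or agent.

```python
import random

def road_graph_episode(candidate_edges, reward_fn, steps=10):
    """Grows a graph by adding one candidate edge per step and accumulating reward.

    Stand-in for an RL rollout: a real agent would score actions with a learned,
    image-conditioned policy instead of the random choice used here.
    """
    graph, total_reward = set(), 0.0
    for _ in range(steps):
        remaining = [e for e in candidate_edges if e not in graph]
        if not remaining:
            break
        action = random.choice(remaining)          # placeholder policy
        graph.add(action)
        total_reward += reward_fn(graph, action)   # cumulative reward signal
    return graph, total_reward

edges = [(i, i + 1) for i in range(20)]
g, r = road_graph_episode(edges, reward_fn=lambda graph, e: 1.0)
```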
arXiv Detail & Related papers (2022-10-03T11:26:09Z)
- Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks [75.33431791218302]
We study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape.
We consider a deep parallel ReLU network architecture, which also includes standard deep networks and ResNets as its special cases.
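A minimal sketch of a parallel ReLU architecture with a path-norm style penalty; the penalty is a generic path regularizer written for illustration and does not reproduce the paper's exact regularizer or its convex reformulation.

```python
import torch
import torch.nn as nn

class ParallelReLUNet(nn.Module):
    """Sums the outputs of several independent two-layer ReLU branches."""
    def __init__(self, d_in, width, branches=4):
        super().__init__()
        self.first = nn.ModuleList(nn.Linear(d_in, width, bias=False) for _ in range(branches))
        self.second = nn.ModuleList(nn.Linear(width, 1, bias=False) for _ in range(branches))

    def forward(self, x):
        return sum(v(torch.relu(w(x))) for w, v in zip(self.first, self.second))

    def path_penalty(self):
        # Generic path-norm-style term: products of absolute weights along each path.
        return sum((v.weight.abs() @ w.weight.abs()).sum()
                   for w, v in zip(self.first, self.second))

model = ParallelReLUNet(d_in=10, width=16)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = ((model(x) - y) ** 2).mean() + 1e-3 * model.path_penalty()
loss.backward()
```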
arXiv Detail & Related papers (2021-10-18T18:00:36Z)
- Asymptotics of Wide Convolutional Neural Networks [18.198962344790377]
We study scaling laws for wide CNNs and networks with skip connections.
We find that the difference in performance between finite and infinite width models vanishes at a definite rate with respect to model width.
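In equation form, such a statement is usually expressed as a power-law decay of the finite-width gap; the exponent and constant below are placeholders rather than the values estimated in that paper.

```latex
% Illustrative width-asymptotic scaling law: P_n is a test metric of a
% width-n model, P_\infty its infinite-width limit. The exponent \alpha > 0
% and the constant c are placeholders, not values taken from the paper.
\left| P_n - P_\infty \right| \;\approx\; \frac{c}{n^{\alpha}},
\qquad n = \text{model width}.
```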
arXiv Detail & Related papers (2020-08-19T21:22:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.