Proving Linear Mode Connectivity of Neural Networks via Optimal
Transport
- URL: http://arxiv.org/abs/2310.19103v2
- Date: Fri, 1 Mar 2024 18:45:38 GMT
- Title: Proving Linear Mode Connectivity of Neural Networks via Optimal
Transport
- Authors: Damien Ferbach, Baptiste Goujaud, Gauthier Gidel, Aymeric Dieuleveut
- Abstract summary: We provide a framework theoretically explaining this empirical observation.
We show how the dimension of the support of the weight distribution of neurons, which dictates Wasserstein convergence rates, is correlated with linear mode connectivity.
- Score: 27.794244660649085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The energy landscape of high-dimensional non-convex optimization problems is
crucial to understanding the effectiveness of modern deep neural network
architectures. Recent works have experimentally shown that two different
solutions found after two runs of a stochastic training are often connected by
very simple continuous paths (e.g., linear) modulo a permutation of the
weights. In this paper, we provide a framework theoretically explaining this
empirical observation. Based on convergence rates in Wasserstein distance of
empirical measures, we show that, with high probability, two wide enough
two-layer neural networks trained with stochastic gradient descent are linearly
connected. Additionally, we express upper and lower bounds on the width of each
layer of two deep neural networks with independent neuron weights to be
linearly connected. Finally, we empirically demonstrate the validity of our
approach by showing how the dimension of the support of the weight distribution
of neurons, which dictates Wasserstein convergence rates, is correlated with
linear mode connectivity.
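To make the permutation-matching idea concrete, the following is a minimal, hypothetical sketch rather than the authors' exact procedure: the hidden neurons of two two-layer ReLU networks are matched by solving an assignment problem on their weight vectors, and the loss is then evaluated along the linear path between the aligned parameters. The toy data, the randomly drawn stand-in networks, and the matching cost are all illustrative assumptions.

```python
# Minimal hypothetical sketch: align two two-layer ReLU networks by a permutation
# of hidden neurons (assignment on weight vectors), then scan the linear path.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
d, m, n = 10, 512, 2000                      # input dim, hidden width, eval samples
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])                          # toy regression target

def forward(params, X):
    W1, b1, w2 = params                      # hidden weights, biases, output weights
    return np.maximum(X @ W1.T + b1, 0.0) @ w2

def mse(params, X, y):
    return np.mean((forward(params, X) - y) ** 2)

def random_net():
    # stand-in for "a network trained by SGD"; training itself is omitted here
    return (rng.normal(size=(m, d)) / np.sqrt(d),
            0.1 * rng.normal(size=m),
            rng.normal(size=m) / m)

net_a, net_b = random_net(), random_net()

# Cost of matching neuron i of net_a to neuron j of net_b: distance between
# their concatenated (incoming weights, bias, outgoing weight) vectors.
feat_a = np.hstack([net_a[0], net_a[1][:, None], net_a[2][:, None]])
feat_b = np.hstack([net_b[0], net_b[1][:, None], net_b[2][:, None]])
cost = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=-1)
_, perm = linear_sum_assignment(cost)        # permutation minimizing total cost

net_b_aligned = tuple(p[perm] for p in net_b)

# Loss along the linear path between net_a and the permuted net_b.
for t in np.linspace(0.0, 1.0, 11):
    params_t = tuple((1 - t) * pa + t * pb for pa, pb in zip(net_a, net_b_aligned))
    print(f"t={t:.1f}  loss={mse(params_t, X, y):.4f}")
```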
Related papers
- Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z) - Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with
Linear Convergence Rates [7.094295642076582]
The mean-field regime is a theoretically attractive alternative to the NTK (lazy training) regime.
We establish a new linear convergence result for two-layer neural networks trained by continuous-time noisy descent in the mean-field regime.
arXiv Detail & Related papers (2022-05-19T21:05:40Z) - Training invariances and the low-rank phenomenon: beyond linear networks [44.02161831977037]
When one trains a deep linear network with logistic or exponential loss on linearly separable data, the weights are known to converge to rank-$1$ matrices.
This work gives the first rigorous proof of a low-rank phenomenon for nonlinear ReLU-activated feedforward networks.
Our proof relies on a specific decomposition of the network into a multilinear function and another ReLU network whose weights are constant under a certain parameter directional convergence.
arXiv Detail & Related papers (2022-01-28T07:31:19Z) - Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean field Langevin dynamics is presented.
The proximal Gibbs distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization.
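As a rough illustration of the dynamics analyzed in the two mean-field entries above, here is an assumption-laden sketch, not either paper's algorithm: noisy gradient descent, i.e., a discretized Langevin-type update, on the hidden weights of a two-layer network. The data, activation, step size, and noise level are arbitrary choices.

```python
# Illustrative sketch: discrete-time noisy gradient descent (Langevin-type update)
# on the hidden weights of a two-layer network with fixed mean-field output weights.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 256, 500
X = rng.normal(size=(n, d))
y = np.tanh(X @ rng.normal(size=d))          # toy target

W = rng.normal(size=(m, d))                  # hidden-neuron weights (the "particles")
a = np.full(m, 1.0 / m)                      # fixed output weights (1/m scaling)
lr, sigma, steps = 0.05, 0.05, 2000          # step size, noise level, iterations

def predict(W):
    return np.tanh(X @ W.T) @ a

for _ in range(steps):
    residual = predict(W) - y                               # (n,)
    act_grad = 1.0 - np.tanh(X @ W.T) ** 2                  # (n, m)
    grad_W = (act_grad * residual[:, None]).T @ X * a[:, None] * (2.0 / n)
    W -= lr * grad_W                                        # gradient step
    W += np.sqrt(2.0 * lr) * sigma * rng.normal(size=W.shape)  # injected noise

print("final mse:", np.mean((predict(W) - y) ** 2))
```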
arXiv Detail & Related papers (2022-01-25T17:13:56Z) - Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z) - Linear approximability of two-layer neural networks: A comprehensive
analysis based on spectral decay [4.042159113348107]
We first consider the case of a single neuron and show that the linear approximability, quantified by the Kolmogorov width, is controlled by the eigenvalue decay of an associated kernel.
We show that similar results also hold for two-layer neural networks.
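A small numerical sketch of the spectral-decay idea (illustrative assumptions throughout, not the paper's exact kernel or analysis): approximate the kernel $k(x, x') = \mathbb{E}_w[\mathrm{relu}(w \cdot x)\,\mathrm{relu}(w \cdot x')]$ of a single random ReLU neuron via random features and inspect the eigenvalue decay of its Gram matrix.

```python
# Illustrative sketch: Monte-Carlo estimate of the eigenvalue decay of the kernel
# k(x, x') = E_w[relu(w.x) relu(w.x')] associated with a single random ReLU neuron.
import numpy as np

rng = np.random.default_rng(0)
d, n_samples, n_features = 8, 400, 5000
X = rng.normal(size=(n_samples, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)         # data on the unit sphere

W = rng.normal(size=(n_features, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)         # neuron weights on the sphere

Phi = np.maximum(X @ W.T, 0.0)                        # random-feature map, (n, N)
K = Phi @ Phi.T / n_features                          # Gram matrix of the kernel

eigvals = np.linalg.eigvalsh(K)[::-1] / n_samples     # descending, normalized
print("top 10 eigenvalues:        ", np.round(eigvals[:10], 5))
print("decay relative to largest: ", np.round(eigvals[:10] / eigvals[0], 5))
```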
arXiv Detail & Related papers (2021-08-10T23:30:29Z) - A Convergence Theory Towards Practical Over-parameterized Deep Neural
Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in the depth, within a number of iterations logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z) - Optimizing Mode Connectivity via Neuron Alignment [84.26606622400423]
Empirically, the local minima of loss functions can be connected by a learned curve in model space along which the loss remains nearly constant.
We propose a more general framework to investigate the effect of symmetry on landscape connectivity by accounting for the weight permutations of the networks being connected.
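One common alignment heuristic in this spirit is sketched below (hedged: not necessarily this paper's exact objective): match the neurons of two networks by the correlation of their hidden activations on shared inputs and permute one network accordingly. The architectures and data are toy assumptions.

```python
# Illustrative sketch: align hidden neurons of two networks by maximizing the
# correlation of their activations on shared inputs (a common alignment heuristic).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
d, m, n = 10, 128, 1000
X = rng.normal(size=(n, d))
W1_a, W1_b = rng.normal(size=(m, d)), rng.normal(size=(m, d))

H_a = np.maximum(X @ W1_a.T, 0.0)            # hidden activations of network A, (n, m)
H_b = np.maximum(X @ W1_b.T, 0.0)            # hidden activations of network B, (n, m)

# Correlation between every pair of neurons (A_i, B_j); maximize total correlation.
H_a_c = (H_a - H_a.mean(0)) / (H_a.std(0) + 1e-8)
H_b_c = (H_b - H_b.mean(0)) / (H_b.std(0) + 1e-8)
corr = H_a_c.T @ H_b_c / n                   # (m, m) correlation matrix
_, perm = linear_sum_assignment(-corr)       # assignment maximizing total correlation

W1_b_aligned = W1_b[perm]                    # network B's neurons reordered to match A
print("mean matched correlation:", corr[np.arange(m), perm].mean())
```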
arXiv Detail & Related papers (2020-09-05T02:25:23Z) - Global Convergence of Second-order Dynamics in Two-layer Neural Networks [10.415177082023389]
Recent results have shown that for two-layer fully connected neural networks, gradient flow converges to a global optimum in the infinite width limit.
We ask whether the same holds for second-order dynamics and show that the answer is positive for the heavy ball method.
While our results hold in the mean field limit, numerical simulations indicate that global convergence may already occur for reasonably small networks.
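For concreteness, a toy sketch of heavy ball (momentum) updates on a two-layer network, a discrete-time analogue of the second-order dynamics studied here; the step size, momentum coefficient, data, and fixed output weights are illustrative assumptions.

```python
# Illustrative sketch: heavy ball (momentum) updates for a two-layer network,
# a discrete-time analogue of second-order training dynamics.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 200, 400
X = rng.normal(size=(n, d))
y = np.cos(X[:, 0])                           # toy target

W = rng.normal(size=(m, d)) / np.sqrt(d)      # hidden weights
a = np.full(m, 1.0 / m)                       # fixed output weights
V = np.zeros_like(W)                          # velocity (momentum buffer)
lr, beta = 0.1, 0.9                           # step size, momentum coefficient

def predict(W):
    return np.tanh(X @ W.T) @ a

for _ in range(2000):
    residual = predict(W) - y
    grad_W = ((1 - np.tanh(X @ W.T) ** 2) * residual[:, None]).T @ X * a[:, None] * (2 / n)
    V = beta * V - lr * grad_W                # heavy ball: accumulate velocity
    W = W + V

print("final mse:", np.mean((predict(W) - y) ** 2))
```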
arXiv Detail & Related papers (2020-07-14T07:01:57Z) - Revealing the Structure of Deep Neural Networks via Convex Duality [70.15611146583068]
We study regularized deep neural networks (DNNs) and introduce a convex analytic framework to characterize the structure of hidden layers.
We show that a set of optimal hidden layer weights for a norm regularized training problem can be explicitly found as the extreme points of a convex set.
We apply the same characterization to deep ReLU networks with whitened data and prove the same weight alignment holds.
arXiv Detail & Related papers (2020-02-22T21:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.