Abstract: The interplay between infinite-width neural networks (NNs) and classes of
Gaussian processes (GPs) has been well known since the seminal work of Neal (1996).
While numerous theoretical refinements have been proposed in recent years,
the interplay between NNs and GPs relies on two critical distributional
assumptions on the NN's parameters: A1) finite variance; A2) independent and
identically distributed (iid) parameters. In this paper, we consider the problem of
removing A1 in the general context of deep feed-forward convolutional NNs. In
particular, we assume iid parameters distributed according to a stable
distribution, and we show that the infinite-channel limit of a deep feed-forward
convolutional NN, under suitable scaling, is a stochastic process with
multivariate stable finite-dimensional distributions. Such a limiting
distribution is then characterized through an explicit backward recursion for
its parameters over the layers. Our contribution extends the results of Favaro et
al. (2020) to convolutional architectures, and it paves the way for expanding
exciting recent lines of research that rely on classes of GP limits.
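
For intuition on the scaling, the following is a minimal NumPy/SciPy sketch, not the paper's construction: the stability index, channel counts, and filter size below are illustrative assumptions. It draws the filters of a single convolutional layer iid from a symmetric alpha-stable distribution and rescales them by n^(-1/alpha), with n the number of input channels, which is the kind of scaling under which pre-activations admit stable limits as n grows.

```python
import numpy as np
from scipy.stats import levy_stable

# Illustrative sketch only: one convolutional layer with iid symmetric
# alpha-stable filters. All values below are assumptions, not the paper's.
alpha = 1.5                # stability index in (0, 2]; alpha = 2 is the Gaussian case
n_in, n_out, k = 64, 8, 3  # input/output channel counts and filter size

# iid symmetric alpha-stable filters (skew beta = 0), scaled by n_in**(-1/alpha);
# for alpha = 2 this reduces to the familiar 1/sqrt(n) Gaussian scaling.
W = levy_stable.rvs(alpha, 0.0, size=(n_out, n_in, k, k)) * n_in ** (-1.0 / alpha)

x = np.random.randn(n_in, 32, 32)  # toy single-image input
# Extract k x k patches and contract channels and filter taps ("valid" convolution).
patches = np.lib.stride_tricks.sliding_window_view(x, (k, k), axis=(1, 2))
pre_act = np.einsum('ocjk,chwjk->ohw', W, patches)
print(pre_act.shape)  # (8, 30, 30)
```

As n_in grows, with the same n_in**(-1/alpha) rescaling, any fixed collection of pre-activation entries approaches a multivariate stable law, mirroring the infinite-channel limit described above.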