Efficient kernel surrogates for neural network-based regression
- URL: http://arxiv.org/abs/2310.18612v2
- Date: Wed, 24 Jan 2024 11:19:11 GMT
- Title: Efficient kernel surrogates for neural network-based regression
- Authors: Saad Qadeer, Andrew Engel, Amanda Howard, Adam Tsou, Max Vargas, Panos
Stinis, and Tony Chiang
- Abstract summary: We study the performance of the Conjugate Kernel (CK), an efficient approximation to the Neural Tangent Kernel (NTK).
We show that the CK performance is only marginally worse than that of the NTK and, in certain cases, is even superior.
In addition to providing a theoretical grounding for using CKs instead of NTKs, our framework suggests a recipe for improving DNN accuracy inexpensively.
- Score: 0.8030359871216615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their immense promise in performing a variety of learning tasks, a
theoretical understanding of the limitations of Deep Neural Networks (DNNs) has
so far eluded practitioners. This is partly due to the inability to determine
the closed forms of the learned functions, making it harder to study their
generalization properties on unseen datasets. Recent work has shown that
randomly initialized DNNs in the infinite width limit converge to kernel
machines relying on a Neural Tangent Kernel (NTK) with known closed form. These
results suggest, and experimental evidence corroborates, that empirical kernel
machines can also act as surrogates for finite width DNNs. The high
computational cost of assembling the full NTK, however, makes this approach
infeasible in practice, motivating the need for low-cost approximations. In the
current work, we study the performance of the Conjugate Kernel (CK), an
efficient approximation to the NTK that has been observed to yield fairly
similar results. For the regression of smooth functions and for logistic
regression classification, we show that the CK performance is only marginally
worse than that of the NTK and, in certain cases, is even superior. In
particular, we establish bounds for the relative test losses, verify them with
numerical tests, and identify the regularity of the kernel as the key
determinant of performance. In addition to providing a theoretical grounding
for using CKs instead of NTKs, our framework suggests a recipe for improving
DNN accuracy inexpensively. We present a demonstration of this on the
foundation model GPT-2 by comparing its performance on a classification task
using a conventional approach and our prescription. We also show how our
approach can be used to improve physics-informed operator network training for
regression tasks as well as convolutional neural network training for vision
classification tasks.
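
The following is a minimal, self-contained sketch (not the authors' code) of the surrogate idea described above, under illustrative assumptions: for a randomly initialized two-layer ReLU network, the empirical CK is the Gram matrix of the last-layer features (a single forward pass), while the empirical NTK additionally includes gradient terms with respect to the inner weights; kernel ridge regression with either Gram matrix then acts as a surrogate for the trained network on a toy smooth-function regression task. All function and variable names are hypothetical.

```python
# Sketch: Conjugate Kernel (CK) vs. Neural Tangent Kernel (NTK) surrogates for a
# random two-layer ReLU network f(x) = v . phi(W x) / sqrt(m). Not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def features(X, W):
    """Hidden-layer (post-activation) features phi(W x)."""
    return np.maximum(X @ W.T, 0.0)                      # shape (n, m)

def conjugate_kernel(X1, X2, W):
    """Empirical CK: inner products of last-layer features (cheap: forward pass only)."""
    m = W.shape[0]
    return features(X1, W) @ features(X2, W).T / m

def neural_tangent_kernel(X1, X2, W, v):
    """Empirical NTK: CK term plus gradients w.r.t. the inner weights W."""
    m = W.shape[0]
    ck = features(X1, W) @ features(X2, W).T / m         # gradient w.r.t. outer weights v
    D1 = (X1 @ W.T > 0).astype(float) * v                # v_k * ReLU'(w_k . x) per sample
    D2 = (X2 @ W.T > 0).astype(float) * v
    return ck + (X1 @ X2.T) * (D1 @ D2.T) / m            # gradient w.r.t. inner weights W

def kernel_ridge_fit(K_train, y, reg=1e-6):
    """Solve (K + reg I) alpha = y for the kernel regression coefficients."""
    return np.linalg.solve(K_train + reg * np.eye(K_train.shape[0]), y)

# Toy regression of a smooth target with both kernel surrogates.
d, m, n = 5, 512, 200
W = rng.normal(size=(m, d)) / np.sqrt(d)                 # random first layer
v = rng.normal(size=m)                                   # random second layer
X_train, X_test = rng.normal(size=(n, d)), rng.normal(size=(50, d))
y_train, y_test = np.sin(X_train.sum(axis=1)), np.sin(X_test.sum(axis=1))

for name, kern in [("CK", conjugate_kernel),
                   ("NTK", lambda A, B, W: neural_tangent_kernel(A, B, W, v))]:
    alpha = kernel_ridge_fit(kern(X_train, X_train, W), y_train)
    pred = kern(X_test, X_train, W) @ alpha
    print(name, "test MSE:", np.mean((pred - y_test) ** 2))
```

The contrast in cost is the point of the sketch: the CK only requires the features of the final hidden layer, whereas assembling the full NTK also involves the Jacobian with respect to the earlier weights, which is what becomes prohibitive for large models.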
Related papers
- Kernel vs. Kernel: Exploring How the Data Structure Affects Neural Collapse [9.975341265604577]
"Neural Collapse" is the decrease in the within class variability of the network's deepest features, dubbed as NC1.
We provide a kernel-based analysis that does not suffer from this limitation.
We show that the NTK does not represent more collapsed features than the NNGP for prototypical data models.
arXiv Detail & Related papers (2024-06-04T08:33:56Z) - Fixing the NTK: From Neural Network Linearizations to Exact Convex
Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - On Feature Learning in Neural Networks with Global Convergence
Guarantees [49.870593940818715]
We study the optimization of wide neural networks (NNs) via gradient flow (GF).
We show that when the input dimension is no less than the size of the training set, the training loss converges to zero at a linear rate under GF.
We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart.
arXiv Detail & Related papers (2022-04-22T15:56:43Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z) - Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
Kernel methods outperform fully-connected finite-width networks.
Centered and ensembled finite networks have reduced posterior variance.
Weight decay and the use of a large learning rate break the correspondence between finite and infinite networks.
arXiv Detail & Related papers (2020-07-31T01:57:47Z) - When and why PINNs fail to train: A neural tangent kernel perspective [2.1485350418225244]
We derive the Neural Tangent Kernel (NTK) of PINNs and prove that, under appropriate conditions, it converges to a deterministic kernel that stays constant during training in the infinite-width limit.
We find a remarkable discrepancy in the convergence rate of the different loss components contributing to the total training error.
We propose a novel gradient descent algorithm that utilizes the eigenvalues of the NTK to adaptively calibrate the convergence rate of the total training error.
arXiv Detail & Related papers (2020-07-28T23:44:56Z)