Rethinking Influence Functions of Neural Networks in the
Over-parameterized Regime
- URL: http://arxiv.org/abs/2112.08297v1
- Date: Wed, 15 Dec 2021 17:44:00 GMT
- Title: Rethinking Influence Functions of Neural Networks in the
Over-parameterized Regime
- Authors: Rui Zhang, Shihua Zhang
- Abstract summary: The influence function (IF) is designed to measure the effect of removing a single training point on a neural network.
We use neural tangent kernel (NTK) theory to calculate the IF for neural networks trained with regularized mean-square loss.
- Score: 12.501827786462924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the black-box prediction for neural networks is challenging. To
achieve this, early studies have designed the influence function (IF) to measure
the effect of removing a single training point on neural networks. However, the
classic implicit Hessian-vector product (IHVP) method for calculating IF is
fragile, and theoretical analysis of IF in the context of neural networks is
still lacking. To this end, we utilize the neural tangent kernel (NTK) theory
to calculate IF for the neural network trained with regularized mean-square
loss, and prove that the approximation error can be arbitrarily small when the
width is sufficiently large for two-layer ReLU networks. We analyze the error
bound of the classic IHVP method in the over-parameterized regime to
understand when and why it fails. Specifically, our theoretical analysis
reveals that (1) the accuracy of IHVP depends on the regularization term and
is quite low under weak regularization; (2) the accuracy of IHVP has a
significant correlation with the probability density of corresponding training
points. We further draw on NTK theory to understand IFs better, including
quantifying the complexity of influential samples and characterizing how IFs
vary over the training dynamics. Numerical experiments on real-world data
confirm our theoretical results and support our findings.
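To make the setup concrete, here is a minimal NumPy sketch (not the authors' implementation) that compares the classic influence-function estimate with exact leave-one-out retraining for a two-layer ReLU network in its linearized (NTK-feature) regime, trained with a regularized mean-square loss. The width, ridge strength, and synthetic data are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): compare the classic influence-function
# estimate with exact leave-one-out retraining for a two-layer ReLU network in
# its linearized (NTK-feature) regime, trained with regularized mean-square loss.
# Width, lambda, and the synthetic data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, width, lam = 40, 5, 512, 1.0   # samples, input dim, hidden width, ridge strength

X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
x_test = rng.normal(size=(1, d))

# Two-layer ReLU network f(x) = a^T relu(W x) / sqrt(width) at random initialization.
W = rng.normal(size=(width, d))
a = rng.normal(size=width)

def ntk_features(Xb):
    """Gradients of f w.r.t. (W, a), flattened: the empirical NTK feature map phi(x)."""
    pre = Xb @ W.T                                                       # (m, width)
    gate = (pre > 0).astype(float)
    grad_a = np.maximum(pre, 0.0) / np.sqrt(width)                       # df/da
    grad_W = (gate * a)[:, :, None] * Xb[:, None, :] / np.sqrt(width)    # df/dW
    return np.concatenate([grad_W.reshape(len(Xb), -1), grad_a], axis=1)

def ridge_fit(Phi, targets, norm):
    """Closed-form minimizer of (1/norm) * sum_j (phi_j^T w - y_j)^2 + lam * ||w||^2."""
    m, p = Phi.shape
    if m <= p:  # dual (kernel) form is cheaper in the over-parameterized regime
        alpha = np.linalg.solve(Phi @ Phi.T / norm + lam * np.eye(m), targets / norm)
        return Phi.T @ alpha
    return np.linalg.solve(Phi.T @ Phi / norm + lam * np.eye(p), Phi.T @ targets / norm)

Phi, phi_test = ntk_features(X), ntk_features(x_test)[0]
w_hat = ridge_fit(Phi, y, norm=n)

# Classic IF for removing training point i (upweighting with eps = -1/n):
#   w_{-i} - w_hat  ~  (1/n) H^{-1} grad_w loss(z_i),  with H = (2/n) Phi^T Phi + 2*lam*I.
i = 0
H = 2.0 * Phi.T @ Phi / n + 2.0 * lam * np.eye(Phi.shape[1])
grad_i = 2.0 * (Phi[i] @ w_hat - y[i]) * Phi[i]          # per-example squared-loss gradient
if_change = phi_test @ np.linalg.solve(H, grad_i) / n    # first-order estimate of the change in f(x_test)

# Ground truth: retrain the linearized model without point i (same 1/n weighting,
# matching the eps = -1/n convention used in the IF derivation).
mask = np.arange(n) != i
true_change = phi_test @ (ridge_fit(Phi[mask], y[mask], norm=n) - w_hat)

print(f"IF estimate of prediction change : {if_change:+.6f}")
print(f"Exact leave-one-out change       : {true_change:+.6f}")
```

Note that the Hessian-inverse-vector product is computed here by a direct solve rather than the iterative IHVP approximations used in practice. With the fairly strong ridge term chosen above the two printed numbers should roughly agree; shrinking lam makes H ill-conditioned in the over-parameterized regime and the estimates drift apart, consistent with the paper's finding that IHVP accuracy is low under weak regularization.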
Related papers
- Divergence of Empirical Neural Tangent Kernel in Classification Problems [0.0]
In classification problems, fully connected neural networks (FCNs) and residual neural networks (ResNets) cannot be approximated by kernel logistic regression based on the Neural Tangent Kernel (NTK).
We show that the empirical NTK does not uniformly converge to the NTK across all times on the training samples as the network width increases.
arXiv Detail & Related papers (2025-04-15T12:30:21Z) - Uncertainty propagation in feed-forward neural network models [3.987067170467799]
We develop new uncertainty propagation methods for feed-forward neural network architectures.
We derive analytical expressions for the probability density function (PDF) of the neural network output.
A key finding is that an appropriate linearization of the leaky ReLU activation function yields accurate statistical results.
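As a rough illustration of that idea, the following is a generic moment-propagation sketch under assumed Gaussian inputs (not the paper's analytical derivation): affine layers transform a Gaussian exactly, while the leaky ReLU is handled by first-order linearization around the mean.

```python
# Generic linearization-based uncertainty propagation (illustrative assumptions;
# the paper itself derives analytical expressions for the output PDF).
import numpy as np

def propagate(mu, Sigma, layers, alpha=0.01):
    """Push an input Gaussian N(mu, Sigma) through affine + leaky-ReLU layers."""
    for W, b in layers:
        mu, Sigma = W @ mu + b, W @ Sigma @ W.T   # affine map: exact for Gaussians
        slope = np.where(mu > 0, 1.0, alpha)      # local slope of leaky ReLU at the mean
        mu = np.where(mu > 0, mu, alpha * mu)     # first-order (delta-method) approximation
        Sigma = Sigma * slope[:, None] * slope[None, :]
    return mu, Sigma

rng = np.random.default_rng(1)
layers = [(0.5 * rng.normal(size=(8, 4)), 0.2 * np.ones(8)),
          (0.5 * rng.normal(size=(2, 8)), np.zeros(2))]
mu0, Sigma0 = 0.5 * np.ones(4), 0.02 * np.eye(4)
mu_lin, Sigma_lin = propagate(mu0, Sigma0, layers)

# Monte-Carlo sanity check of the same network (leaky ReLU after every affine layer).
h = rng.multivariate_normal(mu0, Sigma0, size=50_000)
for W, b in layers:
    z = h @ W.T + b
    h = np.where(z > 0, z, 0.01 * z)
print("linearized mean:", mu_lin, "  MC mean:", h.mean(axis=0))
print("linearized var :", np.diag(Sigma_lin), "  MC var :", h.var(axis=0))
```

The linearized moments track the Monte-Carlo estimates when the pre-activations stay away from the kink at zero, which is the regime where such a linearization is expected to be accurate.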
arXiv Detail & Related papers (2025-03-27T00:16:36Z) - SGD method for entropy error function with smoothing l0 regularization for neural networks [3.108634881604788]
The entropy error function has been widely used in neural networks.
We propose a novel entropy function with smoothing l0 regularization for feed-forward neural networks.
The proposed loss enables neural networks to learn effectively and produce more accurate predictions.
arXiv Detail & Related papers (2024-05-28T19:54:26Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Learning Theory of Distribution Regression with Neural Networks [6.961253535504979]
We establish an approximation theory and a learning theory of distribution regression via a fully connected neural network (FNN).
In contrast to the classical regression methods, the input variables of distribution regression are probability measures.
arXiv Detail & Related papers (2023-07-07T09:49:11Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - A Kernel-Expanded Stochastic Neural Network [10.837308632004644]
Deep neural networks often get trapped in local minima during training.
The new kernel-expanded stochastic neural network (K-StoNet) model reformulates the network as a latent variable model.
The model can be easily trained using the imputation-regularized optimization (IRO) algorithm.
arXiv Detail & Related papers (2022-01-14T06:42:42Z) - Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity
on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z) - Mitigating Performance Saturation in Neural Marked Point Processes:
Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time and that the likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Learning and Generalization in Overparameterized Normalizing Flows [13.074242275886977]
Normalizing flows (NFs) constitute an important class of models in unsupervised learning.
We provide theoretical and empirical evidence that for a class of NFs containing most of the existing NF models, overparametrization hurts training.
We prove that unconstrained NFs can efficiently learn any reasonable data distribution under minimal assumptions when the underlying network is overparametrized.
arXiv Detail & Related papers (2021-06-19T17:11:42Z) - Persistent Homology Captures the Generalization of Neural Networks
Without A Validation Set [0.0]
We suggest studying the training of neural networks with Algebraic Topology, specifically Persistent Homology.
Using simplicial complex representations of neural networks, we study the PH diagram distance evolution on the neural network learning process.
Results show that the PH diagram distance between consecutive neural network states correlates with the validation accuracy.
arXiv Detail & Related papers (2021-05-31T09:17:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.