Continual Learning with Extended Kronecker-factored Approximate
Curvature
- URL: http://arxiv.org/abs/2004.07507v1
- Date: Thu, 16 Apr 2020 07:58:47 GMT
- Title: Continual Learning with Extended Kronecker-factored Approximate
Curvature
- Authors: Janghyeon Lee, Hyeong Gwon Hong, Donggyu Joo, Junmo Kim
- Abstract summary: We propose a quadratic penalty method for continual learning of neural networks that contain batch normalization layers.
A Kronecker-factored approximate curvature (K-FAC) is used widely to practically compute the Hessian of a neural network.
We extend the K-FAC method so that the inter-example relations are taken into account and the Hessian of deep neural networks can be properly approximated.
- Score: 33.44290346786496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a quadratic penalty method for continual learning of neural
networks that contain batch normalization (BN) layers. The Hessian of a loss
function represents the curvature of the quadratic penalty function, and a
Kronecker-factored approximate curvature (K-FAC) is used widely to practically
compute the Hessian of a neural network. However, the approximation is not
valid if there is dependence between examples, typically caused by BN layers in
deep network architectures. We extend the K-FAC method so that the
inter-example relations are taken into account and the Hessian of deep neural
networks can be properly approximated under practical assumptions. We also
propose a method of weight merging and reparameterization to properly handle
statistical parameters of BN, which plays a critical role for continual
learning with BN, and a method that selects hyperparameters without source task
data. Our method shows better performance than baselines in the permuted MNIST
task with BN layers and in sequential learning from the ImageNet classification
task to fine-grained classification tasks with ResNet-50, without any explicit
or implicit use of source task data for hyperparameter selection.
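For intuition, the following is a minimal sketch (not the paper's exact procedure) of how a Kronecker-factored quadratic penalty can be formed for a single fully connected layer: the layer's Hessian/Fisher block is approximated as A ⊗ G, where A is the second moment of the layer inputs and G that of the pre-activation gradients, and the penalty is evaluated without materializing the Kronecker product. The PyTorch framing and the names kfac_factors and kfac_quadratic_penalty are illustrative assumptions; the paper's extended K-FAC additionally accounts for the inter-example dependence introduced by BN, which this sketch ignores.

    import torch

    def kfac_factors(activations, grad_preacts):
        # activations: (batch, in_features) inputs to a fully connected layer.
        # grad_preacts: (batch, out_features) gradients of the loss w.r.t. the
        # layer's pre-activations.
        # A homogeneous coordinate is appended so the bias is covered by the same factor.
        a = torch.cat([activations, activations.new_ones(activations.shape[0], 1)], dim=1)
        A = a.t() @ a / a.shape[0]                                    # (in+1, in+1) input factor
        G = grad_preacts.t() @ grad_preacts / grad_preacts.shape[0]   # (out, out) gradient factor
        return A, G

    def kfac_quadratic_penalty(weight, weight_star, A, G, lam=1.0):
        # weight, weight_star: (out_features, in_features + 1), bias as last column;
        # weight_star holds the parameters saved after the source task.
        # Uses vec(dW)^T (A kron G) vec(dW) = trace(dW^T G dW A), so the
        # Kronecker product is never formed explicitly.
        dW = weight - weight_star
        return 0.5 * lam * torch.trace(dW.t() @ G @ dW @ A)

In this sketch, A and G would be accumulated per layer over source-task data before training on the next task, and the penalty would be added to the new task's loss; the extension proposed in the paper further corrects the approximation for the cross-example terms that BN introduces.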
Related papers
- Concurrent Training and Layer Pruning of Deep Neural Networks [0.0]
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training.
We employ a structure with residual connections around nonlinear network sections, which keeps information flowing through the network once a nonlinear section is pruned.
arXiv Detail & Related papers (2024-06-06T23:19:57Z)
- Kronecker-Factored Approximate Curvature for Physics-Informed Neural Networks [3.7308074617637588]
We propose Kronecker-factored approximate curvature (KFAC) for PINN losses that greatly reduces the computational cost and allows scaling to much larger networks.
We find that our KFAC-based gradients are competitive with expensive second-order methods on small problems, scale more favorably to higher-dimensional neural networks and PDEs, and consistently outperform first-order methods and LBFGS.
arXiv Detail & Related papers (2024-05-24T14:36:02Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- Critical Initialization of Wide and Deep Neural Networks through Partial Jacobians: General Theory and Applications [6.579523168465526]
We introduce partial Jacobians of a network, defined as derivatives of preactivations in layer $l$ with respect to preactivations in layer $l_0 \leq l$.
We derive recurrence relations for the norms of partial Jacobians and utilize these relations to analyze criticality of deep fully connected neural networks with LayerNorm and/or residual connections.
arXiv Detail & Related papers (2021-11-23T20:31:42Z)
- Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent [95.94432031144716]
We propose a unified non-convex optimization framework for the analysis of neural network training.
We show that existing guarantees for networks trained by gradient descent can be unified through proxy convexity conditions.
arXiv Detail & Related papers (2021-06-25T17:45:00Z)
- Spline parameterization of neural network controls for deep learning [0.0]
We choose a fixed number of B-spline basis functions whose coefficients are the trainable parameters of the neural network.
We numerically show that the spline-based neural network increases the robustness of the learning problem with respect to hyperparameters.
arXiv Detail & Related papers (2021-02-27T19:35:45Z)
- Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
Kernel methods outperform fully-connected finite-width networks.
Centered and ensembled finite networks have reduced posterior variance.
Weight decay and the use of a large learning rate break the correspondence between finite and infinite networks.
arXiv Detail & Related papers (2020-07-31T01:57:47Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with non-convexity renders parameter learning susceptible to initialization.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences of its use.