A Novel Structured Natural Gradient Descent for Deep Learning
- URL: http://arxiv.org/abs/2109.10100v1
- Date: Tue, 21 Sep 2021 11:12:10 GMT
- Title: A Novel Structured Natural Gradient Descent for Deep Learning
- Authors: Weihua Liu, Xiabi Liu
- Abstract summary: We reconstruct the structure of the deep neural network and optimize the new network using traditional gradient descent (GD).
Experimental results show that our optimization method can accelerate the convergence of deep network models and achieve better performance than GD.
- Score: 3.0686953242470794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural gradient descent (NGD) has provided deep insights and powerful tools for
deep neural networks. However, the computation of the Fisher information matrix
becomes increasingly difficult as the network structure grows large and
complex. This paper proposes a new optimization method whose main idea is to
faithfully replace natural gradient optimization by reconstructing the
network. More specifically, we reconstruct the structure of the deep neural
network and optimize the new network using traditional gradient descent (GD).
The reconstructed network, trained with GD, achieves the same optimization
effect as natural gradient descent. Experimental results show that our optimization
method can accelerate the convergence of deep network models and achieve better
performance than GD while sharing its computational simplicity.
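To make the core idea concrete, here is a minimal sketch under our own assumptions (not the authors' implementation): if the parameters are linearly reparameterized as theta = S phi with S S^T = F^{-1}, then plain GD on phi reproduces the NGD update theta <- theta - lr * F^{-1} grad L(theta). The toy quadratic loss and the fixed SPD matrix standing in for the Fisher information below are purely illustrative.

```python
# Minimal sketch (our illustration, not the authors' code): plain gradient descent
# in a linearly "reconstructed" parameter space reproduces the natural gradient
# update, provided the reparameterization theta = S @ phi satisfies S @ S.T = F^{-1}.
import numpy as np

d, lr = 4, 0.1
A = np.random.default_rng(0).normal(size=(d, d))
F = A @ A.T + np.eye(d)            # stand-in Fisher information matrix (SPD)
Q = np.diag([1.0, 2.0, 3.0, 4.0])  # toy quadratic loss L(theta) = 0.5 * theta^T Q theta

def grad(theta):
    return Q @ theta               # gradient of the toy loss

theta0 = np.ones(d)

# (1) One natural gradient step in the original parameterization.
ngd_step = theta0 - lr * np.linalg.solve(F, grad(theta0))

# (2) One plain GD step in the reconstructed parameterization theta = S @ phi,
#     with S S^T = F^{-1} (obtained here via a Cholesky factor of F^{-1}).
S = np.linalg.cholesky(np.linalg.inv(F))
phi0 = np.linalg.solve(S, theta0)          # same starting point in the new coordinates
phi1 = phi0 - lr * (S.T @ grad(S @ phi0))  # chain rule: dL/dphi = S^T dL/dtheta
gd_step = S @ phi1                         # map back to the original parameter space

print(np.allclose(ngd_step, gd_step))      # True: the two updates coincide
```

In the paper the reconstruction is applied to the network architecture itself rather than to an explicit Fisher factor, so this sketch only captures the linear-algebraic intuition behind the equivalence.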
Related papers
- Reconstructing Deep Neural Networks: Unleashing the Optimization Potential of Natural Gradient Descent [12.00557940490703]
We propose a novel optimization method for training deep neural networks called structured natural gradient descent (SNGD).
Our proposed method has the potential to significantly improve the scalability and efficiency of NGD in deep learning applications.
arXiv Detail & Related papers (2024-12-10T11:57:47Z) - Robust Neural Pruning with Gradient Sampling Optimization for Residual Neural Networks [0.0]
This research embarks on pioneering the integration of gradient sampling optimization techniques, particularly StochGradAdam, into the pruning process of neural networks.
Our main objective is to address the significant challenge of maintaining accuracy in pruned neural models, critical in resource-constrained scenarios.
arXiv Detail & Related papers (2023-12-26T12:19:22Z) - Large-scale global optimization of ultra-high dimensional non-convex
landscapes based on generative neural networks [0.0]
We present an algorithm to manage ultra-high dimensional optimization based on a deep generative network.
We show that our method performs better with fewer function evaluations than state-of-the-art algorithms.
arXiv Detail & Related papers (2023-07-09T00:05:59Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a randomly initialized network at each iteration and tweaking unimportant weights with a small amount proportional to the magnitude scale on-the-fly.
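As a rough illustration of the soft-shrinkage idea in the ISS-P entry above (a sketch under our own assumptions, not the paper's implementation), one pruning step might shrink low-magnitude weights by a fraction of their own value instead of zeroing them, so they can recover in later iterations; the names prune_ratio and shrink_factor are hypothetical.

```python
# Hedged sketch of a soft-shrinkage pruning step (not the ISS-P code): weights
# below a magnitude threshold are shrunk proportionally to their own magnitude
# rather than hard-zeroed, so "pruned" weights can still recover later.
import numpy as np

def soft_shrink_step(weight, prune_ratio=0.3, shrink_factor=0.1):
    k = int(prune_ratio * weight.size)
    if k == 0:
        return weight
    flat = np.abs(weight).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]   # cutoff for the bottom prune_ratio by magnitude
    unimportant = np.abs(weight) <= threshold
    return np.where(unimportant, weight * (1.0 - shrink_factor), weight)

w = soft_shrink_step(np.random.default_rng(0).normal(size=(256, 256)))
```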
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis into the real batch setting, NIO is able to automatically look for a better initialization with negligible cost.
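For intuition only, here is a minimal sketch of a GradCosine-like quantity (our assumption of its general shape, not the paper's exact definition): the average pairwise cosine similarity between per-sample gradient vectors at initialization.

```python
# Hedged sketch (not the paper's definition): average pairwise cosine similarity
# between per-sample gradient vectors at initialization.
import numpy as np

def grad_cosine(per_sample_grads):                 # shape: (num_samples, num_params)
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True) + 1e-12
    g = per_sample_grads / norms                   # unit-normalize each sample gradient
    sim = g @ g.T                                  # pairwise cosine similarities
    n = sim.shape[0]
    return sim[~np.eye(n, dtype=bool)].mean()      # mean over off-diagonal pairs

print(grad_cosine(np.random.default_rng(0).normal(size=(8, 100))))
```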
arXiv Detail & Related papers (2022-10-12T06:49:16Z) - EvoPruneDeepTL: An Evolutionary Pruning Model for Transfer Learning
based Deep Neural Networks [15.29595828816055]
We propose an evolutionary pruning model for Transfer Learning based Deep Neural Networks.
EvoPruneDeepTL replaces the last fully-connected layers with sparse layers optimized by a genetic algorithm.
Results show the contribution of EvoPruneDeepTL and feature selection to the overall computational efficiency of the network.
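To give a flavor of the genetic-algorithm side of the EvoPruneDeepTL entry above (a toy sketch under our own assumptions, not the paper's code), one could evolve binary connectivity masks for a fully-connected layer; the fitness function here is a placeholder, whereas in the paper it would be driven by the pruned network's performance.

```python
# Toy genetic algorithm over binary connectivity masks (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_units, pop_size, n_gens = 64, 20, 10

def fitness(mask):
    # Placeholder objective: reward a target sparsity level. In practice this
    # would evaluate the sparse layer inside the transfer-learned network.
    return -abs(mask.mean() - 0.3)

population = rng.integers(0, 2, size=(pop_size, n_units))
for _ in range(n_gens):
    scores = np.array([fitness(m) for m in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]       # selection
    cuts = rng.integers(1, n_units, size=pop_size // 2)             # one-point crossover
    children = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                         for i, c in enumerate(cuts)])
    flips = rng.random(children.shape) < 0.02                       # mutation
    population = np.vstack([parents, np.where(flips, 1 - children, children)])

best_mask = max(population, key=fitness)
```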
arXiv Detail & Related papers (2022-02-08T13:07:55Z) - Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z) - Non-Gradient Manifold Neural Network [79.44066256794187]
A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z) - Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate how TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based learning combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)