Learning Morphisms with Gauss-Newton Approximation for Growing Networks
- URL: http://arxiv.org/abs/2411.05855v1
- Date: Thu, 07 Nov 2024 01:12:42 GMT
- Title: Learning Morphisms with Gauss-Newton Approximation for Growing Networks
- Authors: Neal Lawton, Aram Galstyan, Greg Ver Steeg
- Abstract summary: A popular method for Neural Architecture Search (NAS) is based on growing networks via small local changes to the network's architecture called network morphisms.
Here we propose a NAS method for growing a network by using a Gauss-Newton approximation of the loss function to efficiently learn and evaluate candidate network morphisms.
- Score: 43.998746572276076
- License:
- Abstract: A popular method for Neural Architecture Search (NAS) is based on growing networks via small local changes to the network's architecture called network morphisms. These methods start with a small seed network and progressively grow the network by adding new neurons in an automated way. However, it remains a challenge to efficiently determine which parts of the network are best to grow. Here we propose a NAS method for growing a network by using a Gauss-Newton approximation of the loss function to efficiently learn and evaluate candidate network morphisms. We compare our method with state-of-the-art NAS methods on CIFAR-10 and CIFAR-100 classification tasks, and conclude that our method learns architectures of similar or better quality at a smaller computational cost.
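For intuition only, the sketch below writes out the generic Gauss-Newton quadratic model that this kind of morphism scoring typically builds on; the notation (g, J, H_ell, delta) is not taken from the paper, and the authors' exact formulation may differ.

```latex
% Gauss-Newton quadratic model of the training loss in the new parameters
% \delta introduced by a candidate morphism (initialized at \delta = 0 so
% the network's function is unchanged). Here g is the gradient of the loss
% w.r.t. \delta, J the Jacobian of the network outputs w.r.t. \delta, and
% H_{\ell} the Hessian of the loss w.r.t. the network outputs.
\mathcal{L}(\theta,\delta) \approx \mathcal{L}(\theta,0)
    + g^{\top}\delta
    + \tfrac{1}{2}\,\delta^{\top}\!\left(J^{\top} H_{\ell}\, J\right)\delta

% Minimizing the quadratic model gives a closed-form step and a predicted
% loss decrease that can be used to rank candidate morphisms cheaply.
\delta^{\star} = -\left(J^{\top} H_{\ell}\, J\right)^{-1} g,
\qquad
\Delta\mathcal{L}^{\star} \approx -\tfrac{1}{2}\, g^{\top}\!\left(J^{\top} H_{\ell}\, J\right)^{-1} g
```

Under such a model, each candidate morphism can be ranked by its predicted loss decrease without training it to convergence, which is the kind of cheap evaluation the abstract refers to.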
Related papers
- Adaptive Neural Networks Using Residual Fitting [2.546014024559691]
We present a network-growth method that searches for explainable error in the network's residuals and grows the network if sufficient error is detected.
Within these tasks, the growing network can often achieve better performance than small networks that do not grow.
arXiv Detail & Related papers (2023-01-13T19:52:30Z)
- Evolutionary Neural Cascade Search across Supernetworks [68.8204255655161]
We introduce ENCAS - Evolutionary Neural Cascade Search.
ENCAS can be used to search over multiple pretrained supernetworks.
We test ENCAS on common computer vision benchmarks.
arXiv Detail & Related papers (2022-03-08T11:06:01Z)
- Efficient Transfer Learning via Joint Adaptation of Network Architecture and Weight [66.8543732597723]
Recent works in neural architecture search (NAS) can aid transfer learning by establishing a sufficient network search space.
We propose a novel framework consisting of two modules: a neural architecture search module for architecture transfer and a neural weight search module for weight transfer.
These two modules conduct the search on the target task based on reduced super-networks, so we only need to train once on the source task.
arXiv Detail & Related papers (2021-05-19T08:58:04Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Evolutionary Neural Architecture Search Supporting Approximate Multipliers [0.5414308305392761]
We propose a multi-objective NAS method based on Cartesian genetic programming for evolving convolutional neural networks (CNNs).
The most suitable approximate multipliers are automatically selected from a library of approximate multipliers.
Evolved CNNs are compared with common human-created CNNs of a similar complexity on the CIFAR-10 benchmark problem.
arXiv Detail & Related papers (2021-01-28T09:26:03Z)
- Channel Planting for Deep Neural Networks using Knowledge Distillation [3.0165431987188245]
We present a novel incremental training algorithm for deep neural networks called planting.
Our planting method can search for an optimal network architecture with a smaller number of parameters while improving network performance.
We evaluate the effectiveness of the proposed method on different datasets such as CIFAR-10/100 and STL-10.
arXiv Detail & Related papers (2020-11-04T16:29:59Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Locality Guided Neural Networks for Explainable Artificial Intelligence [12.435539489388708]
We propose a novel algorithm for back propagation, called Locality Guided Neural Network (LGNN).
LGNN preserves locality between neighbouring neurons within each layer of a deep network.
In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
arXiv Detail & Related papers (2020-07-12T23:45:51Z)