Neural Architecture Search via Bregman Iterations
- URL: http://arxiv.org/abs/2106.02479v1
- Date: Fri, 4 Jun 2021 13:37:47 GMT
- Title: Neural Architecture Search via Bregman Iterations
- Authors: Leon Bungert, Tim Roith, Daniel Tenbrinck, Martin Burger
- Abstract summary: We propose a novel strategy for Neural Architecture Search (NAS) based on Bregman iterations.
Our gradient-based one-shot algorithm gradually adds relevant parameters in an inverse scale space manner.
- Score: 1.7778609937758323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel strategy for Neural Architecture Search (NAS) based on
Bregman iterations. Starting from a sparse neural network, our gradient-based
one-shot algorithm gradually adds relevant parameters in an inverse scale space
manner. This allows the network to choose the architecture in the search
space that is best suited to a given task, e.g., by adding neurons or
skip connections. We demonstrate that using our approach one can unveil, for
instance, residual autoencoders for denoising, deblurring, and classification
tasks. Code is available at https://github.com/TimRoith/BregmanLearning.
Related papers
- Simultaneous Weight and Architecture Optimization for Neural Networks [6.2241272327831485]
We introduce a novel neural network training framework that learns the architecture and parameters simultaneously with gradient descent.
Central to our approach is a multi-scale encoder-decoder, in which the encoder embeds pairs of neural networks with similar functionalities close to each other.
Experiments demonstrate that our framework can discover sparse and compact neural networks maintaining a high performance.
arXiv Detail & Related papers (2024-10-10T19:57:36Z) - Latent Space Representations of Neural Algorithmic Reasoners [15.920449080528536]
We perform a detailed analysis of the structure of the latent space induced by the GNN when executing algorithms.
We identify two possible failure modes: (i) loss of resolution, making it hard to distinguish similar values; (ii) inability to deal with values outside the range observed during training.
We show that changes addressing these failure modes lead to improvements on the majority of algorithms in the standard CLRS-30 benchmark when using the state-of-the-art Triplet-GMPNN processor.
arXiv Detail & Related papers (2023-07-17T22:09:12Z) - OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching efficient architectures for devices with different resource constraints.
We aim to go one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem.
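In practice, the multi-objective view amounts to keeping only the non-dominated subnetworks, e.g. with respect to validation error and latency. A minimal sketch of such Pareto filtering; the candidate (error, latency) pairs are purely illustrative:

```python
from typing import List, Tuple

def pareto_front(candidates: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the non-dominated points, minimizing both objectives.

    A point q dominates p if q <= p in every objective and q != p;
    dominated architectures are discarded from the search result.
    """
    front = []
    for p in candidates:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in candidates
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (error, latency-in-ms) pairs for sampled subnetworks.
archs = [(0.08, 12.0), (0.06, 25.0), (0.09, 10.0), (0.07, 30.0), (0.06, 20.0)]
print(pareto_front(archs))  # -> [(0.08, 12.0), (0.09, 10.0), (0.06, 20.0)]
```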
arXiv Detail & Related papers (2023-03-23T21:30:29Z) - Pathfinding Neural Cellular Automata [23.831530224401575]
Pathfinding is an important sub-component of a broad range of complex AI tasks, such as robot path planning, transport routing, and game playing.
We hand-code and learn models for Breadth-First Search (BFS), i.e., shortest-path finding.
We present a neural implementation of Depth-First Search (DFS), and outline how it can be combined with neural BFS to produce an NCA for computing the diameter of a graph.
We experiment with architectural modifications inspired by these hand-coded NCAs, training networks from scratch to solve the diameter problem on grid mazes while exhibiting strong generalization ability.
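The BFS primitive behind these NCAs is an ordinary breadth-first flood fill over the maze grid; a plain-Python sketch (not the paper's cellular-automaton formulation):

```python
from collections import deque

def bfs_shortest_path(grid, start, goal):
    """Shortest path length on a 4-connected grid; 1 = wall, 0 = free.

    BFS expands cells in waves, which is the local update an NCA can
    emulate: each step, a cell activates if a neighbor activated.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return None  # goal unreachable

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_shortest_path(maze, (0, 0), (2, 0)))  # -> 6
```

The graph diameter mentioned above is then the maximum such BFS distance over all pairs of mutually reachable cells.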
arXiv Detail & Related papers (2023-01-17T11:45:51Z) - Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
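The mechanism is a learned codebook lookup: each grid cell stores a small integer index instead of a full feature vector. A minimal sketch of the decode path; the grid, feature, and codebook sizes here are arbitrary, and the achievable ratio (up to 100x per the abstract) depends on them:

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense feature grid would store G*G float32 vectors of dimension D.
G, D, K = 64, 16, 256                    # grid size, feature dim, codebook size

codebook = rng.standard_normal((K, D)).astype(np.float32)      # learned end-to-end
indices = rng.integers(0, K, size=(G, G)).astype(np.uint8)     # 1 byte per cell

def decode(r, c):
    """Fetch the feature for grid cell (r, c) via codebook lookup."""
    return codebook[indices[r, c]]

dense_bytes = G * G * D * 4                      # storing float32 features directly
vq_bytes = indices.nbytes + codebook.nbytes      # uint8 indices + shared codebook
print(f"compression: {dense_bytes / vq_bytes:.1f}x")
```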
arXiv Detail & Related papers (2022-06-15T17:58:34Z) - Neural network relief: a pruning algorithm based on neural activity [47.57448823030151]
We propose a simple importance-score metric that deactivates unimportant connections.
We achieve comparable performance for LeNet architectures on MNIST.
The algorithm is not designed to minimize FLOPs when considering current hardware and software implementations.
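A generic form of such activity-based pruning scores each connection by its average contribution to the layer's pre-activation and deactivates the weakest fraction. A hedged sketch, using mean |W_ij * x_j| as a stand-in for the paper's importance score:

```python
import numpy as np

def prune_by_activity(W, X, keep_ratio=0.5):
    """Deactivate connections whose average activity is smallest.

    Score each weight by mean |W_ij * x_j| over a batch X of layer
    inputs (a stand-in importance metric), then zero out the
    lowest-scoring fraction of connections.
    """
    scores = np.abs(W) * np.mean(np.abs(X), axis=0)  # broadcast over columns
    k = int(W.size * keep_ratio)
    threshold = np.sort(scores, axis=None)[-k]       # k-th largest score
    mask = scores >= threshold
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))     # one dense layer's weights
X = rng.standard_normal((100, 32))   # a batch of that layer's inputs
W_pruned, mask = prune_by_activity(W, X, keep_ratio=0.25)
print(f"kept {mask.mean():.0%} of connections")
```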
arXiv Detail & Related papers (2021-09-22T15:33:49Z) - D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most popular Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel solution that nests several neural networks at the cell level.
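The cell-level object being nested is the DARTS mixed operation: a softmax over candidate operations on each edge, with architecture parameters trained by gradient descent. A minimal PyTorch sketch; the three candidate ops are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One DARTS edge: a softmax-weighted mixture of candidate ops.

    The architecture parameters alpha are trained jointly with the
    network weights; after search, only the op with the largest
    alpha is kept, discretizing the architecture.
    """
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            nn.AvgPool2d(3, stride=1, padding=1),         # average pooling
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(1, 16, 8, 8)
edge = MixedOp(channels=16)
print(edge(x).shape)  # torch.Size([1, 16, 8, 8])
```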
arXiv Detail & Related papers (2021-08-20T09:07:01Z) - A Bregman Learning Framework for Sparse Neural Networks [1.7778609937758323]
We propose a learning framework based on Bregman iterations to train sparse neural networks.
We derive a baseline algorithm called LinBreg, an accelerated version using momentum, and AdaBreg, which is a Bregmanized generalization of the Adam algorithm.
arXiv Detail & Related papers (2021-05-10T12:56:01Z) - MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS)
We employ a one-shot architecture search approach to reduce the search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z) - Neural Architecture Search as Sparse Supernet [78.09905626281046]
This paper aims to enlarge the problem of Neural Architecture Search (NAS) from Single-Path and Multi-Path Search to automated Mixed-Path Search.
We model the NAS problem as a sparse supernet using a new continuous architecture representation with a mixture of sparsity constraints.
The sparse supernet enables us to automatically achieve sparsely-mixed paths over a compact set of nodes.
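Under sparsity constraints, the mixture coefficients over candidate paths can be driven exactly to zero by a proximal step, which is what yields sparsely-mixed paths. A simplified sketch of one proximal gradient update on the path coefficients (the paper's actual constraint set is richer than plain L1):

```python
import numpy as np

def prox_l1(beta, lam):
    """Soft-thresholding: zeroes out weak path coefficients exactly."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

def sparse_path_step(beta, grad_beta, lr=0.1, lam=0.15):
    """Proximal gradient step on the supernet's path mixture.

    Unlike a softmax relaxation, the prox step makes coefficients hit
    exactly zero, so inactive paths can be pruned from the supernet.
    """
    return prox_l1(beta - lr * grad_beta, lr * lam)

# Toy: 6 candidate paths on one node; gradients favor paths 1 and 4.
beta = np.full(6, 0.2)
for _ in range(50):
    grad = beta - np.array([0.0, 1.0, 0.0, 0.1, 0.8, 0.0])  # hypothetical loss gradient
    beta = sparse_path_step(beta, grad)
print(np.round(beta, 3))  # ~[0, 0.85, 0, 0, 0.65, 0]: only two paths survive
```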
arXiv Detail & Related papers (2020-07-31T14:51:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.