Neural Architecture Transfer 2: A Paradigm for Improving Efficiency in
Multi-Objective Neural Architecture Search
- URL: http://arxiv.org/abs/2307.00960v1
- Date: Mon, 3 Jul 2023 12:25:09 GMT
- Title: Neural Architecture Transfer 2: A Paradigm for Improving Efficiency in
Multi-Objective Neural Architecture Search
- Authors: Simone Sarti, Eugenio Lomurno, Matteo Matteucci
- Abstract summary: We present NATv2, an extension of Neural Architecture Transfer (NAT) that improves multi-objective search algorithms.
NATv2 achieves qualitative improvements in the extractable sub-networks by exploiting the improved super-networks.
- Score: 7.967995669387532
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning is increasingly impacting various aspects of contemporary
society. Artificial neural networks have emerged as the dominant models for
solving an expanding range of tasks. The introduction of Neural Architecture
Search (NAS) techniques, which enable the automatic design of task-optimal
networks, has led to remarkable advances. However, the NAS process is typically
associated with long execution times and significant computational resource
requirements. Once-For-All (OFA) and its successor, Once-For-All-2 (OFAv2),
have been developed to mitigate these challenges. They aim to build a single
super-network model from which sub-networks satisfying different constraints
can be directly extracted, while maintaining exceptional performance and
eliminating the need for retraining. Neural Architecture Transfer (NAT) was developed to
maximise the effectiveness of extracting sub-networks from a super-network. In
this paper, we present NATv2, an extension of NAT that improves multi-objective
search algorithms applied to dynamic super-network architectures. NATv2
achieves qualitative improvements in the extractable sub-networks by exploiting
the improved super-networks generated by OFAv2 and incorporating new policies
for initialisation, pre-processing and updating its networks archive. In
addition, a post-processing pipeline based on fine-tuning is introduced.
Experimental results show that NATv2 successfully improves NAT and is highly
recommended for investigating high-performance architectures with a minimal
number of parameters.
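As a rough illustration of the kind of workflow described above, the following minimal Python sketch runs a multi-objective sub-network search with a non-dominated archive: candidates are sampled from a toy super-network search space, scored by a stand-in accuracy predictor and a parameter-count proxy, and kept only if no archived candidate dominates them. Every name, the encoding and the scoring functions here are illustrative assumptions rather than the authors' implementation; the actual NATv2 pipeline additionally relies on OFAv2 super-networks, dedicated archive initialisation, pre-processing and update policies, and a fine-tuning post-processing stage.

# Hedged illustration: a toy multi-objective sub-network search loop with a
# Pareto archive, in the spirit of NAT/NATv2. All helpers and the encoding
# below are hypothetical stand-ins, not the authors' implementation.
import random

random.seed(0)

# A sub-network is encoded as a list of per-stage choices (kernel size,
# expansion ratio, depth) sampled from the super-network's search space.
SEARCH_SPACE = {"kernel": [3, 5, 7], "expand": [3, 4, 6], "depth": [2, 3, 4]}
N_STAGES = 5


def sample_subnet():
    """Randomly sample one sub-network configuration from the search space."""
    return [{k: random.choice(v) for k, v in SEARCH_SPACE.items()}
            for _ in range(N_STAGES)]


def estimate_params(subnet):
    """Toy parameter-count proxy: larger kernels/expansions/depths cost more."""
    return sum(s["kernel"] * s["expand"] * s["depth"] for s in subnet)


def predicted_accuracy(subnet):
    """Stand-in for a (lightly trained) accuracy predictor over sub-networks."""
    capacity = estimate_params(subnet)
    return 1.0 - 1.0 / (1.0 + 0.01 * capacity) + random.gauss(0, 0.005)


def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    return (a["acc"] >= b["acc"] and a["params"] <= b["params"]
            and (a["acc"] > b["acc"] or a["params"] < b["params"]))


def update_archive(archive, candidate):
    """Keep only non-dominated candidates (the accuracy/size Pareto front)."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]


archive = []
for _ in range(200):  # search budget
    net = sample_subnet()
    candidate = {"subnet": net,
                 "acc": predicted_accuracy(net),
                 "params": estimate_params(net)}
    archive = update_archive(archive, candidate)

for entry in sorted(archive, key=lambda e: e["params"]):
    print(f"params={entry['params']:4d}  predicted_acc={entry['acc']:.3f}")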
Related papers
- Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning [17.454100169491497]
We propose a structured pruning approach based on the activity levels of convolutional kernels, named the Spiking Channel Activity-based (SCA) network pruning framework.
Inspired by synaptic plasticity mechanisms, our method dynamically adjusts the network's structure by pruning and regenerating convolutional kernels during training, enhancing the model's adaptation to the current target task.
arXiv Detail & Related papers (2024-06-03T07:44:37Z) - Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z) - SimQ-NAS: Simultaneous Quantization Policy and Neural Architecture
Search [6.121126813817338]
Recent one-shot Neural Architecture Search algorithms rely on training a hardware-agnostic super-network tailored to a specific task and then extracting efficient sub-networks for different hardware platforms.
We show that by using multi-objective search algorithms paired with lightly trained predictors, we can efficiently search for both the sub-network architecture and the corresponding quantization policy.
arXiv Detail & Related papers (2023-12-19T22:08:49Z) - OTOv3: Automatic Architecture-Agnostic Neural Network Training and
Compression from Structured Pruning to Erasing Operators [57.145175475579315]
This topic spans various techniques, from structured pruning to neural architecture search, encompassing the perspectives of both pruning and erasing operators.
We introduce the third-generation Only-Train-Once (OTOv3), which first automatically trains and compresses a general DNN through pruning and erasing operations.
Our empirical results demonstrate the efficacy of OTOv3 across various benchmarks in structured pruning and neural architecture search.
arXiv Detail & Related papers (2023-12-15T00:22:55Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a randomly initialized network at each iteration and tweaking unimportant weights with a small amount proportional to their magnitude on-the-fly (a toy sketch of this soft-shrinkage idea appears after this list).
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Enhancing Once-For-All: A Study on Parallel Blocks, Skip Connections and
Early Exits [7.0895962209555465]
Once-For-All (OFA) is an eco-friendly algorithm characterised by the ability to generate easily adaptable models.
OFA is improved from an architectural point of view by including early exits, parallel blocks and dense skip connections.
OFAv2 improves its accuracy performance on the Tiny ImageNet dataset by up to 12.07% compared to the original version of OFA.
arXiv Detail & Related papers (2023-02-03T17:53:40Z) - RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z) - E2-AEN: End-to-End Incremental Learning with Adaptively Expandable
Network [57.87240860624937]
We propose an end-to-end trainable adaptively expandable network named E2-AEN.
It dynamically generates lightweight structures for new tasks without any accuracy drop in previous tasks.
E2-AEN reduces cost and can be built upon any feed-forward architectures in an end-to-end manner.
arXiv Detail & Related papers (2022-07-14T09:04:51Z) - Proxyless Neural Architecture Adaptation for Supervised Learning and
Self-Supervised Learning [3.766702945560518]
We propose proxyless neural architecture adaptation that is reproducible and efficient.
Our method can be applied to both supervised learning and self-supervised learning.
arXiv Detail & Related papers (2022-05-15T02:49:48Z) - Orthogonalized SGD and Nested Architectures for Anytime Neural Networks [30.598394152055338]
Orthogonalized SGD dynamically re-balances task-specific gradients when training a multitask network.
Experiments demonstrate that training with Orthogonalized SGD significantly improves accuracy of anytime networks.
arXiv Detail & Related papers (2020-08-15T03:06:34Z) - Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose the use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
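As a closing illustration, the sketch below shows the soft-shrinkage idea summarised in the ISS-P entry above: rather than hard-zeroing pruned weights, the least important weights are shrunk by a small fraction of their own magnitude at every step, so they remain able to recover. The layer shape, sparsity level and shrink factor are arbitrary assumptions for demonstration only, not values from the paper.

# Hedged illustration of iterative soft shrinkage on a single toy layer.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64))   # one randomly initialised layer

SPARSITY = 0.5   # fraction of weights treated as "unimportant" each step (assumed)
SHRINK = 0.05    # shrink factor applied to those weights (assumed)

for step in range(100):
    # ... a regular training update on `weights` would happen here ...

    # Rank weights by magnitude and select the least important ones.
    threshold = np.quantile(np.abs(weights), SPARSITY)
    unimportant = np.abs(weights) <= threshold

    # Soft shrinkage: reduce unimportant weights by a small amount proportional
    # to their own magnitude instead of setting them to zero outright.
    weights[unimportant] *= 1.0 - SHRINK

kept = np.abs(weights) > np.quantile(np.abs(weights), SPARSITY)
print("mean |w| of kept weights:  ", float(np.abs(weights)[kept].mean()))
print("mean |w| of shrunk weights:", float(np.abs(weights)[~kept].mean()))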