NASOA: Towards Faster Task-oriented Online Fine-tuning with a Zoo of
Models
- URL: http://arxiv.org/abs/2108.03434v1
- Date: Sat, 7 Aug 2021 12:03:14 GMT
- Title: NASOA: Towards Faster Task-oriented Online Fine-tuning with a Zoo of
Models
- Authors: Hang Xu, Ning Kang, Gengwei Zhang, Chuanlong Xie, Xiaodan Liang,
Zhenguo Li
- Abstract summary: Fine-tuning from pre-trained ImageNet models has been a simple, effective, and popular approach for various computer vision tasks.
We propose a joint Neural Architecture Search and Online Adaption framework named NASOA towards faster task-oriented fine-tuning.
- Score: 90.6485663020735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-tuning from pre-trained ImageNet models has been a simple, effective, and popular approach for various computer vision tasks. The common practice is to adopt a default hyperparameter setting with a fixed pre-trained model, although neither is optimized for the specific task or its time constraints. Moreover, in cloud computing or GPU clusters where tasks arrive sequentially in a stream, faster online fine-tuning is a more desirable and realistic strategy for saving money, energy, and CO2 emissions. In this paper, we propose a joint Neural Architecture Search and Online Adaption framework named NASOA for faster task-oriented fine-tuning upon user request. Specifically, NASOA first adopts an offline NAS to identify a group of training-efficient networks that form a pre-trained model zoo; we propose a novel joint block- and macro-level search space to enable a flexible and efficient search. Then, an online schedule generator estimates fine-tuning performance with an adaptive model that accumulates experience from past tasks, picks the most suitable model from the zoo, and generates a personalized training regime for each requested task in a one-shot fashion. The resulting model zoo is more training-efficient than SOTA models, e.g., 6x faster than RegNetY-16GF and 1.7x faster than EfficientNetB3. Experiments on multiple datasets also show that NASOA achieves much better fine-tuning results, i.e., around 2.1% higher accuracy than the best result of the RegNet series under various constraints and tasks, while being 40x faster than BOHB.
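To make the online step concrete, the following is a minimal Python sketch of the schedule-generation idea described above: a performance predictor fitted on past fine-tuning tasks scores each (model, training regime) combination from the zoo, and the generator returns, in one shot, the highest-scoring combination that fits the user's time budget. All names here (ZooModel, Regime, generate_schedule, predict_accuracy, and the throughput-based cost estimate) are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable, List, Optional, Tuple

@dataclass
class ZooModel:
    name: str                 # a training-efficient network found by the offline NAS
    images_per_second: float  # measured fine-tuning throughput of this model

@dataclass
class Regime:
    epochs: int
    learning_rate: float
    batch_size: int

def generate_schedule(
    zoo: List[ZooModel],
    regimes: List[Regime],
    task_features: dict,
    dataset_size: int,
    time_budget_s: float,
    predict_accuracy: Callable[[ZooModel, Regime, dict], float],
) -> Optional[Tuple[ZooModel, Regime]]:
    """One-shot pick of a (model, regime) pair: maximize predicted
    fine-tuning accuracy subject to the user's time budget."""
    best, best_score = None, float("-inf")
    for model, regime in product(zoo, regimes):
        # Rough cost estimate: epochs * dataset passes / measured throughput.
        est_time = regime.epochs * dataset_size / model.images_per_second
        if est_time > time_budget_s:
            continue  # this combination cannot finish within the budget
        score = predict_accuracy(model, regime, task_features)
        if score > best_score:
            best, best_score = (model, regime), score
    return best
```

In NASOA the predictor itself is adaptive: it accumulates experience from each completed fine-tuning task, so later schedule decisions benefit from everything fine-tuned before.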
Related papers
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- A-SDM: Accelerating Stable Diffusion through Redundancy Removal and Performance Optimization [54.113083217869516]
In this work, we first examine the computationally redundant parts of the network.
We then prune the redundant blocks of the model while maintaining network performance.
Third, we propose a global-regional interactive (GRI) attention to speed up the computationally intensive attention part.
arXiv Detail & Related papers (2023-12-24T15:37:47Z)
- DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models [56.584561770857306]
We propose a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG.
Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them.
We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS.
When integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset.
arXiv Detail & Related papers (2023-05-26T13:58:18Z)
- Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge [21.72253397805102]
This work focuses in particular on Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing.
We propose the first NAS tool that explicitly targets the optimization of the architectural parameters most characteristic of TCNs.
We test the proposed NAS on four real-world, edge-relevant tasks, involving audio and bio-signals.
arXiv Detail & Related papers (2023-01-24T19:47:40Z)
- HARL: Hierarchical Adaptive Reinforcement Learning Based Auto Scheduler for Neural Networks [51.71682428015139]
We propose HARL, a reinforcement learning-based auto-scheduler for efficient tensor program exploration.
HARL improves the tensor operator performance by 22% and the search speed by 4.3x compared to the state-of-the-art auto-scheduler.
Inference performance and search speed are also significantly improved on end-to-end neural networks.
arXiv Detail & Related papers (2022-11-21T04:15:27Z)
- FreeREA: Training-Free Evolution-based Architecture Search [17.202375422110553]
FreeREA is a custom cell-based evolution NAS algorithm that exploits an optimised combination of training-free metrics to rank architectures.
Our experiments, carried out on the common benchmarks NAS-Bench-101 and NATS-Bench, demonstrate that FreeREA is a fast, efficient, and effective search method for automatic model design.
arXiv Detail & Related papers (2022-06-17T11:16:28Z)
- Efficient Model Performance Estimation via Feature Histories [27.008927077173553]
An important step in the task of neural network design is the evaluation of a model's performance.
In this work, we use the evolution history of features of a network during the early stages of training to build a proxy classifier.
We show that our method can be combined with multiple search algorithms to find better solutions to a wide range of tasks.
arXiv Detail & Related papers (2021-03-07T20:41:57Z)
- AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling [39.58754758581108]
Two-stage Neural Architecture Search (NAS) achieves remarkable accuracy and efficiency, but it requires sampling from the search space during training, and this sampling directly impacts the accuracy of the final searched models.
We propose AttentiveNAS, which focuses on improving the sampling strategy to achieve a better performance Pareto front.
Our discovered model family, AttentiveNAS models, achieves top-1 accuracy from 77.3% to 80.7% on ImageNet, and outperforms SOTA models, including BigNAS and Once-for-All networks.
arXiv Detail & Related papers (2020-11-18T00:15:23Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors; a minimal sketch of the joint architecture-recipe scoring idea follows this entry.
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
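The FBNetV3 entry above describes a single accuracy predictor that scores architecture and training-recipe pairs jointly and guides an evolutionary search. The sketch below illustrates that idea in heavily simplified form; the flat Candidate encoding, the nearest-neighbour JointPredictor, and the evolve loop are hypothetical stand-ins rather than FBNetV3's actual components.

```python
import random
from typing import Dict, List, Tuple

# A candidate is one joint (architecture, recipe) configuration, e.g.
# {"depth": 12, "width": 64, "lr": 0.1, "epochs": 90}.
Candidate = Dict[str, float]

def encode(c: Candidate, keys: List[str]) -> List[float]:
    """Flatten a candidate into a fixed-order feature vector."""
    return [c[k] for k in keys]

class JointPredictor:
    """Toy stand-in for an accuracy predictor over (architecture, recipe)
    pairs: a 1-nearest-neighbour lookup over previously evaluated candidates."""
    def __init__(self, keys: List[str]):
        self.keys = keys
        self.memory: List[Tuple[List[float], float]] = []

    def fit(self, evaluated: List[Tuple[Candidate, float]]) -> None:
        self.memory = [(encode(c, self.keys), acc) for c, acc in evaluated]

    def score(self, c: Candidate) -> float:
        x = encode(c, self.keys)
        # Accuracy of the nearest evaluated candidate by squared distance.
        return min(self.memory,
                   key=lambda m: sum((a - b) ** 2 for a, b in zip(m[0], x)))[1]

def evolve(pred: JointPredictor, population: List[Candidate], steps: int) -> Candidate:
    """Rank-and-mutate loop guided only by predicted accuracy (no training)."""
    for _ in range(steps):
        population.sort(key=pred.score, reverse=True)
        child = dict(population[0])              # copy the current best candidate
        knob = random.choice(pred.keys)
        child[knob] *= random.uniform(0.8, 1.2)  # mutate one architecture/recipe knob
        population.append(child)
    return max(population, key=pred.score)
```

A real predictor would be a learned regressor (pre-trained, as the paper's title suggests) rather than a nearest-neighbour lookup; the point is only that ranking candidates by predicted accuracy lets one search architecture and recipe knobs in a single space.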
This list is automatically generated from the titles and abstracts of the papers on this site.