MTL-NAS: Task-Agnostic Neural Architecture Search towards
General-Purpose Multi-Task Learning
- URL: http://arxiv.org/abs/2003.14058v1
- Date: Tue, 31 Mar 2020 09:49:14 GMT
- Title: MTL-NAS: Task-Agnostic Neural Architecture Search towards
General-Purpose Multi-Task Learning
- Authors: Yuan Gao, Haoping Bai, Zequn Jie, Jiayi Ma, Kui Jia, and Wei Liu
- Abstract summary: We propose to incorporate neural architecture search (NAS) into general-purpose multi-task learning (GP-MTL)
In order to adapt to different task combinations, we disentangle the GP-MTL networks into single-task backbones.
We also propose a novel single-shot gradient-based search algorithm that closes the performance gap between the searched architectures and the final evaluation architecture.
- Score: 71.90902837008278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose to incorporate neural architecture search (NAS) into
general-purpose multi-task learning (GP-MTL). Existing NAS methods typically
define different search spaces according to different tasks. In order to adapt
to different task combinations (i.e., task sets), we disentangle the GP-MTL
networks into single-task backbones (optionally encoding the task priors), and a
hierarchical and layerwise feature sharing/fusing scheme across them. This
enables us to design a novel and general task-agnostic search space, which
inserts cross-task edges (i.e., feature fusion connections) into fixed
single-task network backbones. Moreover, we also propose a novel single-shot
gradient-based search algorithm that closes the performance gap between the
searched architectures and the final evaluation architecture. This is realized
with a minimum entropy regularization on the architecture weights during the
search phase, which makes the architecture weights converge to near-discrete
values and therefore achieves a single model. As a result, our searched model
can be directly used for evaluation without (re-)training from scratch. We
perform extensive experiments using different single-task backbones on various
task sets, demonstrating the promising performance obtained by exploiting the
hierarchical and layerwise features, as well as the desirable generalizability
to different i) task sets and ii) single-task backbones. The code of our paper
is available at https://github.com/bhpfelix/MTLNAS.
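To make the single-shot search described above more concrete, below is a minimal, hypothetical PyTorch sketch (not the released MTLNAS code) of one cross-task fusion edge gated by a softmax-relaxed architecture weight, together with the minimum-entropy penalty mentioned in the abstract. The FusionEdge module, the 1x1 projection, the two-logit parameterization, and the 0.1 penalty weight are illustrative assumptions.
```python
# Minimal sketch, assuming a two-backbone GP-MTL setup: one candidate
# cross-task fusion edge with a continuous architecture weight, plus the
# entropy penalty that pushes that weight towards a discrete keep/drop choice.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionEdge(nn.Module):
    """Candidate edge fusing a feature map from task B into task A's backbone."""
    def __init__(self, channels):
        super().__init__()
        # hypothetical 1x1 projection of the source (task-B) feature before fusion
        self.project = nn.Conv2d(channels, channels, kernel_size=1)
        # two logits: index 0 = "keep edge", index 1 = "drop edge"
        self.arch_logits = nn.Parameter(torch.zeros(2))

    def forward(self, feat_a, feat_b):
        alpha = F.softmax(self.arch_logits, dim=0)        # continuous relaxation
        fused = feat_a + alpha[0] * self.project(feat_b)  # soft-gated fusion
        return fused, alpha

def entropy_regularizer(alpha, eps=1e-8):
    """Entropy of the edge weights: low when alpha is near one-hot (near-discrete)."""
    return -(alpha * torch.log(alpha + eps)).sum()

# Toy usage: one search step on random features.
edge = FusionEdge(channels=8)
feat_a = torch.randn(2, 8, 16, 16)
feat_b = torch.randn(2, 8, 16, 16)
fused, alpha = edge(feat_a, feat_b)
task_loss = fused.mean()                              # stand-in for the real task losses
loss = task_loss + 0.1 * entropy_regularizer(alpha)   # 0.1 is an assumed penalty weight
loss.backward()
```
As the entropy term drives alpha towards a one-hot vector, the soft gate approaches a hard keep/drop decision on each cross-task edge, which is why the searched model can be evaluated directly without (re-)training from scratch.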
Related papers
- Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost [73.28626942658022]
We aim at exploiting additional auxiliary labels from an independent (auxiliary) task to boost the primary task performance.
Our method is architecture-based with a flexible asymmetric structure for the primary and auxiliary tasks.
Experiments with six tasks on NYU v2, CityScapes, and Taskonomy datasets using VGG, ResNet, and ViT backbones validate the promising performance.
arXiv Detail & Related papers (2024-05-09T11:50:19Z)
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching efficient architectures for devices with different resource constraints.
We aim to go one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- MSINet: Twins Contrastive Search of Multi-Scale Interaction for Object ReID [29.13844433114534]
We propose a novel Twins Contrastive Mechanism (TCM) to provide more appropriate supervision for ReID architecture search.
TCM reduces the category overlaps between the training and validation data, and assists NAS in simulating real-world ReID training schemes.
We then design a Multi-Scale Interaction (MSI) search space to search for rational interaction operations between multi-scale features.
arXiv Detail & Related papers (2023-03-13T12:39:59Z)
- Warm-starting DARTS using meta-learning [4.035753155957698]
Neural architecture search (NAS) has shown great promise in the field of automated machine learning (AutoML).
We present a meta-learning framework to warm-start Differentiable Architecture Search (DARTS).
arXiv Detail & Related papers (2022-05-12T20:40:26Z)
- Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search [96.31315520244605]
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
It is able to find top 0.16% and 0.29% architectures on average on two search spaces under the budget of only 50 models.
arXiv Detail & Related papers (2022-04-12T16:46:06Z)
- Across-Task Neural Architecture Search via Meta Learning [1.225795556154044]
Adequate labeled data and expensive compute resources are the prerequisites for the success of neural architecture search (NAS).
It is challenging to apply NAS in meta-learning scenarios with limited compute resources and data.
In this paper, an across-task neural architecture search (AT-NAS) is proposed to address the problem through combining gradient-based meta-learning with EA-based NAS.
arXiv Detail & Related papers (2021-10-12T09:07:33Z)
- Elastic Architecture Search for Diverse Tasks with Different Resources [87.23061200971912]
We study a new challenging problem of efficient deployment for diverse tasks with different resources, where the resource constraint and task of interest corresponding to a group of classes are dynamically specified at testing time.
Previous NAS approaches seek to design architectures for all classes simultaneously, which may not be optimal for some individual tasks.
We present a novel and general framework, called Elastic Architecture Search (EAS), permitting instant specializations at runtime for diverse tasks with various resource constraints.
arXiv Detail & Related papers (2021-08-03T00:54:27Z)
- Neural Architecture Search From Fréchet Task Distance [50.9995960884133]
We show how the distance between a target task and each task in a given set of baseline tasks can be used to reduce the neural architecture search space for the target task.
The complexity reduction in search space for task-specific architectures is achieved by building on the optimized architectures for similar tasks instead of doing a full search without using this side information.
arXiv Detail & Related papers (2021-03-23T20:43:31Z)