Neural Architecture Search as Multiobjective Optimization Benchmarks:
Problem Formulation and Performance Assessment
- URL: http://arxiv.org/abs/2208.04321v2
- Date: Tue, 18 Apr 2023 14:32:30 GMT
- Title: Neural Architecture Search as Multiobjective Optimization Benchmarks:
Problem Formulation and Performance Assessment
- Authors: Zhichao Lu, Ran Cheng, Yaochu Jin, Kay Chen Tan, and Kalyanmoy Deb
- Abstract summary: We formulate neural architecture search (NAS) tasks into general multi-objective optimization problems.
We analyze the complex characteristics from an optimization point of view.
We present an end-to-end pipeline, dubbed $\texttt{EvoXBench}$, to generate benchmark test problems for EMO algorithms to run efficiently.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ongoing advancements in network architecture design have led to
remarkable achievements in deep learning across various challenging computer
vision tasks. Meanwhile, the development of neural architecture search (NAS)
has provided promising approaches to automating the design of network
architectures for lower prediction error. Recently, the emerging application
scenarios of deep learning have raised higher demands for network architectures
considering multiple design criteria: number of parameters/floating-point
operations, and inference latency, among others. From an optimization point of
view, the NAS tasks involving multiple design criteria are intrinsically
multiobjective optimization problems; hence, it is reasonable to adopt
evolutionary multiobjective optimization (EMO) algorithms for tackling them.
Nonetheless, there is still a clear gap confining the related research along
this pathway: on the one hand, there is a lack of a general problem formulation
of NAS tasks from an optimization point of view; on the other hand, there are
challenges in conducting benchmark assessments of EMO algorithms on NAS tasks.
To bridge the gap: (i) we formulate NAS tasks into general multi-objective
optimization problems and analyze the complex characteristics from an
optimization point of view; (ii) we present an end-to-end pipeline, dubbed
$\texttt{EvoXBench}$, to generate benchmark test problems for EMO algorithms to
run efficiently -- without the requirement of GPUs or PyTorch/TensorFlow; (iii)
we instantiate two test suites comprehensively covering two datasets, seven
search spaces, and three hardware devices, involving up to eight objectives.
Based on the above, we validate the proposed test suites using six
representative EMO algorithms and provide some empirical analyses. The code of
$\texttt{EvoXBench}$ is available from
$\href{https://github.com/EMI-Group/EvoXBench}{\rm{here}}$.
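The formulation described above, i.e. treating each candidate architecture as a decision vector evaluated against several conflicting objectives (prediction error, parameter count, latency, etc.), can be illustrated with a minimal sketch. This is not the $\texttt{EvoXBench}$ API; the architecture names and objective values below are hypothetical, and the sketch only shows the core Pareto-dominance relation that EMO algorithms operate on.

```python
# Minimal sketch of NAS as multiobjective minimization:
# each architecture x maps to an objective vector
# F(x) = (error, params, latency), all to be minimized.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Return the non-dominated subset of (arch, objectives) pairs."""
    front = []
    for arch, objs in population:
        if not any(dominates(other, objs)
                   for _, other in population if other != objs):
            front.append((arch, objs))
    return front

# Hypothetical candidates: (encoding, (error, params in M, latency in ms))
candidates = [
    ("arch-A", (0.08, 5.0, 12.0)),
    ("arch-B", (0.06, 25.0, 30.0)),   # better error, worse cost: incomparable
    ("arch-C", (0.09, 6.0, 13.0)),    # dominated by arch-A on all objectives
]
print([a for a, _ in non_dominated(candidates)])  # → ['arch-A', 'arch-B']
```

An EMO algorithm such as NSGA-II iterates selection and variation over such a population, steering it toward the Pareto front rather than a single best architecture.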
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
NAS suffers from a key bottleneck, i.e., numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z) - A Survey on Multi-Objective Neural Architecture Search [9.176056742068813]
Multi-Objective Neural Architecture Search (MONAS) has been attracting attention.
We present an overview of principal and state-of-the-art works in the field of MONAS.
arXiv Detail & Related papers (2023-07-18T09:42:51Z) - OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural
Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching for efficient architectures for devices with different resource constraints.
We aim to give one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z) - Surrogate-assisted Multi-objective Neural Architecture Search for
Real-time Semantic Segmentation [11.866947846619064]
Neural architecture search (NAS) has emerged as a promising avenue toward automating the design of architectures.
We propose a surrogate-assisted multi-objective method to address the challenges of applying NAS to semantic segmentation.
Our method can identify architectures significantly outperforming existing state-of-the-art architectures designed both manually by human experts and automatically by other NAS methods.
arXiv Detail & Related papers (2022-08-14T10:18:51Z) - Arch-Graph: Acyclic Architecture Relation Predictor for
Task-Transferable Neural Architecture Search [96.31315520244605]
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
It is able to find top 0.16% and 0.29% architectures on average on two search spaces under the budget of only 50 models.
arXiv Detail & Related papers (2022-04-12T16:46:06Z) - Learning Interpretable Models Through Multi-Objective Neural
Architecture Search [0.9990687944474739]
We propose a framework to optimize for both task performance and "introspectability," a surrogate metric for aspects of interpretability.
We demonstrate that jointly optimizing for task error and introspectability leads to more disentangled and debuggable architectures with comparable error.
arXiv Detail & Related papers (2021-12-16T05:50:55Z) - Elastic Architecture Search for Diverse Tasks with Different Resources [87.23061200971912]
We study a new challenging problem of efficient deployment for diverse tasks with different resources, where the resource constraint and task of interest corresponding to a group of classes are dynamically specified at testing time.
Previous NAS approaches seek to design architectures for all classes simultaneously, which may not be optimal for some individual tasks.
We present a novel and general framework, called Elastic Architecture Search (EAS), permitting instant specializations at runtime for diverse tasks with various resource constraints.
arXiv Detail & Related papers (2021-08-03T00:54:27Z) - Effective, Efficient and Robust Neural Architecture Search [4.273005643715522]
Recent advances in adversarial attacks show the vulnerability of deep neural networks searched by Neural Architecture Search (NAS).
We propose an Effective, Efficient, and Robust Neural Architecture Search (E2RNAS) method to search a neural network architecture by taking the performance, robustness, and resource constraint into consideration.
Experiments on benchmark datasets show that the proposed E2RNAS method can find adversarially robust architectures with optimized model size and comparable classification accuracy.
arXiv Detail & Related papers (2020-11-19T13:46:23Z) - Neural Architecture Search with an Efficient Multiobjective Evolutionary
Framework [0.0]
We propose EMONAS, an Efficient MultiObjective Neural Architecture Search framework.
EMONAS is composed of a search space that considers both the macro- and micro-structure of the architecture.
It is evaluated on the task of 3D cardiac segmentation from the MICCAI ACDC challenge.
arXiv Detail & Related papers (2020-11-09T14:41:10Z) - MTL-NAS: Task-Agnostic Neural Architecture Search towards
General-Purpose Multi-Task Learning [71.90902837008278]
We propose to incorporate neural architecture search (NAS) into general-purpose multi-task learning (GP-MTL).
In order to adapt to different task combinations, we disentangle the GP-MTL networks into single-task backbones.
We also propose a novel single-shot gradient-based search algorithm that closes the performance gap between the searched architectures.
arXiv Detail & Related papers (2020-03-31T09:49:14Z)