OPANAS: One-Shot Path Aggregation Network Architecture Search for Object
Detection
- URL: http://arxiv.org/abs/2103.04507v3
- Date: Thu, 11 Mar 2021 05:17:08 GMT
- Title: OPANAS: One-Shot Path Aggregation Network Architecture Search for Object
Detection
- Authors: Tingting Liang, Yongtao Wang, Zhi Tang, Guosheng Hu, Haibin Ling
- Abstract summary: Recently, neural architecture search (NAS) has been exploited to design feature pyramid networks (FPNs).
We propose a novel One-Shot Path Aggregation Network Architecture Search (OPANAS) algorithm, which significantly improves both search efficiency and detection accuracy.
- Score: 82.04372532783931
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recently, neural architecture search (NAS) has been exploited to design
feature pyramid networks (FPNs) and achieved promising results for visual
object detection. Encouraged by this success, we propose a novel One-Shot Path
Aggregation Network Architecture Search (OPANAS) algorithm, which significantly
improves both search efficiency and detection accuracy. Specifically, we
first introduce six heterogeneous information paths to build our search space,
namely top-down, bottom-up, fusing-splitting, scale-equalizing, skip-connect
and none. Second, we propose a novel search space of FPNs, in which each FPN
candidate is represented by a densely-connected directed acyclic graph (each
node is a feature pyramid and each edge is one of the six heterogeneous
information paths). Third, we propose an efficient one-shot search method to
find the optimal path aggregation architecture: we first train a super-net and
then search for the optimal candidate with an evolutionary algorithm.
Experimental results demonstrate the efficacy of the proposed OPANAS for object
detection: (1) OPANAS is more efficient than state-of-the-art methods (e.g.,
NAS-FPN and Auto-FPN), with a significantly smaller search cost (e.g., only 4
GPU days on MS-COCO); (2) the optimal architecture found by OPANAS
significantly improves mainstream detectors, including RetinaNet, Faster R-CNN
and Cascade R-CNN, by 2.3-3.2% mAP compared to their FPN counterparts; and
(3) it achieves a new state-of-the-art accuracy-speed trade-off (52.2% mAP at
7.6 FPS) at a smaller training cost than comparable state-of-the-art methods.
Code will be released
at https://github.com/VDIGPKU/OPANAS.
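The abstract sketches two pieces that a short example can make concrete: the candidate encoding (a densely-connected directed acyclic graph whose edges each carry one of the six information paths) and the evolutionary step of the one-shot search. Below is a minimal, illustrative Python sketch of both. The node count, population size, mutation rate, and the placeholder fitness function are assumptions for illustration only; in OPANAS itself, candidates inherit weights from the trained super-net and are ranked by detection accuracy on a validation set.

```python
import random

# The six heterogeneous information paths named in the abstract.
PATH_TYPES = [
    "top-down", "bottom-up", "fusing-splitting",
    "scale-equalizing", "skip-connect", "none",
]

NUM_NODES = 4          # hypothetical number of feature-pyramid nodes in the DAG
POP_SIZE = 20          # hypothetical evolutionary-search hyper-parameters
GENERATIONS = 10
MUTATION_PROB = 0.1


def random_candidate():
    """A candidate is a densely-connected DAG: every edge (i, j) with i < j
    carries exactly one of the six information paths ('none' drops the edge)."""
    return {
        (i, j): random.choice(PATH_TYPES)
        for i in range(NUM_NODES) for j in range(i + 1, NUM_NODES)
    }


def mutate(candidate):
    """Re-sample a few edge types at random to produce a child candidate."""
    child = dict(candidate)
    for edge in child:
        if random.random() < MUTATION_PROB:
            child[edge] = random.choice(PATH_TYPES)
    return child


def evaluate(candidate):
    """Placeholder fitness. In OPANAS the candidate would inherit weights from
    the trained super-net and be scored by validation mAP; here a dummy score
    keeps the loop runnable."""
    return random.random()


def evolutionary_search():
    """Keep the fitter half of the population each generation and refill it
    with mutated copies of surviving parents."""
    population = [random_candidate() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: POP_SIZE // 2]
        children = [mutate(random.choice(parents))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=evaluate)


if __name__ == "__main__":
    print(evolutionary_search())
```

Crossover between two parents (mixing their edge assignments) could be added in the same way. A design point worth noting from the abstract: because "none" is a legal edge type, the densely-connected DAG can degenerate into sparser aggregation topologies, so simpler structures such as a plain top-down FPN remain reachable during the search.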
Related papers
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
However, NAS suffers from a key bottleneck: numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise-comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z) - When NAS Meets Trees: An Efficient Algorithm for Neural Architecture
Search [117.89827740405694]
A key challenge in neural architecture search (NAS) is how to explore the huge search space wisely.
We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures.
TNAS finds the globally optimal architecture on CIFAR-10 with a test accuracy of 94.37% in four GPU hours on NAS-Bench-201.
arXiv Detail & Related papers (2022-04-11T07:34:21Z) - NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With a carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z) - Trilevel Neural Architecture Search for Efficient Single Image
Super-Resolution [127.92235484598811]
This paper proposes a trilevel neural architecture search (NAS) method for efficient single image super-resolution (SR).
To model the discrete search space, we apply a new continuous relaxation to it, building a hierarchical mixture of network-path, cell-operation, and kernel-width choices.
An efficient search algorithm is proposed to perform optimization in a hierarchical supernet manner.
arXiv Detail & Related papers (2021-01-17T12:19:49Z) - Fine-Grained Stochastic Architecture Search [6.277767522867666]
Fine-Grained Architecture Search (FiGS) is a differentiable search method that searches over a much larger set of candidate architectures.
FiGS simultaneously selects and modifies operators in the search space by applying a structured sparse regularization penalty.
We show results across 3 existing search spaces, matching or outperforming the original search algorithms.
arXiv Detail & Related papers (2020-06-17T01:04:14Z) - DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while its accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z) - PONAS: Progressive One-shot Neural Architecture Search for Very
Efficient Deployment [9.442139459221783]
We propose Progressive One-shot Neural Architecture Search (PONAS), which combines the advantages of progressive NAS and one-shot methods.
PONAS is able to find the architecture of a specialized network in around 10 seconds.
In ImageNet classification, 75.2% top-1 accuracy can be obtained, which is comparable with the state of the art.
arXiv Detail & Related papers (2020-03-11T05:00:31Z)