POPNASv3: a Pareto-Optimal Neural Architecture Search Solution for Image
and Time Series Classification
- URL: http://arxiv.org/abs/2212.06735v1
- Date: Tue, 13 Dec 2022 17:14:14 GMT
- Title: POPNASv3: a Pareto-Optimal Neural Architecture Search Solution for Image
and Time Series Classification
- Authors: Andrea Falanti, Eugenio Lomurno, Danilo Ardagna and Matteo Matteucci
- Abstract summary: This article presents the third version of a sequential model-based NAS algorithm targeting different hardware environments and multiple classification tasks.
Our method is able to find competitive architectures within large search spaces, while keeping a flexible structure and data processing pipeline to adapt to different tasks.
The experiments performed on images and time series classification datasets provide evidence that POPNASv3 can explore a large set of assorted operators and converge to optimal architectures suited for the type of data provided under different scenarios.
- Score: 8.190723030003804
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The automated machine learning (AutoML) field has become increasingly
relevant in recent years. These algorithms can develop models without the need
for expert knowledge, facilitating the application of machine learning
techniques in the industry. Neural Architecture Search (NAS) exploits deep
learning techniques to autonomously produce neural network architectures whose
results rival the state-of-the-art models hand-crafted by AI experts. However,
this approach requires significant computational resources and hardware
investments, making it less appealing for real-world applications. This article
presents the third version of Pareto-Optimal Progressive Neural Architecture
Search (POPNASv3), a new sequential model-based optimization NAS algorithm
targeting different hardware environments and multiple classification tasks.
Our method is able to find competitive architectures within large search
spaces, while keeping a flexible structure and data processing pipeline to
adapt to different tasks. The algorithm employs Pareto optimality to reduce the
number of architectures sampled during the search, drastically improving the
time efficiency without loss in accuracy. The experiments performed on images
and time series classification datasets provide evidence that POPNASv3 can
explore a large set of assorted operators and converge to optimal architectures
suited for the type of data provided under different scenarios.
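To make the Pareto-optimality step concrete, here is a minimal sketch of non-dominated filtering over candidate cells, assuming hypothetical predicted (accuracy, training time) pairs; POPNASv3's actual search uses its learned accuracy and time predictors and its own cell encoding.

```python
# Minimal sketch of Pareto-front filtering over candidate architectures.
# The predicted scores below are hypothetical stand-ins for the outputs
# of the algorithm's accuracy and time predictors.

def pareto_front(candidates):
    """Keep only non-dominated candidates.

    Each candidate is (name, predicted_accuracy, predicted_time);
    higher accuracy is better, lower time is better.
    """
    front = []
    for name, acc, time in candidates:
        dominated = any(
            other_acc >= acc and other_time <= time
            and (other_acc > acc or other_time < time)
            for _, other_acc, other_time in candidates
        )
        if not dominated:
            front.append((name, acc, time))
    return front

if __name__ == "__main__":
    sampled = [
        ("cell_a", 0.91, 120.0),  # accurate but slow
        ("cell_b", 0.89, 45.0),   # good trade-off
        ("cell_c", 0.88, 60.0),   # dominated by cell_b
        ("cell_d", 0.80, 40.0),   # fastest
    ]
    # Only the non-dominated cells are actually trained in the next step,
    # which is what reduces the number of sampled architectures.
    print(pareto_front(sampled))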
Related papers
- EM-DARTS: Hierarchical Differentiable Architecture Search for Eye Movement Recognition [54.99121380536659]
Eye movement biometrics have received increasing attention thanks to their highly secure identification.
Deep learning (DL) models have recently been successfully applied to eye movement recognition.
However, the DL architecture is still determined by human prior knowledge.
We propose EM-DARTS, a hierarchical differentiable architecture search algorithm to automatically design the DL architecture for eye movement recognition.
arXiv Detail & Related papers (2024-09-22T13:11:08Z)
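For context on the differentiable-search family EM-DARTS belongs to, the following is a minimal sketch of the generic DARTS-style mixed operation, where the discrete choice among candidate operations is relaxed into a softmax-weighted sum; the operation set and sizes are illustrative assumptions, not EM-DARTS's hierarchical design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Generic DARTS-style mixed operation: the discrete choice among
    candidate ops is relaxed into a softmax-weighted sum, so the
    architecture parameters alpha can be learned by gradient descent.
    The candidate set here is an illustrative assumption."""

    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 conv
            nn.MaxPool2d(3, stride=1, padding=1),         # 3x3 max pool
        ])
        # One architecture weight per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(2, 16, 8, 8)
mixed = MixedOp(channels=16)
print(mixed(x).shape)  # torch.Size([2, 16, 8, 8])
# After search, the op with the largest alpha is kept (discretization).
```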
- Combining Neural Architecture Search and Automatic Code Optimization: A Survey [0.8796261172196743]
Two notable techniques are Hardware-aware Neural Architecture Search (HW-NAS) and Automatic Code Optimization (ACO).
HW-NAS automatically designs accurate yet hardware-friendly neural networks, while ACO involves searching for the best compiler optimizations to apply on neural networks.
This survey explores recent works that combine these two techniques within a single framework.
arXiv Detail & Related papers (2024-08-07T22:40:05Z)
- HKNAS: Classification of Hyperspectral Imagery Based on Hyper Kernel Neural Architecture Search [104.45426861115972]
We propose to directly generate structural parameters by utilizing specifically designed hyper kernels.
We obtain three kinds of networks to separately conduct pixel-level or image-level classifications with 1-D or 3-D convolutions.
A series of experiments on six public datasets demonstrate that the proposed methods achieve state-of-the-art results.
arXiv Detail & Related papers (2023-04-23T17:27:40Z)
- Pareto-aware Neural Architecture Generation for Diverse Computational Budgets [94.27982238384847]
Existing methods often perform an independent architecture search process for each target budget.
We propose a Pareto-aware Neural Architecture Generator (PNAG) which only needs to be trained once and dynamically produces the optimal architecture for any given budget via inference.
Such a joint search algorithm not only greatly reduces the overall search cost but also improves the results.
arXiv Detail & Related papers (2022-10-14T08:30:59Z)
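PNAG itself is a learned generator trained once across budgets; as a rough functional analogue, the sketch below shows the budget-conditioned selection it amortizes: picking from a precomputed Pareto front the most accurate architecture that fits a latency budget. All names and numbers are hypothetical.

```python
# Hypothetical Pareto front of (architecture, latency_ms, accuracy),
# sorted by latency. PNAG replaces this lookup with a generator that
# is trained once and produces an architecture per budget at inference.
PARETO_FRONT = [
    ("arch_s", 12.0, 0.72),
    ("arch_m", 25.0, 0.78),
    ("arch_l", 60.0, 0.81),
]

def best_under_budget(budget_ms):
    """Most accurate architecture whose latency fits the budget."""
    feasible = [a for a in PARETO_FRONT if a[1] <= budget_ms]
    if not feasible:
        raise ValueError(f"no architecture fits {budget_ms} ms")
    return max(feasible, key=lambda a: a[2])

print(best_under_budget(30.0))  # ('arch_m', 25.0, 0.78)
```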
- POPNASv2: An Efficient Multi-Objective Neural Architecture Search Technique [7.497722345725035]
This paper proposes a new version of the Pareto-optimal Progressive Neural Architecture Search, called POPNASv2.
Our approach enhances the first version of the algorithm and improves its performance.
Our efforts allow POPNASv2 to achieve PNAS-like performance with an average 4x search time speed-up.
arXiv Detail & Related papers (2022-10-06T14:51:54Z)
- Surrogate-assisted Multi-objective Neural Architecture Search for Real-time Semantic Segmentation [11.866947846619064]
Neural architecture search (NAS) has emerged as a promising avenue toward automating the design of architectures.
We propose a surrogate-assisted multi-objective method to address the challenges of applying NAS to semantic segmentation.
Our method can identify architectures significantly outperforming existing state-of-the-art architectures designed both manually by human experts and automatically by other NAS methods.
arXiv Detail & Related papers (2022-08-14T10:18:51Z)
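A minimal sketch of the surrogate-assisted idea, assuming a generic regressor over hypothetical architecture encodings (not the paper's actual surrogate or search space): architectures already evaluated train a cheap predictor, which then ranks a large pool of untrained candidates.

```python
# Generic surrogate-assisted ranking sketch. Encodings and scores are
# random stand-ins for real architecture features and measured mIoU.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Architectures already trained: feature encoding -> measured score.
X_train = rng.random((40, 8))   # 8-dim architecture encodings
y_train = rng.random(40)        # e.g. validation mIoU

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

# Rank a large pool of untrained candidates by predicted score and
# only train the most promising ones for real.
candidates = rng.random((1000, 8))
predicted = surrogate.predict(candidates)
top_k = np.argsort(predicted)[::-1][:10]
print("candidates to actually train:", top_k)
```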
- FreeREA: Training-Free Evolution-based Architecture Search [17.202375422110553]
FreeREA is a custom cell-based evolution NAS algorithm that exploits an optimised combination of training-free metrics to rank architectures.
Our experiments, carried out on the common benchmarks NAS-Bench-101 and NATS-Bench, demonstrate that FreeREA is a fast, efficient, and effective search method for automatic model design.
arXiv Detail & Related papers (2022-06-17T11:16:28Z)
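As an example of the training-free metrics such methods combine, here is a sketch of a synflow-style score in PyTorch, computed with a single forward/backward pass and no training; the tiny model is a placeholder, and FreeREA's exact metric combination differs.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def _abs_params(model):
    # Linearize the network: make every weight positive in place.
    for p in model.parameters():
        p.abs_()

def synflow_score(model, input_shape):
    """Synflow-style training-free proxy: one forward/backward pass on
    an all-ones input, then sum |param * grad| over all parameters."""
    model.zero_grad()
    _abs_params(model)
    x = torch.ones(1, *input_shape)   # data-independent input
    model(x).sum().backward()
    return sum((p * p.grad).abs().sum().item() for p in model.parameters())

candidate = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
print(synflow_score(candidate, input_shape=(16,)))
# No gradient steps and no labels are needed, so thousands of
# candidates can be compared in seconds during evolution.
```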
- Learning Interpretable Models Through Multi-Objective Neural Architecture Search [0.9990687944474739]
We propose a framework to optimize for both task performance and "introspectability," a surrogate metric for aspects of interpretability.
We demonstrate that jointly optimizing for task error and introspectability leads to more disentangled and debuggable architectures that perform comparably in terms of task error.
arXiv Detail & Related papers (2021-12-16T05:50:55Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
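A minimal sketch of a graph-convolutional performance predictor in the spirit of the above, with illustrative dimensions: a cell is encoded as a DAG (adjacency matrix plus one-hot operation features) and regressed to a predicted accuracy. This is a generic single-layer GCN, not the paper's exact assessor.

```python
import torch
import torch.nn as nn

class GCNPredictor(nn.Module):
    """Generic graph-convolution regressor over architecture DAGs."""

    def __init__(self, num_op_types, hidden=32):
        super().__init__()
        self.gc = nn.Linear(num_op_types, hidden)   # shared node transform
        self.out = nn.Linear(hidden, 1)             # graph-level regressor

    def forward(self, adj, feats):
        # One propagation step: each node aggregates its neighbours,
        # then node embeddings are mean-pooled into a graph embedding.
        h = torch.relu(adj @ self.gc(feats))
        return self.out(h.mean(dim=-2)).squeeze(-1)  # predicted accuracy

num_nodes, num_op_types = 7, 5
# Upper-triangular adjacency (plus self-loops) encodes a DAG of ops.
adj = torch.eye(num_nodes) + torch.triu(torch.ones(num_nodes, num_nodes), 1)
feats = torch.eye(num_op_types)[torch.randint(0, num_op_types, (num_nodes,))]
print(GCNPredictor(num_op_types)(adj, feats))  # one scalar per graph
```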
- Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z)
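The stage structure described above can be made concrete with a short PyTorch sketch, where each stage stacks layers at a fixed resolution and a strided convolution halves the resolution between stages; widths and depths are arbitrary placeholders.

```python
import torch
import torch.nn as nn

def make_stage(channels, num_layers):
    """Layers within one stage keep the spatial resolution fixed."""
    layers = []
    for _ in range(num_layers):
        layers += [nn.Conv2d(channels, channels, 3, padding=1),
                   nn.BatchNorm2d(channels), nn.ReLU()]
    return nn.Sequential(*layers)

def downsample(c_in, c_out):
    """Between stages: halve the resolution, widen the channels."""
    return nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)

net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    make_stage(32, num_layers=2),   # stage 1: 32x32
    downsample(32, 64),
    make_stage(64, num_layers=3),   # stage 2: 16x16
    downsample(64, 128),
    make_stage(128, num_layers=4),  # stage 3: 8x8
)
print(net(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 128, 8, 8])
# Adding layers per stage raises accuracy but also FLOPs, memory, and
# inference time, which is the trade-off stage-wise search navigates.
```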