Resource-Constrained Neural Architecture Search on Tabular Datasets
- URL: http://arxiv.org/abs/2204.07615v1
- Date: Fri, 15 Apr 2022 19:03:25 GMT
- Title: Resource-Constrained Neural Architecture Search on Tabular Datasets
- Authors: Chengrun Yang, Gabriel Bender, Hanxiao Liu, Pieter-Jan Kindermans,
Madeleine Udell, Yifeng Lu, Quoc Le, Da Huang
- Abstract summary: The best neural architecture for a given machine learning problem depends on many factors, including the complexity and structure of the dataset.
Previous NAS algorithms incorporate resource constraints directly into the reinforcement learning rewards.
We propose a new reinforcement learning controller to address these challenges.
- Score: 38.765317261872504
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The best neural architecture for a given machine learning problem depends on
many factors: not only the complexity and structure of the dataset, but also
resource constraints, including latency, compute, and energy consumption.
Neural architecture search (NAS) for tabular datasets is an important but
under-explored problem. Previous NAS algorithms designed for image search
spaces incorporate resource constraints directly into the reinforcement
learning rewards. In this paper, we argue that search spaces for tabular NAS
pose considerable challenges for these existing reward-shaping methods, and
propose a new reinforcement learning (RL) controller to address these
challenges. Motivated by rejection sampling, when we sample candidate
architectures during a search, we immediately discard any architecture that
violates our resource constraints. We use a Monte-Carlo-based correction to our
RL policy gradient update to account for this extra filtering step. Results on
several tabular datasets show TabNAS, the proposed approach, efficiently finds
high-quality models that satisfy the given resource constraints.
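To make the mechanism concrete, below is a minimal sketch of a rejection-sampled REINFORCE update on a toy one-decision search space. The widths, latency table, budget, reward, and all function names are illustrative assumptions, not the paper's actual search space or reward; only the two ideas stated in the abstract are kept: candidates that violate the resource constraint are discarded immediately, and a Monte-Carlo estimate of the feasible probability mass corrects the policy-gradient update so it matches the post-rejection sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-decision search space: pick a hidden-layer width.
# Widths, latencies, budget, and reward are illustrative assumptions.
WIDTHS = np.array([32, 64, 128, 256])
LATENCY = np.array([1.0, 2.0, 4.0, 8.0])  # hypothetical cost per choice
BUDGET = 5.0                              # hypothetical latency constraint

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def feasible(i):
    return LATENCY[i] <= BUDGET

def sample_feasible(logits):
    """Rejection sampling: discard any candidate that violates the budget."""
    p = softmax(logits)
    while True:
        i = rng.choice(len(WIDTHS), p=p)
        if feasible(i):
            return i

def mc_grad_log_feasible(logits, n=512):
    """Monte-Carlo estimate of grad log P(feasible).
    P(feasible) = E_{j~p}[1{feasible(j)}], so
    grad log P = E_{j~p}[1{feasible(j)} * grad log p(j)] / P(feasible)."""
    p = softmax(logits)
    draws = rng.choice(len(WIDTHS), size=n, p=p)
    grad_sum = np.zeros_like(p)
    hits = 0
    for j in draws:
        if feasible(j):
            onehot = np.zeros_like(p)
            onehot[j] = 1.0
            grad_sum += onehot - p    # gradient of log softmax at j
            hits += 1
    z_hat = max(hits, 1) / n          # guard against an all-infeasible batch
    return grad_sum / n / z_hat

def reinforce_step(logits, lr=0.5):
    i = sample_feasible(logits)
    reward = 0.01 * WIDTHS[i]         # stand-in for validation accuracy
    p = softmax(logits)
    onehot = np.zeros_like(p)
    onehot[i] = 1.0
    # Policy gradient under the *conditional* (post-rejection) policy:
    # grad log[p(i)/P(feasible)] = grad log p(i) - grad log P(feasible).
    grad = (onehot - p) - mc_grad_log_feasible(logits)
    return logits + lr * reward * grad

logits = np.zeros(len(WIDTHS))
for _ in range(200):
    logits = reinforce_step(logits)
print(dict(zip(WIDTHS.tolist(), softmax(logits).round(3))))
# Mass should concentrate on width 128: the best width that fits the budget.
```

Without the grad log P(feasible) term, the update would follow the unconditional policy even though samples are drawn from the filtered one; the Monte-Carlo correction keeps the gradient consistent with what is actually sampled.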
Related papers
- Scalable Reinforcement Learning-based Neural Architecture Search [0.0]
We assess the ability of a novel Reinforcement Learning-based solution to the problem of Neural Architecture Search.
We consider both the NAS-Bench-101 and NAS-Bench-301 settings, and compare against known strong baselines such as local search and random search (a minimal version of the random-search baseline is sketched after this list).
arXiv Detail & Related papers (2024-10-02T11:31:48Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z)
- Masked Autoencoders Are Robust Neural Architecture Search Learners [14.965550562292476]
We propose a novel NAS framework based on Masked Autoencoders (MAE) that eliminates the need for labeled data during the search process.
By replacing the supervised learning objective with an image reconstruction task, our approach enables the robust discovery of network architectures.
arXiv Detail & Related papers (2023-11-20T13:45:21Z)
- Towards Less Constrained Macro-Neural Architecture Search [2.685668802278155]
Networks found by Neural Architecture Search (NAS) achieve state-of-the-art performance in a variety of tasks.
Most NAS methods, however, rely heavily on human-defined assumptions that constrain the search.
We present experiments showing that our less-constrained approach, LCMNAS, generates state-of-the-art architectures from scratch with minimal GPU computation.
arXiv Detail & Related papers (2022-03-10T17:53:03Z)
- $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search [85.84110365657455]
We propose a simple but efficient regularization method, termed Beta-Decay, to regularize the DARTS-based NAS search process.
Experimental results on NAS-Bench-201 show that the proposed method helps stabilize the search process and makes the searched networks more transferable across different datasets.
arXiv Detail & Related papers (2022-03-03T11:47:14Z)
- D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most popular Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel approach that nests several neural networks at the cell level.
arXiv Detail & Related papers (2021-08-20T09:07:01Z)
- RC-DARTS: Resource Constrained Differentiable Architecture Search [162.7199952019152]
We propose the resource-constrained differentiable architecture search (RC-DARTS) method to learn architectures that are significantly smaller and faster.
We show that RC-DARTS learns lightweight neural architectures with smaller model size and lower computational complexity (the generic penalty-in-the-loss pattern is sketched after this list).
arXiv Detail & Related papers (2019-12-30T05:02:38Z)
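As referenced in the first related paper above, random search is a standard NAS baseline. A minimal generic version follows; the names are illustrative, and `sample_arch` and `evaluate` stand in for a benchmark API such as a tabulated NAS-Bench-101 lookup.

```python
import random
from typing import Any, Callable, Tuple

def random_search(sample_arch: Callable[[], Any],
                  evaluate: Callable[[Any], float],
                  n_trials: int = 100) -> Tuple[Any, float]:
    """Random-search NAS baseline: sample architectures uniformly at
    random and keep the one with the best validation score."""
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_arch()       # draw an architecture from the space
        score = evaluate(arch)     # e.g., a tabulated validation accuracy
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Toy usage with a made-up (width, depth) space and a fake score:
if __name__ == "__main__":
    space = [(w, d) for w in (32, 64, 128) for d in (1, 2, 3)]
    arch, score = random_search(
        sample_arch=lambda: random.choice(space),
        evaluate=lambda a: a[0] * a[1] / 384.0,
        n_trials=50,
    )
    print(arch, round(score, 3))
```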
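Several of the DARTS-family entries above fold resource constraints into a differentiable objective, in contrast to TabNAS's rejection-based approach. The sketch below shows the generic penalty-in-the-loss pattern for a single searchable edge: the expected operation cost under the softmax of the architecture parameters is differentiable, so it can be minimized by gradient descent alongside the task loss. Operation names, costs, and the penalty weight are assumptions, and this is the general pattern rather than RC-DARTS's exact formulation.

```python
import numpy as np

# Candidate operations on one searchable edge, with hypothetical costs
# (e.g., FLOPs in arbitrary units). Names and numbers are illustrative.
OPS = ["skip", "sep_conv", "conv3x3", "conv5x5"]
COST = np.array([0.1, 0.7, 1.0, 2.5])
LAMBDA = 0.5                      # penalty weight (assumed)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def resource_penalty_grad(alpha):
    """Gradient of E_{o ~ softmax(alpha)}[COST[o]] w.r.t. alpha.
    With p = softmax(alpha): d/d alpha_k of sum_o p_o c_o = p_k (c_k - E[c])."""
    p = softmax(alpha)
    expected = float(p @ COST)
    return p * (COST - expected)

alpha = np.zeros(len(OPS))        # architecture parameters for this edge
for _ in range(50):
    # In a full search this gradient would be summed with the task-loss
    # gradient; here only the resource penalty is optimized, for clarity.
    alpha -= 0.5 * LAMBDA * resource_penalty_grad(alpha)

p = softmax(alpha)
print(dict(zip(OPS, p.round(3))), "expected cost:", round(float(p @ COST), 3))
# Probability mass drifts toward the cheapest operation ("skip").
```

In a real search, the task-loss gradient from the shared supernet weights would be added to this penalty gradient, so the search trades accuracy against cost instead of simply collapsing onto the cheapest operation.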
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.