Prioritized Subnet Sampling for Resource-Adaptive Supernet Training
- URL: http://arxiv.org/abs/2109.05432v1
- Date: Sun, 12 Sep 2021 04:43:51 GMT
- Title: Prioritized Subnet Sampling for Resource-Adaptive Supernet Training
- Authors: Bohong Chen, Mingbao Lin, Liujuan Cao, Jianzhuang Liu, Qixiang Ye,
Baochang Zhang, Wei Zeng, Yonghong Tian, Rongrong Ji
- Abstract summary: We propose Prioritized Subnet Sampling to train a resource-adaptive supernet, termed PSS-Net.
Experiments on ImageNet using MobileNetV1/V2 show that our PSS-Net outperforms state-of-the-art resource-adaptive supernets.
- Score: 136.6591624918964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A resource-adaptive supernet adjusts its subnets for inference to fit the
dynamically available resources. In this paper, we propose Prioritized Subnet
Sampling to train a resource-adaptive supernet, termed PSS-Net. We maintain
multiple subnet pools, each of which stores the information of numerous subnets
with similar resource consumption. Given a resource constraint, subnets
satisfying that constraint are sampled from a pre-defined subnet structure
space, and the high-quality ones are inserted into the corresponding subnet
pool. As training proceeds, sampling gradually shifts toward drawing subnets
from these pools, and when sampling from a pool, subnets with better
performance metrics are assigned higher priority for training PSS-Net. At the
end of training, PSS-Net retains the best subnet in each pool, enabling a fast
switch to a high-quality subnet for inference whenever the available resources
vary. Experiments on ImageNet using MobileNetV1/V2 show that PSS-Net
outperforms state-of-the-art resource-adaptive supernets. Our project is at
https://github.com/chenbong/PSS-Net.
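The abstract describes the training loop in prose; as a reading aid, here is a minimal Python sketch of the pool-based prioritized sampling idea. The names (SubnetPool, sample_under_budget, measure_score) and the linear pool-reuse schedule are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.
```python
import random

class SubnetPool:
    """Hypothetical pool holding the best-performing subnets for one resource budget."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.entries = []  # list of (score, subnet_config) pairs

    def maybe_insert(self, score, subnet):
        # Keep only high-quality subnets: insert, sort by score, trim to capacity.
        self.entries.append((score, subnet))
        self.entries.sort(key=lambda e: e[0], reverse=True)
        del self.entries[self.capacity:]

    def sample_prioritized(self):
        # Subnets with better performance metrics get proportionally higher priority.
        scores = [s for s, _ in self.entries]
        weights = [s / sum(scores) for s in scores] if sum(scores) > 0 else None
        return random.choices(self.entries, weights=weights, k=1)[0][1]


def sampling_step(pools, step, total_steps, sample_under_budget, measure_score):
    """One hypothetical PSS-Net-style sampling step for a randomly chosen budget."""
    budget, pool = random.choice(list(pools.items()))
    # Early in training, sample from the structure space; later, increasingly reuse the pool.
    prob_from_pool = step / total_steps
    if pool.entries and random.random() < prob_from_pool:
        subnet = pool.sample_prioritized()
    else:
        subnet = sample_under_budget(budget)
    # ... train the shared supernet weights with `subnet` here ...
    score = measure_score(subnet)  # e.g., validation accuracy under this budget
    pool.maybe_insert(score, subnet)
    return subnet
```
At the end of training, only the top entry of each pool would be kept, so switching resource budgets at inference reduces to a table lookup rather than a new search.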
Related papers
- PSE-Net: Channel Pruning for Convolutional Neural Networks with Parallel-subnets Estimator [16.698190973547362]
We introduce PSE-Net, a novel parallel-subnets estimator for efficient channel pruning.
Our proposed algorithm improves the efficiency of supernet training.
We develop a prior-distribution-based sampling algorithm to boost the performance of classical evolutionary search.
arXiv Detail & Related papers (2024-08-29T03:20:43Z)
- A Distributed Neural Linear Thompson Sampling Framework to Achieve URLLC in Industrial IoT [16.167107624956294]
Industrial Internet of Things (IIoT) networks will provide Ultra-Reliable Low-Latency Communication (URLLC) to support critical processes.
Standard protocols for allocating wireless resources may not optimize the latency-reliability trade-off, especially for uplink communication.
arXiv Detail & Related papers (2023-11-21T12:22:04Z)
- Boosting Residual Networks with Group Knowledge [75.73793561417702]
Recent research interprets residual networks from the new perspective of an implicit ensemble model.
Previous methods such as stochastic depth and stimulative training have further improved the performance of residual networks by sampling and training their subnets.
We propose a group knowledge based training framework for boosting the performance of residual networks.
arXiv Detail & Related papers (2023-08-26T05:39:57Z)
- ShiftNAS: Improving One-shot NAS via Probability Shift [1.3537414663819973]
We propose ShiftNAS, a method that can adjust the sampling probability based on the complexity of networks.
We evaluate our approach on multiple visual network models, including convolutional neural networks (CNNs) and vision transformers (ViTs).
Experimental results on ImageNet show that ShiftNAS can improve the performance of one-shot NAS without additional computational cost; a minimal sketch of the probability-shift idea follows.
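As a rough illustration of shifting sampling probability with network complexity, the sketch below weights candidate subnets by their FLOPs; the exact ShiftNAS shift rule is defined in the cited paper, and the candidate names and FLOP counts here are made up.
```python
import random

def complexity_shifted_sample(candidates, flops, temperature=1.0):
    """Sample a subnet with probability shifted toward more complex candidates.

    Illustrative assumption: larger subnets are harder to train, so they receive
    proportionally more sampling; ShiftNAS's actual shift rule may differ.
    """
    max_flops = max(flops)
    weights = [(f / max_flops) ** temperature for f in flops]
    return random.choices(candidates, weights=weights, k=1)[0]

# Usage with made-up candidate subnets and FLOP counts (in GFLOPs):
subnets = ["cnn_small", "cnn_base", "vit_tiny", "vit_base"]
gflops = [0.3, 0.6, 1.2, 4.5]
picked = complexity_shifted_sample(subnets, gflops)
```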
arXiv Detail & Related papers (2023-07-17T07:53:23Z)
- Prior-Guided One-shot Neural Architecture Search [11.609732776776982]
We present Prior-Guided One-shot NAS (PGONAS) to strengthen the ranking correlation of supernets.
Our PGONAS ranks 3rd place in the supernet track of the CVPR2022 Second Lightweight NAS Challenge.
arXiv Detail & Related papers (2022-06-27T14:19:56Z)
- Searching for Network Width with Bilaterally Coupled Network [75.43658047510334]
We introduce a new supernet called Bilaterally Coupled Network (BCNet) to address the biased width evaluation caused by unfairly trained channels in conventional supernets.
In BCNet, each channel is fairly trained and responsible for the same amount of network widths, thus each network width can be evaluated more accurately.
We propose the first open-source width benchmark on macro structures, named Channel-Bench-Macro, for better comparison of width search algorithms.
arXiv Detail & Related papers (2022-03-25T15:32:46Z)
- Multi-Source Domain Adaptation for Object Detection [52.87890831055648]
We propose a unified Faster R-CNN based framework, termed Divide-and-Merge Spindle Network (DMSN).
DMSN can simultaneously enhance domain invariance and preserve discriminative power.
We develop a novel pseudo learning algorithm to approximate the optimal parameters of the pseudo target subset.
arXiv Detail & Related papers (2021-06-30T03:17:20Z)
- BCNet: Searching for Network Width with Bilaterally Coupled Network [56.14248440683152]
We introduce a new supernet called Bilaterally Coupled Network (BCNet) to address the biased width evaluation caused by unfairly trained channels in conventional supernets.
In BCNet, each channel is fairly trained and responsible for the same amount of network widths, thus each network width can be evaluated more accurately.
Our method achieves state-of-the-art or competing performance over other baseline methods.
arXiv Detail & Related papers (2021-05-21T18:54:03Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
A primal-dual learning algorithm is developed to train the GNN in a model-free manner, without requiring knowledge of the system models; a minimal primal-dual sketch follows this entry.
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
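As a loose illustration of model-free primal-dual training for a constrained resource-allocation policy, the sketch below uses a zeroth-order gradient estimate for the primal step and projected ascent for the dual step; the linear policy, utility, and constraint are stand-ins, not the paper's GNN parameterization or FSO system model.
```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=8)        # policy parameters (stand-in for GNN weights)
lam = 0.0                         # dual variable for one resource constraint
lr_theta, lr_lam, eps = 1e-2, 1e-2, 1e-3

def utility(theta, state):
    # Stand-in for the network utility achieved by the allocation policy.
    return float(theta @ state)

def constraint(theta, state, budget=1.0):
    # <= 0 means the resource budget is respected.
    return float(abs(theta @ state)) - budget

for step in range(1000):
    state = rng.normal(size=8)    # sampled network state (e.g., channel conditions)
    # Model-free (zeroth-order) estimate of the Lagrangian gradient w.r.t. theta.
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        lag_plus = utility(theta + e, state) - lam * constraint(theta + e, state)
        lag_minus = utility(theta - e, state) - lam * constraint(theta - e, state)
        grad[i] = (lag_plus - lag_minus) / (2 * eps)
    theta = theta + lr_theta * grad                           # primal ascent
    lam = max(0.0, lam + lr_lam * constraint(theta, state))   # projected dual ascent
```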