HyperSegNAS: Bridging One-Shot Neural Architecture Search with 3D
Medical Image Segmentation using HyperNet
- URL: http://arxiv.org/abs/2112.10652v1
- Date: Mon, 20 Dec 2021 16:21:09 GMT
- Title: HyperSegNAS: Bridging One-Shot Neural Architecture Search with 3D
Medical Image Segmentation using HyperNet
- Authors: Cheng Peng, Andriy Myronenko, Ali Hatamizadeh, Vishwesh Nath, Md Mahfuzur
Rahman Siddiquee, Yufan He, Daguang Xu, Rama Chellappa, Dong Yang
- Abstract summary: We introduce HyperSegNAS to enable one-shot Neural Architecture Search (NAS) for medical image segmentation.
We show that HyperSegNAS yields better-performing and more intuitive architectures compared to the previous state-of-the-art (SOTA) segmentation networks.
Our method is evaluated on public datasets from the Medical Segmentation Decathlon (MSD) challenge and achieves SOTA performance.
- Score: 51.60655410423093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation of 3D medical images is a challenging task due to the
high variability of the shape and pattern of objects (such as organs or
tumors). Given the recent success of deep learning in medical image
segmentation, Neural Architecture Search (NAS) has been introduced to find
high-performance 3D segmentation network architectures. However, because of the
massive computational requirements of 3D data and the discrete optimization
nature of architecture search, previous NAS methods either require long search
times or rely on continuous relaxation, and commonly lead to sub-optimal network
architectures. While one-shot NAS can potentially address these disadvantages,
its application in the segmentation domain has not been well studied in the
expansive multi-scale multi-path search space. To enable one-shot NAS for
medical image segmentation, our method, named HyperSegNAS, introduces a
HyperNet to assist super-net training by incorporating architecture topology
information. Such a HyperNet can be removed once the super-net is trained and
introduces no overhead during architecture search. We show that HyperSegNAS
yields better-performing and more intuitive architectures compared to the
previous state-of-the-art (SOTA) segmentation networks; furthermore, it can
quickly and accurately find good architecture candidates under different
computing constraints. Our method is evaluated on public datasets from the
Medical Segmentation Decathlon (MSD) challenge, and achieves SOTA performance.
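The abstract describes the core mechanism at a high level: a HyperNet ingests an architecture-topology encoding, assists super-net training, and is discarded once training completes, adding no overhead during search. A minimal NumPy sketch of this idea follows; all names, dimensions, and the per-channel gating scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): a tiny HyperNet maps a binary
# architecture-topology encoding to per-channel gates that modulate a shared
# super-net layer's features during training.
rng = np.random.default_rng(0)

def hypernet(arch_code, w1, w2):
    """Two-layer MLP: topology encoding -> per-channel gates in (0, 1)."""
    h = np.maximum(arch_code @ w1, 0.0)       # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2)))    # sigmoid gates

num_edges, hidden, channels = 12, 16, 8       # illustrative sizes
w1 = rng.standard_normal((num_edges, hidden)) * 0.1
w2 = rng.standard_normal((hidden, channels)) * 0.1

arch = rng.integers(0, 2, num_edges).astype(float)   # sampled sub-network path
features = rng.standard_normal((channels, 4, 4, 4))  # toy 3D feature map
gates = hypernet(arch, w1, w2)

# Gates scale each channel of the shared super-net features. Once the
# super-net is trained, the HyperNet is removed and architecture search
# proceeds directly over sub-networks.
modulated = features * gates[:, None, None, None]
print(modulated.shape)  # (8, 4, 4, 4)
```

The point of the sketch is only that the architecture encoding conditions the shared weights at train time, so no HyperNet forward pass is needed when evaluating candidate architectures afterward.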
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z) - HKNAS: Classification of Hyperspectral Imagery Based on Hyper Kernel
Neural Architecture Search [104.45426861115972]
We propose to directly generate structural parameters by utilizing the specifically designed hyper kernels.
We obtain three kinds of networks to separately conduct pixel-level or image-level classifications with 1-D or 3-D convolutions.
A series of experiments on six public datasets demonstrate that the proposed methods achieve state-of-the-art results.
arXiv Detail & Related papers (2023-04-23T17:27:40Z) - UnrealNAS: Can We Search Neural Architectures with Unreal Data? [84.78460976605425]
Neural architecture search (NAS) has shown great success in the automatic design of deep neural networks (DNNs).
Previous work has analyzed the necessity of having ground-truth labels in NAS and inspired broad interest.
We take a further step to question whether real data is necessary for NAS to be effective.
arXiv Detail & Related papers (2022-05-04T16:30:26Z) - Mixed-Block Neural Architecture Search for Medical Image Segmentation [0.0]
We propose a novel NAS search space for medical image segmentation networks.
It combines the strength of a generalised encoder-decoder structure, well known from U-Net, with network blocks that have proven to have a strong performance in image classification tasks.
We find that the networks discovered by our proposed NAS method have better performance than well-known handcrafted segmentation networks.
arXiv Detail & Related papers (2022-02-23T10:32:35Z) - BiX-NAS: Searching Efficient Bi-directional Architecture for Medical
Image Segmentation [85.0444711725392]
We study a multi-scale upgrade of a bi-directional skip connected network and then automatically discover an efficient architecture by a novel two-phase Neural Architecture Search (NAS) algorithm, namely BiX-NAS.
Our proposed method reduces the network computational cost by sifting out ineffective multi-scale features at different levels and iterations.
We evaluate BiX-NAS on two segmentation tasks using three different medical image datasets, and the experimental results show that our BiX-NAS searched architecture achieves the state-of-the-art performance with significantly lower computational cost.
arXiv Detail & Related papers (2021-06-26T14:33:04Z) - DiNTS: Differentiable Neural Network Topology Search for 3D Medical
Image Segmentation [7.003867673687463]
The Differentiable Network Topology Search scheme (DiNTS) is evaluated on the Medical Segmentation Decathlon (MSD) challenge.
Our method achieves state-of-the-art performance and the top ranking on the MSD challenge leaderboard.
arXiv Detail & Related papers (2021-03-29T21:02:42Z) - Memory-Efficient Hierarchical Neural Architecture Search for Image
Restoration [68.6505473346005]
We propose HiNAS, a memory-efficient hierarchical NAS, for image denoising and image super-resolution tasks.
With a single GTX1080Ti GPU, it takes only about 1 hour to search for the denoising network on BSD500 and 3.5 hours to search for the super-resolution structure on DIV2K.
arXiv Detail & Related papers (2020-12-24T12:06:17Z) - FNA++: Fast Network Adaptation via Parameter Remapping and Architecture
Search [35.61441231491448]
We propose a Fast Network Adaptation (FNA++) method, which can adapt both the architecture and parameters of a seed network.
In our experiments, we apply FNA++ on MobileNetV2 to obtain new networks for semantic segmentation, object detection, and human pose estimation.
The total computation cost of FNA++ is significantly less than SOTA segmentation and detection NAS approaches.
arXiv Detail & Related papers (2020-06-21T10:03:34Z) - DCNAS: Densely Connected Neural Architecture Search for Semantic Image
Segmentation [44.46852065566759]
We propose a Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information.
Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs.
We demonstrate that the architecture obtained from our DCNAS algorithm achieves state-of-the-art performances on public semantic image segmentation benchmarks.
arXiv Detail & Related papers (2020-03-26T13:21:33Z) - Fast Neural Network Adaptation via Parameter Remapping and Architecture
Search [35.61441231491448]
Deep neural networks achieve remarkable performance in many computer vision tasks.
Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone.
One major challenge though, is that ImageNet pre-training of the search space representation incurs huge computational cost.
In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network.
arXiv Detail & Related papers (2020-01-08T13:45:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.