UXNet: Searching Multi-level Feature Aggregation for 3D Medical Image
Segmentation
- URL: http://arxiv.org/abs/2009.07501v1
- Date: Wed, 16 Sep 2020 06:50:57 GMT
- Title: UXNet: Searching Multi-level Feature Aggregation for 3D Medical Image
Segmentation
- Authors: Yuanfeng Ji, Ruimao Zhang, Zhen Li, Jiamin Ren, Shaoting Zhang, Ping
Luo
- Abstract summary: This paper proposes a novel NAS method for 3D medical image segmentation, named UXNet.
UXNet searches both the scale-wise feature aggregation strategies and the block-wise operators in the encoder-decoder network.
The architecture discovered by UXNet outperforms existing state-of-the-art models in terms of Dice on several public 3D medical image segmentation benchmarks.
- Score: 34.8581851257193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aggregating multi-level feature representations plays a critical role in
achieving robust volumetric medical image segmentation, which is important for
auxiliary diagnosis and treatment. Unlike recent neural architecture search
(NAS) methods, which typically search for the optimal operators in each network
layer but lack a good strategy for searching feature aggregations, this paper
proposes a novel NAS method for 3D medical image segmentation, named UXNet,
which searches both the scale-wise feature aggregation strategies and the
block-wise operators in the encoder-decoder network. UXNet has several
appealing benefits. (1) It significantly improves the flexibility of the
classical UNet architecture, which only aggregates encoder and decoder feature
representations at the same resolution. (2) A continuous relaxation of UXNet is
carefully designed, enabling its search to be performed in an efficient,
differentiable manner. (3) Extensive experiments demonstrate the effectiveness
of UXNet compared with recent NAS methods for medical image segmentation. The
architecture discovered by UXNet outperforms existing state-of-the-art models
in terms of Dice on several public 3D medical image segmentation benchmarks,
especially at boundary locations and for tiny tissues. The search cost of UXNet
is low: the best-performing network can be found in less than 1.5 days on two
TitanXP GPUs.
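To make the idea of a continuous relaxation concrete, the snippet below is a minimal, hypothetical PyTorch-style sketch (not the authors' released code) of differentiable scale-wise feature aggregation: a decoder stage blends candidate multi-scale features through softmax-weighted edges, so the aggregation strategy can be optimized by gradient descent in the spirit of DARTS-style NAS. The names ScaleAggregation and alpha are assumptions made for illustration.

```python
# Hypothetical sketch of differentiable scale-wise feature aggregation
# (an illustration in the spirit of UXNet's continuous relaxation,
# not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleAggregation(nn.Module):
    """Softly aggregate candidate multi-scale 3D features into one target scale."""

    def __init__(self, in_channels, out_channels, num_sources):
        super().__init__()
        # One 1x1x1 projection per candidate source feature map.
        self.projs = nn.ModuleList(
            nn.Conv3d(in_channels, out_channels, kernel_size=1)
            for _ in range(num_sources)
        )
        # Architecture parameters: one scalar weight per candidate source.
        self.alpha = nn.Parameter(torch.zeros(num_sources))

    def forward(self, features, target_size):
        # features: list of 5D tensors (N, C, D, H, W) at different resolutions.
        weights = F.softmax(self.alpha, dim=0)  # continuous relaxation of the choice
        out = 0.0
        for w, proj, feat in zip(weights, self.projs, features):
            feat = F.interpolate(feat, size=target_size,
                                 mode="trilinear", align_corners=False)
            out = out + w * proj(feat)  # soft selection over scales
        return out


if __name__ == "__main__":
    # Toy usage: three encoder features at different resolutions feed one decoder stage.
    feats = [torch.randn(1, 16, d, d, d) for d in (8, 16, 32)]
    agg = ScaleAggregation(in_channels=16, out_channels=16, num_sources=3)
    print(agg(feats, target_size=(16, 16, 16)).shape)  # torch.Size([1, 16, 16, 16, 16])
```

In a full search, one such alpha per aggregation point would be optimized jointly with the network weights (typically in a bi-level fashion), and only the strongest connections would be kept in the final discretized architecture.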
Related papers
- Improved distinct bone segmentation in upper-body CT through
multi-resolution networks [0.39583175274885335]
Distinct bone segmentation from upper-body CTs requires a large field of view and a computationally taxing 3D architecture.
This leads to low-resolution results lacking detail, or to localisation errors due to missing spatial context.
We propose end-to-end trainable segmentation networks that combine several 3D U-Nets working at different resolutions.
arXiv Detail & Related papers (2023-01-31T14:46:16Z) - Towards Bi-directional Skip Connections in Encoder-Decoder Architectures
and Beyond [95.46272735589648]
We propose backward skip connections that bring decoded features back to the encoder; a rough sketch of this bi-directional idea appears after the related-papers list below.
Our design can be jointly adopted with forward skip connections in any encoder-decoder architecture.
We propose a novel two-phase Neural Architecture Search (NAS) algorithm, BiX-NAS, to search for the best multi-scale skip connections.
arXiv Detail & Related papers (2022-03-11T01:38:52Z) - HyperSegNAS: Bridging One-Shot Neural Architecture Search with 3D
Medical Image Segmentation using HyperNet [51.60655410423093]
We introduce HyperSegNAS to enable one-shot Neural Architecture Search (NAS) for medical image segmentation.
We show that HyperSegNAS yields better-performing and more intuitive architectures than the previous state-of-the-art (SOTA) segmentation networks.
Our method is evaluated on public datasets from the Medical Segmentation Decathlon (MSD) challenge and achieves SOTA performance.
arXiv Detail & Related papers (2021-12-20T16:21:09Z) - BiX-NAS: Searching Efficient Bi-directional Architecture for Medical
Image Segmentation [85.0444711725392]
We study a multi-scale upgrade of a bi-directionally skip-connected network, and then automatically discover an efficient architecture with a novel two-phase Neural Architecture Search (NAS) algorithm, BiX-NAS.
Our proposed method reduces the network's computational cost by sifting out ineffective multi-scale features at different levels and iterations.
We evaluate BiX-NAS on two segmentation tasks using three different medical image datasets; the experimental results show that the BiX-NAS-searched architecture achieves state-of-the-art performance with significantly lower computational cost.
arXiv Detail & Related papers (2021-06-26T14:33:04Z) - Combined Depth Space based Architecture Search For Person
Re-identification [70.86236888223569]
We aim to design a lightweight network suitable for person re-identification (ReID).
We propose a novel search space called Combined Depth Space (CDS), based on which we search for an efficient network architecture, which we call CDNet.
We then propose a low-cost search strategy named the Top-k Sample Search strategy to make full use of the search space and avoid being trapped in local optima.
arXiv Detail & Related papers (2021-04-09T02:40:01Z) - Deep ensembles based on Stochastic Activation Selection for Polyp
Segmentation [82.61182037130406]
This work deals with medical image segmentation and in particular with accurate polyp detection and segmentation during colonoscopy examinations.
The basic architecture for image segmentation consists of an encoder and a decoder.
We compare several variants of the DeepLab architecture obtained by varying the decoder backbone.
arXiv Detail & Related papers (2021-04-02T02:07:37Z) - DiNTS: Differentiable Neural Network Topology Search for 3D Medical
Image Segmentation [7.003867673687463]
The proposed Differentiable Network Topology Search scheme (DiNTS) is evaluated on the Medical Segmentation Decathlon (MSD) challenge.
Our method achieves state-of-the-art performance and the top ranking on the MSD challenge leaderboard.
arXiv Detail & Related papers (2021-03-29T21:02:42Z) - Dilated SpineNet for Semantic Segmentation [5.6590540986523035]
Scale-permuted networks have shown promising results on object bounding box detection and instance segmentation.
In this work, we evaluate this meta-architecture design on semantic segmentation.
We propose SpineNet-Seg, a network discovered by NAS whose search starts from the DeepLabv3 system.
arXiv Detail & Related papers (2021-03-23T02:39:04Z) - KiU-Net: Overcomplete Convolutional Architectures for Biomedical Image
and Volumetric Segmentation [71.79090083883403]
"Traditional" encoder-decoder based approaches perform poorly in detecting smaller structures and are unable to segment boundary regions precisely.
We propose KiU-Net which has two branches: (1) an overcomplete convolutional network Kite-Net which learns to capture fine details and accurate edges of the input, and (2) U-Net which learns high level features.
The proposed method achieves better performance than all the recent methods, with the additional benefits of fewer parameters and faster convergence.
arXiv Detail & Related papers (2020-10-04T19:23:33Z) - Dual Encoder Fusion U-Net (DEFU-Net) for Cross-manufacturer Chest X-ray
Segmentation [10.965529320634326]
We propose a dual encoder fusion U-Net framework for chest X-rays based on an Inception convolutional neural network with dilation.
DEFU-Net achieves better performance than the basic U-Net, residual U-Net, BCDU-Net, R2U-Net and attention R2U-Net.
arXiv Detail & Related papers (2020-09-11T15:57:44Z) - Boundary-aware Context Neural Network for Medical Image Segmentation [15.585851505721433]
Medical image segmentation can provide a reliable basis for further clinical analysis and disease diagnosis.
Most existing CNN-based methods produce unsatisfactory segmentation masks without accurate object boundaries.
In this paper, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation.
arXiv Detail & Related papers (2020-05-03T02:35:49Z)
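Two of the entries above (backward skip connections and BiX-NAS) revolve around bi-directional skips between encoder and decoder; as noted there, a rough sketch follows. It is a hypothetical illustration of the general idea, not the BiX-NAS implementation: a forward skip passes encoder features to the decoder, and a backward skip feeds the decoded features back into a second encoding pass. The names BiDirectionalUNetStage and block are made up for this example.

```python
# Hypothetical sketch of one level with forward and backward skip connections
# (illustrative only; not the BiX-NAS code).
import torch
import torch.nn as nn


def block(in_ch, out_ch):
    """A small conv block used by both passes."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class BiDirectionalUNetStage(nn.Module):
    """One encoder/decoder level with a forward skip and a backward skip."""

    def __init__(self, ch):
        super().__init__()
        self.encode = block(ch, ch)
        self.decode = block(2 * ch, ch)     # decoder consumes the forward skip
        self.re_encode = block(2 * ch, ch)  # second encoding pass consumes the backward skip

    def forward(self, x):
        enc = self.encode(x)                                  # first encoding pass
        dec = self.decode(torch.cat([enc, x], dim=1))         # forward skip: encoder -> decoder
        enc2 = self.re_encode(torch.cat([enc, dec], dim=1))   # backward skip: decoder -> encoder
        return enc2


if __name__ == "__main__":
    stage = BiDirectionalUNetStage(ch=8)
    print(stage(torch.randn(1, 8, 64, 64)).shape)  # torch.Size([1, 8, 64, 64])
```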