DRU-net: An Efficient Deep Convolutional Neural Network for Medical
Image Segmentation
- URL: http://arxiv.org/abs/2004.13453v1
- Date: Tue, 28 Apr 2020 12:16:24 GMT
- Title: DRU-net: An Efficient Deep Convolutional Neural Network for Medical
Image Segmentation
- Authors: Mina Jafari, Dorothee Auer, Susan Francis, Jonathan Garibaldi, Xin
Chen
- Abstract summary: Residual network (ResNet) and densely connected network (DenseNet) have significantly improved the training efficiency and performance of deep convolutional neural networks (DCNNs).
We propose an efficient network architecture by considering advantages of both networks.
- Score: 2.3574651879602215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Residual network (ResNet) and densely connected network (DenseNet) have
significantly improved the training efficiency and performance of deep
convolutional neural networks (DCNNs) mainly for object classification tasks.
In this paper, we propose an efficient network architecture by considering
advantages of both networks. The proposed method is integrated into an
encoder-decoder DCNN model for medical image segmentation. Our method adds
additional skip connections compared to ResNet but uses significantly fewer
model parameters than DenseNet. We evaluate the proposed method on a public
dataset (ISIC 2018 grand-challenge) for skin lesion segmentation and a local
brain MRI dataset. In comparison with ResNet-based, DenseNet-based and
attention network (AttnNet) based methods within the same encoder-decoder
network structure, our method achieves significantly higher segmentation
accuracy with fewer model parameters than DenseNet and AttnNet. The
code is available on GitHub: https://github.com/MinaJf/DRU-net.
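The combination the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): assume a block that keeps a ResNet-style additive shortcut but performs only a single DenseNet-style concatenation of the block input, one plausible way to gain extra skip connections while keeping parameter growth far below DenseNet's all-to-all concatenation. `conv_like` is a stand-in for any learned, shape-preserving transform.

```python
import numpy as np

def conv_like(x, scale=0.5):
    # Stand-in for a conv + BN + ReLU transform (hypothetical; any
    # learned mapping that preserves the feature-map shape works here).
    return np.maximum(x * scale, 0.0)

def dru_block(x):
    """Sketch mixing ResNet and DenseNet ideas (assumption: the block
    adds a residual shortcut like ResNet, then concatenates only the
    block input rather than all preceding feature maps as DenseNet
    does, so the channel count grows slowly)."""
    f = conv_like(x)
    residual = x + f                              # ResNet-style additive shortcut
    out = np.concatenate([x, residual], axis=-1)  # single dense-style concatenation
    return out

x = np.ones((4, 4, 8))  # toy feature map: H x W x C
y = dru_block(x)
print(y.shape)          # channels double: (4, 4, 16)
```

Because only the block input is concatenated, the channel count doubles once per block instead of growing linearly with every preceding layer, which is where the parameter savings relative to DenseNet would come from.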
Related papers
- SODAWideNet -- Salient Object Detection with an Attention augmented Wide
Encoder Decoder network without ImageNet pre-training [3.66237529322911]
We explore developing a neural network from scratch directly trained on Salient Object Detection without ImageNet pre-training.
We propose SODAWideNet, an encoder-decoder-style network for Salient Object Detection.
Two variants, SODAWideNet-S (3.03M) and SODAWideNet (9.03M), achieve competitive performance against state-of-the-art models on five datasets.
arXiv Detail & Related papers (2023-11-08T16:53:44Z) - Neural network relief: a pruning algorithm based on neural activity [47.57448823030151]
We propose a simple importance-score metric that deactivates unimportant connections.
We achieve comparable performance for LeNet architectures on MNIST.
The algorithm is not designed to minimize FLOPs when considering current hardware and software implementations.
arXiv Detail & Related papers (2021-09-22T15:33:49Z) - Adder Neural Networks [75.54239599016535]
We present adder networks (AdderNets) that trade the massive multiplications in deep neural networks for much cheaper additions.
In AdderNets, we take the $\ell_p$-norm distance between the filters and the input feature as the output response.
We show that the proposed AdderNets can achieve 75.7% Top-1 accuracy and 92.3% Top-5 accuracy using ResNet-50 on the ImageNet dataset.
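The $\ell_p$-norm response above can be made concrete with a toy example. The sketch below assumes the AdderNet convention of using the negative $\ell_1$ distance between a filter and an input patch as the output response (larger means a better match); the names and shapes are illustrative only.

```python
import numpy as np

def adder_response(patch, filt, p=1):
    # Output response as the NEGATIVE l_p-norm distance between the
    # filter and the input patch (p=1 in the AdderNet formulation),
    # so no multiplications between inputs and weights are needed.
    return -np.sum(np.abs(patch - filt) ** p)

# A 3x3 filter applied at a single location of a 3x3 input patch.
patch = np.array([[1., 2., 0.],
                  [0., 1., 2.],
                  [1., 0., 1.]])
filt  = np.array([[1., 1., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])
print(adder_response(patch, filt))  # -2.0 (two unit mismatches, negated)
```

A full adder layer would slide this response over every spatial location and filter, replacing the multiply-accumulate of ordinary convolution with subtract-absolute-accumulate.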
arXiv Detail & Related papers (2021-05-29T04:02:51Z) - BCNet: Searching for Network Width with Bilaterally Coupled Network [56.14248440683152]
We introduce a new supernet called Bilaterally Coupled Network (BCNet) to address this issue.
In BCNet, each channel is fairly trained and responsible for the same number of network widths, so each network width can be evaluated more accurately.
Our method achieves state-of-the-art or competing performance over other baseline methods.
arXiv Detail & Related papers (2021-05-21T18:54:03Z) - Densely Connected Recurrent Residual (Dense R2UNet) Convolutional Neural
Network for Segmentation of Lung CT Images [0.342658286826597]
We present a synthesis of Recurrent CNN, Residual Network and Dense Convolutional Network based on the U-Net model architecture.
The proposed model tested on the benchmark Lung Lesion dataset showed better performance on segmentation tasks than its equivalent models.
arXiv Detail & Related papers (2021-02-01T06:34:10Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Rather than using the same fixed path for every input, DG-Net aggregates features dynamically at each node, giving the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - FNA++: Fast Network Adaptation via Parameter Remapping and Architecture
Search [35.61441231491448]
We propose a Fast Network Adaptation (FNA++) method, which can adapt both the architecture and parameters of a seed network.
In our experiments, we apply FNA++ on MobileNetV2 to obtain new networks for semantic segmentation, object detection, and human pose estimation.
The total computation cost of FNA++ is significantly less than SOTA segmentation and detection NAS approaches.
arXiv Detail & Related papers (2020-06-21T10:03:34Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - EdgeNets:Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
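The edge-varying idea above can be sketched concretely. In this minimal, hypothetical illustration, every directed edge carries its own scalar weight, so each node weighs each neighbor differently; an ordinary GCN layer is recovered when the weight matrix is a shared scalar times the adjacency matrix. Names and shapes are assumptions, not the paper's API.

```python
import numpy as np

def edge_varying_layer(x, edge_w):
    """One linear, local aggregation step: edge_w[i, j] is the weight
    node i assigns to neighbor j's feature (zero where no edge exists),
    so different nodes use different parameters for different neighbors."""
    return edge_w @ x  # aggregate neighbor features with per-edge weights

# 3-node toy graph, one scalar feature per node, distinct per-edge weights.
x = np.array([[1.0], [2.0], [3.0]])
edge_w = np.array([[0.0, 0.5, 0.0],
                   [1.0, 0.0, 2.0],
                   [0.0, 0.0, 0.0]])
print(edge_varying_layer(x, edge_w).ravel())  # [1. 7. 0.]
```

Stacking such layers with nonlinearities in between, and tying or untying the per-edge weights, moves between GCNN-like and attention-like behavior under this one formulation.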
arXiv Detail & Related papers (2020-01-21T15:51:17Z) - Fast Neural Network Adaptation via Parameter Remapping and Architecture
Search [35.61441231491448]
Deep neural networks achieve remarkable performance in many computer vision tasks.
Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone.
One major challenge though, is that ImageNet pre-training of the search space representation incurs huge computational cost.
In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network.
arXiv Detail & Related papers (2020-01-08T13:45:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.