DCELANM-Net: Medical Image Segmentation based on Dual Channel Efficient
Layer Aggregation Network with Learner
- URL: http://arxiv.org/abs/2304.09620v1
- Date: Wed, 19 Apr 2023 12:57:52 GMT
- Title: DCELANM-Net: Medical Image Segmentation based on Dual Channel Efficient
Layer Aggregation Network with Learner
- Authors: Chengzhun Lu, Zhangrun Xia, Krzysztof Przystupa, Orest Kochan, Jun Su
- Abstract summary: The DCELANM-Net structure is a model that ingeniously combines a Dual Channel Efficient Layer Aggregation Network (DCELAN) and a Micro Masked Autoencoder (Micro-MAE).
We adopted Micro-MAE as the learner of the model. Besides being straightforward in its methodology, it also offers a self-supervised learning method, which has the benefit of making the model highly scalable.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The DCELANM-Net structure, which this article offers, is a model
that ingeniously combines a Dual Channel Efficient Layer Aggregation Network
(DCELAN) and a Micro Masked Autoencoder (Micro-MAE). For the DCELAN, the
features are fitted more effectively by deepening the network structure: the
deeper network can learn and fuse the features, locating local feature
information more accurately, while widening the network structure and adding
residual connections improves how well each layer's channels are utilized. We
adopted Micro-MAE as the learner of the model. Besides being straightforward
in its methodology, it also offers a self-supervised learning method, which
has the benefit of making the model highly scalable.
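The Micro-MAE learner follows the masked-autoencoder recipe: a large random fraction of image patches is hidden and the model is trained to reconstruct them, which yields a self-supervised pretraining signal. A minimal NumPy sketch of the masking step only (the patch size, mask ratio, and function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def random_patch_mask(image, patch=4, mask_ratio=0.75, seed=0):
    """Split a square image into non-overlapping patches and zero out a
    random subset of them, MAE-style. Returns the masked image and the
    boolean per-patch mask (True = patch was hidden)."""
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    ph, pw = h // patch, w // patch
    n = ph * pw
    rng = np.random.default_rng(seed)
    hidden = np.zeros(n, dtype=bool)
    hidden[rng.choice(n, size=int(n * mask_ratio), replace=False)] = True
    masked = image.copy()
    for idx in np.flatnonzero(hidden):
        r, c = divmod(idx, pw)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked, hidden

img = np.ones((8, 8), dtype=np.float32)     # toy 8x8 "image"
masked, hidden = random_patch_mask(img)     # 4 patches, 3 of them hidden
```

In an MAE-style learner, only the visible patches would be encoded, and the reconstruction loss is computed on the hidden patches alone, which is what keeps the method cheap and scalable.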
Related papers
- KANDU-Net: A Dual-Channel U-Net with KAN for Medical Image Segmentation [0.0]
We present a novel architecture that integrates KAN networks with U-Net.
We introduce a KAN-convolution dual-channel structure that enables the model to more effectively capture both local and global features.
Experiments conducted across multiple datasets show that our model performs well in terms of accuracy.
arXiv Detail & Related papers (2024-09-30T15:41:51Z)
- An Efficient Speech Separation Network Based on Recurrent Fusion Dilated Convolution and Channel Attention [0.2538209532048866]
We present an efficient speech separation neural network, ARFDCN, which combines dilated convolutions, multi-scale fusion (MSF), and channel attention.
Experimental results indicate that the model achieves a decent balance between performance and computational efficiency.
arXiv Detail & Related papers (2023-06-09T13:30:27Z)
- Efficient Encoder-Decoder and Dual-Path Conformer for Comprehensive Feature Learning in Speech Enhancement [0.2538209532048866]
This paper proposes a time-frequency (T-F) domain speech enhancement network (DPCFCS-Net).
It incorporates improved densely connected blocks, dual-path modules, convolution-augmented transformers (conformers), channel attention, and spatial attention.
Compared with previous models, our proposed model has a more efficient encoder-decoder and can learn comprehensive features.
arXiv Detail & Related papers (2023-06-09T12:52:01Z)
- EMC2A-Net: An Efficient Multibranch Cross-channel Attention Network for SAR Target Classification [10.479559839534033]
This paper proposed two residual blocks, namely EMC2A blocks with multiscale receptive fields (RFs), based on a multibranch structure, and then designed an efficient isotopic architecture deep CNN (DCNN), EMC2A-Net.
EMC2A blocks utilize parallel dilated convolution with different dilation rates, which can effectively capture multiscale context features without significantly increasing the computational burden.
This paper proposed a multiscale feature cross-channel attention module, namely the EMC2A module, adopting a local multiscale feature interaction strategy without dimensionality reduction.
arXiv Detail & Related papers (2022-08-03T04:31:52Z)
- Deep Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks [58.868899595936476]
This paper presents a new deep clustering approach termed image clustering with contrastive learning and multi-scale graph convolutional networks (IcicleGCN).
Experiments on multiple image datasets demonstrate the superior clustering performance of IcicleGCN over the state-of-the-art.
arXiv Detail & Related papers (2022-07-14T19:16:56Z)
- Channel-wise Knowledge Distillation for Dense Prediction [73.99057249472735]
We propose to align features channel-wise between the student and teacher networks.
We consistently achieve superior performance on three benchmarks with various network structures.
arXiv Detail & Related papers (2020-11-26T12:00:38Z)
- Convolutional Neural Network optimization via Channel Reassessment Attention module [19.566271646280978]
We propose a novel network optimization module, the Channel Reassessment Attention (CRA) module.
The CRA module uses channel attention with spatial information from feature maps to enhance the representational power of networks.
Experiments on ImageNet and MS datasets demonstrate that embedding CRA module on various networks effectively improves the performance under different evaluation standards.
arXiv Detail & Related papers (2020-10-12T11:27:17Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Efficient Crowd Counting via Structured Knowledge Transfer [122.30417437707759]
Crowd counting is an application-oriented task and its inference efficiency is crucial for real-world applications.
We propose a novel Structured Knowledge Transfer framework to generate a lightweight but still highly effective student network.
Our models obtain at least 6.5$\times$ speed-up on an Nvidia 1080 GPU and even achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-03-23T08:05:41Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
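Several entries above (ARFDCN, DPCFCS-Net, and the CRA module) rely on channel attention: each channel of a feature map is rescaled by a learned weight derived from globally pooled statistics. A minimal squeeze-and-excitation-style NumPy sketch of the idea (the shapes, bottleneck ratio, and weight names are illustrative assumptions, not any specific paper's design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W) feature map. Squeeze: global average pool per
    channel. Excite: two-layer bottleneck + sigmoid gives one weight in
    (0, 1) per channel, which rescales that channel's feature map."""
    squeeze = feat.mean(axis=(1, 2))                     # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) in (0, 1)
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2                                   # channels, reduction ratio
feat = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C // r, C)) * 0.1   # bottleneck: C -> C/r
w2 = rng.standard_normal((C, C // r)) * 0.1   # expand:     C/r -> C
out = channel_attention(feat, w1, w2)         # same shape as feat
```

Because the gate lies in (0, 1), each output channel is a damped copy of its input channel; informative channels are suppressed less, which is the reweighting effect these papers exploit.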
This list is automatically generated from the titles and abstracts of the papers in this site.