KANDU-Net: A Dual-Channel U-Net with KAN for Medical Image Segmentation
- URL: http://arxiv.org/abs/2409.20414v1
- Date: Mon, 30 Sep 2024 15:41:51 GMT
- Title: KANDU-Net: A Dual-Channel U-Net with KAN for Medical Image Segmentation
- Authors: Chenglin Fang, Kaigui Wu
- Abstract summary: We present a novel architecture that integrates KAN networks with U-Net.
We introduce a KAN-convolution dual-channel structure that enables the model to more effectively capture both local and global features.
Experiments conducted across multiple datasets show that our model performs well in terms of accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The U-Net model has consistently demonstrated strong performance in the field of medical image segmentation, with various improvements and enhancements made since its introduction. This paper presents a novel architecture that integrates KAN networks with U-Net, leveraging the powerful nonlinear representation capabilities of KAN networks alongside the established strengths of U-Net. We introduce a KAN-convolution dual-channel structure that enables the model to more effectively capture both local and global features. We explore effective methods for fusing features extracted by KAN with those obtained through convolutional layers, utilizing an auxiliary network to facilitate this integration process. Experiments conducted across multiple datasets show that our model performs well in terms of accuracy, indicating that the KAN-convolution dual-channel approach has significant potential in medical image segmentation tasks.
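The abstract describes the architecture only at a high level, so the following is a minimal NumPy sketch of one plausible reading of the dual-channel idea: a convolutional branch for local features, a KAN-style branch of learnable univariate functions applied pointwise (expanded here in a fixed Gaussian basis, which is an assumption), fused by concatenation and a 1x1 convolution standing in for the auxiliary fusion network. All shapes, the basis choice, and the function names are illustrative, not the authors' implementation.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same'-padded 2D convolution: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    H, W, _ = x.shape
    out = np.empty((H, W, w.shape[-1]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]               # (k, k, Cin) window
            out[i, j] = np.tensordot(patch, w, axes=3)    # contract to (Cout,)
    return out

def kan_activation(x, centers, coefs):
    """KAN-style branch: a learnable 1D function per channel, applied elementwise.
    f_c(v) = sum_m coefs[c, m] * exp(-(v - centers[m])**2).
    x: (H, W, C); centers: (M,); coefs: (C, M)."""
    basis = np.exp(-(x[..., None] - centers) ** 2)        # (H, W, C, M)
    return np.einsum('hwcm,cm->hwc', basis, coefs)

def dual_channel_block(x, w_conv, centers, coefs, w_fuse):
    """Run both branches on the same input, then fuse with a 1x1 convolution."""
    conv_feat = np.maximum(conv2d(x, w_conv), 0.0)        # local features (ReLU)
    kan_feat = kan_activation(x, centers, coefs)          # pointwise nonlinear features
    fused = np.concatenate([conv_feat, kan_feat], axis=-1)
    return conv2d(fused, w_fuse)                          # 1x1 fusion projection
```

A full model would stack such blocks inside a U-Net encoder-decoder; here the 1x1 fusion convolution is the simplest stand-in for the paper's auxiliary fusion network.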
Related papers
- iiANET: Inception Inspired Attention Hybrid Network for efficient Long-Range Dependency [0.0]
We introduce iiANET (Inception Inspired Attention Network), an efficient hybrid model designed to capture long-range dependencies in complex images.
The fundamental building block, iiABlock, integrates global 2D-MHSA (Multi-Head Self-Attention) with Registers, MBConv2 (MobileNetV2-based convolution), and dilated convolution in parallel.
We serially integrate an ECANET (Efficient Channel Attention Network) at the end of each iiABlock to calibrate channel-wise attention for enhanced model performance.
arXiv Detail & Related papers (2024-07-10T12:39:02Z) - U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation [48.40120035775506]
Kolmogorov-Arnold Networks (KANs) reshape the neural network learning via the stack of non-linear learnable activation functions.
We investigate, modify and re-design the established U-Net pipeline by integrating the dedicated KAN layers on the tokenized intermediate representation, termed U-KAN.
We further delved into the potential of U-KAN as an alternative U-Net noise predictor in diffusion models, demonstrating its applicability in generating task-oriented model architectures.
arXiv Detail & Related papers (2024-06-05T04:13:03Z) - An Efficient Speech Separation Network Based on Recurrent Fusion Dilated Convolution and Channel Attention [0.2538209532048866]
We present an efficient speech separation neural network, ARFDCN, which combines dilated convolutions, multi-scale fusion (MSF), and channel attention.
Experimental results indicate that the model achieves a decent balance between performance and computational efficiency.
arXiv Detail & Related papers (2023-06-09T13:30:27Z) - DCELANM-Net: Medical Image Segmentation based on Dual Channel Efficient Layer Aggregation Network with Learner [0.0]
The DCELANM-Net structure is a model that ingeniously combines a Dual Channel Efficient Layer Aggregation Network (DCELAN) and a Micro Masked Autoencoder (Micro-MAE).
We adopted Micro-MAE as the learner of the model. Beyond being methodologically straightforward, it offers a self-supervised learning method, which has the benefit of being highly scalable.
arXiv Detail & Related papers (2023-04-19T12:57:52Z) - A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
arXiv Detail & Related papers (2022-10-27T13:24:08Z) - Activating More Pixels in Image Super-Resolution Transformer [53.87533738125943]
Transformer-based methods have shown impressive performance in low-level vision tasks, such as image super-resolution.
We propose a novel Hybrid Attention Transformer (HAT) to activate more input pixels for better reconstruction.
Our overall method significantly outperforms the state-of-the-art methods by more than 1 dB.
arXiv Detail & Related papers (2022-05-09T17:36:58Z) - Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z) - Distributed Conditional Generative Adversarial Networks (GANs) for Data-Driven Millimeter Wave Communications in UAV Networks [116.94802388688653]
A novel framework is proposed to perform data-driven air-to-ground (A2G) channel estimation for millimeter wave (mmWave) communications in an unmanned aerial vehicle (UAV) wireless network.
An effective channel estimation approach is developed, allowing each UAV to train a stand-alone channel model via a conditional generative adversarial network (CGAN) along each beamforming direction.
A cooperative framework, based on a distributed CGAN architecture, is developed, allowing each UAV to collaboratively learn the mmWave channel distribution.
arXiv Detail & Related papers (2021-02-02T20:56:46Z) - Densely Connected Recurrent Residual (Dense R2UNet) Convolutional Neural Network for Segmentation of Lung CT Images [0.342658286826597]
We present a synthesis of Recurrent CNN, Residual Network and Dense Convolutional Network based on the U-Net model architecture.
The proposed model tested on the benchmark Lung Lesion dataset showed better performance on segmentation tasks than its equivalent models.
arXiv Detail & Related papers (2021-02-01T06:34:10Z) - Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.