Fully Complex-valued Fully Convolutional Multi-feature Fusion Network
(FC2MFN) for Building Segmentation of InSAR images
- URL: http://arxiv.org/abs/2212.07084v1
- Date: Wed, 14 Dec 2022 08:17:39 GMT
- Authors: Aniruddh Sikdar, Sumanth Udupa, Suresh Sundaram, Narasimhan
Sundararajan
- Abstract summary: This paper proposes a Fully Complex-valued, Fully Convolutional Multi-feature Fusion Network (FC2MFN) for building semantic segmentation on InSAR images.
For the particularity of complex-valued InSAR data, a new complex-valued pooling layer is proposed that compares complex numbers considering their magnitude and phase.
FC2MFN outperforms other state-of-the-art methods in both segmentation performance and model complexity.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Building segmentation in high-resolution InSAR images is a challenging task
that can be useful for large-scale surveillance. Although complex-valued deep
learning networks perform better than their real-valued counterparts for
complex-valued SAR data, phase information is not retained throughout the
network, which causes a loss of information. This paper proposes a Fully
Complex-valued, Fully Convolutional Multi-feature Fusion Network (FC2MFN) for
building semantic segmentation on InSAR images using a novel, fully
complex-valued learning scheme. The network learns multi-scale features,
performs multi-feature fusion, and has a complex-valued output. For the
particularity of complex-valued InSAR data, a new complex-valued pooling layer
is proposed that compares complex numbers considering their magnitude and
phase. This helps the network retain the phase information even through the
pooling layer. Experimental results on the simulated InSAR dataset show that
FC2MFN outperforms other state-of-the-art methods in both segmentation
performance and model complexity.
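The key idea of the proposed pooling layer can be sketched as follows. This is a minimal illustration under an assumption, not the authors' implementation: the element with the largest magnitude in each window is assumed to win the comparison, and its full complex value (phase included) is passed through.

```python
import numpy as np

def complex_maxpool2d(x, k=2):
    """Complex-valued max pooling over non-overlapping k x k windows.

    Sketch of the FC2MFN-style pooling idea (assumed ordering rule:
    largest magnitude wins). The winning entry is returned as a full
    complex number, so its phase survives the pooling step.
    x: complex array of shape (H, W) with H and W divisible by k.
    """
    H, W = x.shape
    # Gather each k x k window into the last axis.
    win = x.reshape(H // k, k, W // k, k).transpose(0, 2, 1, 3)
    win = win.reshape(H // k, W // k, k * k)
    # Select the entry with the largest magnitude in every window.
    idx = np.argmax(np.abs(win), axis=-1)
    return np.take_along_axis(win, idx[..., None], axis=-1)[..., 0]

x = np.array([[1 + 1j, 0, 2, 1j],
              [0, 3, 0, 0],
              [2j, 0, 1, 0],
              [0, 1, 0, 5]], dtype=complex)
print(complex_maxpool2d(x))
# Window winners by magnitude: 3, 2, 2j, 5 (note 2j keeps its phase).
```

Unlike pooling the magnitude image alone, selecting the complex value itself keeps the interferometric phase available to later layers.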
Related papers
- PolSAM: Polarimetric Scattering Mechanism Informed Segment Anything Model [76.95536611263356]
PolSAR data presents unique challenges due to its rich and complex characteristics.
Existing data representations, such as complex-valued data, polarimetric features, and amplitude images, are widely used.
Most feature extraction networks for PolSAR are small, limiting their ability to capture features effectively.
We propose the Polarimetric Scattering Mechanism-Informed SAM (PolSAM), an enhanced Segment Anything Model (SAM) that integrates domain-specific scattering characteristics and a novel prompt generation strategy.
arXiv Detail & Related papers (2024-12-17T09:59:53Z)
- ICFRNet: Image Complexity Prior Guided Feature Refinement for Real-time Semantic Segmentation [21.292293903662927]
We leverage image complexity as a prior for refining segmentation features to achieve accurate real-time semantic segmentation.
We propose the Image Complexity prior-guided Feature Refinement Network (ICFRNet).
This network aggregates both complexity and segmentation features to produce an attention map for refining segmentation features.
arXiv Detail & Related papers (2024-08-25T08:42:24Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- ESDMR-Net: A Lightweight Network With Expand-Squeeze and Dual Multiscale Residual Connections for Medical Image Segmentation [7.921517156237902]
This paper presents an expand-squeeze dual multiscale residual network (ESDMR-Net).
It is a fully convolutional network that is well-suited for resource-constrained computing hardware such as mobile devices.
We present experiments on seven datasets spanning five distinct applications.
arXiv Detail & Related papers (2023-12-17T02:15:49Z)
- Multi-scale MRI reconstruction via dilated ensemble networks [2.8755060609190086]
We introduce an efficient multi-scale reconstruction network using dilated convolutions to preserve resolution.
Inspired by parallel dilated filters, multiple receptive fields are processed simultaneously with branches that see both large structural artefacts and fine local features.
arXiv Detail & Related papers (2023-10-07T06:49:57Z)
- Transformer-based Context Condensation for Boosting Feature Pyramids in Object Detection [77.50110439560152]
Current object detectors typically have a feature pyramid (FP) module for multi-level feature fusion (MFF).
We propose a novel and efficient context modeling mechanism that can help existing FPs deliver better MFF results.
In particular, we introduce a novel insight that comprehensive contexts can be decomposed and condensed into two types of representations for higher efficiency.
arXiv Detail & Related papers (2022-07-14T01:45:03Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- Sequential Hierarchical Learning with Distribution Transformation for Image Super-Resolution [83.70890515772456]
We build a sequential hierarchical learning super-resolution network (SHSR) for effective image SR.
We consider the inter-scale correlations of features, and devise a sequential multi-scale block (SMB) to progressively explore the hierarchical information.
Experiment results show SHSR achieves superior quantitative performance and visual quality to state-of-the-art methods.
arXiv Detail & Related papers (2020-07-19T01:35:53Z)
- Analysis of Deep Complex-Valued Convolutional Neural Networks for MRI Reconstruction [9.55767753037496]
We investigate end-to-end complex-valued convolutional neural networks for image reconstruction in lieu of two-channel real-valued networks.
We find that complex-valued CNNs with complex-valued convolutions provide superior reconstructions compared to real-valued convolutions with the same number of trainable parameters.
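The comparison above hinges on how a complex-valued convolution differs from a two-channel real one: a complex kernel ties its four underlying real correlations together via (a+ib)(c+id) = (ac-bd) + i(ad+bc), whereas an unconstrained two-channel real network learns those mixing terms freely. A minimal 1D sketch of this standard construction (illustrative helper, not code from any of the papers listed here):

```python
import numpy as np

def complex_conv1d(x, w):
    """Valid-mode 1D complex convolution built from four real ones.

    Implements (xr + i*xi) * (wr + i*wi)
             = (xr*wr - xi*wi) + i*(xr*wi + xi*wr),
    the weight-sharing pattern underlying complex-valued layers.
    """
    xr, xi, wr, wi = x.real, x.imag, w.real, w.imag
    re = np.convolve(xr, wr, "valid") - np.convolve(xi, wi, "valid")
    im = np.convolve(xr, wi, "valid") + np.convolve(xi, wr, "valid")
    return re + 1j * im

rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)
# Matches NumPy's native complex convolution.
assert np.allclose(complex_conv1d(x, w), np.convolve(x, w, "valid"))
```

The constraint is the point: for the same parameter count, the complex kernel's real and imaginary parts are reused across both output channels, which is what distinguishes it from a generic two-channel real-valued network.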
arXiv Detail & Related papers (2020-04-03T19:00:23Z)
- Co-VeGAN: Complex-Valued Generative Adversarial Network for Compressive Sensing MR Image Reconstruction [8.856953486775716]
We propose a novel framework based on a complex-valued generative adversarial network (Co-VeGAN).
Because the model processes complex-valued input directly, it can perform high-quality reconstruction of CS-MR images.
arXiv Detail & Related papers (2020-02-24T20:28:49Z)
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance local and global dense feature flow by fully exploiting hierarchical features from all convolution layers.
Technically, we propose an efficient and effective CNN framework, i.e., Fast Dense Residual Network (FDRN) for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.