A Model-data-driven Network Embedding Multidimensional Features for Tomographic SAR Imaging
- URL: http://arxiv.org/abs/2211.15002v1
- Date: Mon, 28 Nov 2022 02:01:43 GMT
- Title: A Model-data-driven Network Embedding Multidimensional Features for Tomographic SAR Imaging
- Authors: Yu Ren, Xiaoling Zhang, Xu Zhan, Jun Shi, Shunjun Wei, Tianjiao Zeng
- Abstract summary: We propose a new model-data-driven network to achieve tomoSAR imaging based on multi-dimensional features.
We add two 2D processing modules, both convolutional encoder-decoder structures, to effectively enhance the multi-dimensional features of the imaging scene.
Compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better completeness while maintaining decent imaging accuracy.
- Score: 5.489791364472879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL)-based tomographic SAR imaging algorithms have been increasingly studied. Typically, they use an unfolding network to mimic the iterative calculation of classical compressive sensing (CS)-based methods and process each range-azimuth unit individually. However, this approach effectively utilizes only one-dimensional features and directly ignores the correlation between adjacent resolution units. To address this, we propose a new model-data-driven network that achieves tomoSAR imaging based on multi-dimensional features. Guided by the deep unfolding methodology, a two-dimensional deep unfolding imaging network is constructed. On this basis, we add two 2D processing modules, both convolutional encoder-decoder structures, to effectively enhance the multi-dimensional features of the imaging scene. Meanwhile, to train the proposed multi-feature-based imaging network, we construct a tomoSAR simulation dataset consisting entirely of simulated building scenes. Experiments verify the effectiveness of the model: compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our method achieves better completeness while maintaining decent imaging accuracy.
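To make the described architecture concrete, below is a minimal PyTorch sketch of a 2D deep-unfolding network in this spirit: each stage mimics one ISTA/FISTA-style iteration for y = Ax (gradient step plus learned soft threshold), and a small convolutional encoder-decoder refines the stacked elevation profiles as a 2D scene so that adjacent resolution units inform each other. All names, sizes, and the module layout are illustrative assumptions (and the data are simplified to real-valued), not the authors' implementation.

```python
# Illustrative 2D deep-unfolding imaging network (assumption: real-valued
# data for simplicity; tomoSAR measurements are in fact complex-valued).
import torch
import torch.nn as nn

class ConvEncoderDecoder(nn.Module):
    """2D processing module: refines features across adjacent resolution units."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual refinement of the 2D scene estimate

class UnfoldedStage(nn.Module):
    """One unfolded iteration: gradient step + learned soft threshold + 2D module."""
    def __init__(self, A):
        super().__init__()
        self.register_buffer("A", A)                    # measurement matrix (M, N)
        self.step = nn.Parameter(torch.tensor(0.1))     # learned step size
        self.thresh = nn.Parameter(torch.tensor(0.01))  # learned threshold
        self.refine = ConvEncoderDecoder()

    def forward(self, x, y):
        # x: (B, units, N) stacked elevation profiles, y: (B, units, M) echoes
        grad = (x @ self.A.T - y) @ self.A              # grad of 0.5*||Ax - y||^2
        x = x - self.step * grad
        x = torch.sign(x) * torch.relu(x.abs() - self.thresh)  # soft threshold
        b, u, n = x.shape                               # treat profiles as a 2D scene
        return self.refine(x.reshape(b, 1, u, n)).reshape(b, u, n)

class UnfoldedTomoNet(nn.Module):
    def __init__(self, A, n_stages=6):
        super().__init__()
        self.stages = nn.ModuleList([UnfoldedStage(A) for _ in range(n_stages)])
        self.N = A.shape[1]

    def forward(self, y):
        x = y.new_zeros(y.shape[0], y.shape[1], self.N)  # zero initialization
        for stage in self.stages:
            x = stage(x, y)
        return x
```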
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- Simple 2D Convolutional Neural Network-based Approach for COVID-19 Detection [8.215897530386343]
This study explores the use of deep learning techniques for analyzing lung Computed Tomography (CT) images.
We propose an advanced Spatial-Slice Feature Learning (SSFL++) framework specifically tailored for CT scans.
It aims to filter out out-of-distribution (OOD) data within the entire CT scan, allowing essential spatial-slice features to be selected for analysis while reducing data redundancy by 70%.
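Purely as a hypothetical sketch of this kind of slice selection (the scoring model and the keep ratio are assumptions, not the SSFL++ implementation):

```python
# Hypothetical slice-selection sketch: score each CT slice with a learned
# relevance/OOD scorer and keep the top 30%, i.e. cut redundancy by ~70%.
# `score_slice` stands in for whatever scorer the method actually learns.
import numpy as np

def select_slices(scan: np.ndarray, score_slice, keep_ratio: float = 0.3):
    """scan: (num_slices, H, W). Returns the highest-scoring slices, in order."""
    scores = np.array([score_slice(s) for s in scan])
    k = max(1, int(len(scan) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])  # indices of top-k, in scan order
    return scan[keep], keep
```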
arXiv Detail & Related papers (2024-03-17T14:34:51Z)
- The R2D2 deep neural network series paradigm for fast precision imaging in radio astronomy [1.7249361224827533]
Recent image reconstruction techniques offer remarkable imaging precision, well beyond CLEAN's capability.
We introduce a novel deep learning approach, dubbed "Residual-to-Residual DNN series for high-Dynamic range imaging" (R2D2).
R2D2's capability to deliver high precision is demonstrated in simulation, across a variety of image observation settings using the Very Large Array (VLA).
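A schematic of the residual-to-residual idea as summarized above (hypothetical names: `networks`, `measure`, and `adjoint` stand in for the trained DNN series and the interferometric measurement operator and its adjoint):

```python
# Schematic R2D2-style reconstruction: each DNN in the series maps the current
# (image, residual dirty image) pair to an image update; the data residual is
# then re-evaluated. All operator names are hypothetical stand-ins.
import torch

def r2d2_reconstruct(dirty, networks, measure, adjoint):
    """dirty: (H, W) dirty image; each net maps a (2, H, W) pair to (H, W)."""
    x = torch.zeros_like(dirty)                   # current image estimate
    residual = dirty.clone()                      # residual dirty image
    for net in networks:
        x = x + net(torch.stack([x, residual]))  # residual update from DNN i
        residual = dirty - adjoint(measure(x))   # back-projected data residual
    return x
```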
arXiv Detail & Related papers (2024-03-08T16:57:54Z)
- Multi-Projection Fusion and Refinement Network for Salient Object Detection in 360° Omnidirectional Image [141.10227079090419]
We propose a Multi-Projection Fusion and Refinement Network (MPFR-Net) to detect salient objects in 360° omnidirectional images.
MPFR-Net uses the equirectangular projection image and four corresponding cube-unfolding images as inputs.
Experimental results on two omnidirectional datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-23T14:50:40Z)
- Model Inspired Autoencoder for Unsupervised Hyperspectral Image Super-Resolution [25.878793557013207]
This paper focuses on hyperspectral image (HSI) super-resolution that aims to fuse a low-spatial-resolution HSI and a high-spatial-resolution multispectral image.
Existing deep learning-based approaches are mostly supervised and rely on a large number of labeled training samples.
We make the first attempt to design a model inspired deep network for HSI super-resolution in an unsupervised manner.
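For context, model-inspired fusion methods of this kind typically start from the standard degradation model that links the target HSI to both observations; below is a generic sketch of that model (the blur kernel, downsampling, and spectral response used here are illustrative, not this paper's exact formulation):

```python
# Generic HSI/MSI degradation model often underlying model-inspired fusion:
# the low-res HSI is a spatially blurred + downsampled version of the target,
# and the high-res MSI is a spectrally downsampled version of it.
import numpy as np
from scipy.ndimage import convolve

def degrade(X, psf, stride, R):
    """X: target HSI (bands, H, W); psf: spatial blur kernel (k, k);
    stride: spatial downsampling factor; R: spectral response (msi_bands, bands)."""
    blurred = np.stack([convolve(band, psf, mode="nearest") for band in X])
    Y_h = blurred[:, ::stride, ::stride]       # low-res HSI observation
    Y_m = np.tensordot(R, X, axes=([1], [0]))  # high-res MSI observation
    return Y_h, Y_m
```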
arXiv Detail & Related papers (2021-10-22T05:15:16Z)
- Dynamic Proximal Unrolling Network for Compressive Sensing Imaging [29.00266254916676]
We present a dynamic proximal unrolling network (dubbed DPUNet), which can handle a variety of measurement matrices via one single model without retraining.
Specifically, DPUNet exploits both the embedded physical model, via gradient descent, and a learned dynamic proximal mapping that imposes the image prior.
Experimental results demonstrate that the proposed DPUNet can effectively handle multiple CSI modalities under varying sampling ratios and noise levels with only one model.
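A minimal sketch of one proximal-unrolling iteration in this spirit (hypothetical names; the dynamic generation of the proximal network's weights is elided, and a fixed small CNN stands in for the proximal mapping):

```python
# One unrolled proximal-gradient iteration in the DPUNet spirit: a gradient
# step on the data term 0.5 * ||A x - y||^2 for the current measurement
# matrix A, followed by a learned proximal mapping. Assumes square images.
import torch
import torch.nn as nn

class ProximalStep(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size
        self.prox = nn.Sequential(                   # learned proximal mapping
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x, y, A):
        # x: (B, H*W) flattened image, y: (B, M), A: (M, H*W)
        x = x - self.step * (x @ A.T - y) @ A        # gradient step on data term
        b = x.shape[0]
        side = int(x.shape[1] ** 0.5)                # square-image assumption
        x_img = x.reshape(b, 1, side, side)
        return (x_img + self.prox(x_img)).reshape(b, -1)  # residual prox update
```

Because the measurement matrix A is a runtime input rather than a baked-in constant, the same module can in principle serve different sampling setups, which is the "one model, many measurement matrices" property the paper emphasizes.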
arXiv Detail & Related papers (2021-07-23T03:04:44Z)
- Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose to adopt graph propagation to capture the observed spatial contexts.
We then apply an attention mechanism to the propagation, which encourages the network to model the contextual information adaptively.
Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two benchmarks.
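As an illustration of the gated-fusion step (a generic sketch, not ACMNet's actual module; channel sizes and names are assumptions):

```python
# Generic symmetric gated fusion of two modality feature maps: each branch
# derives a gate from both inputs and uses it to blend in the other branch's
# features, symmetrically.
import torch
import torch.nn as nn

class SymmetricGatedFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gate_a = nn.Conv2d(2 * ch, ch, 1)  # gate for branch A
        self.gate_b = nn.Conv2d(2 * ch, ch, 1)  # gate for branch B

    def forward(self, feat_a, feat_b):
        both = torch.cat([feat_a, feat_b], dim=1)
        g_a = torch.sigmoid(self.gate_a(both))
        g_b = torch.sigmoid(self.gate_b(both))
        fused_a = feat_a + g_a * feat_b  # inject B into A, gated
        fused_b = feat_b + g_b * feat_a  # inject A into B, gated
        return fused_a, fused_b
```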
arXiv Detail & Related papers (2020-08-25T06:00:06Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
- DeepEMD: Differentiable Earth Mover's Distance for Few-Shot Learning [122.51237307910878]
We develop methods for few-shot image classification from a new perspective of optimal matching between image regions.
We employ the Earth Mover's Distance (EMD) as a metric to compute a structural distance between dense image representations.
To generate the importance weights of the elements in the formulation, we design a cross-reference mechanism.
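For illustration, a common differentiable stand-in for the EMD between dense region features is the entropically regularized (Sinkhorn) distance; note that DeepEMD itself solves the exact EMD as a linear program and learns cross-reference weights, whereas the sketch below uses uniform weights:

```python
# Differentiable transport-distance sketch between two dense feature sets.
# This Sinkhorn approximation is shown purely for illustration; DeepEMD
# computes the exact EMD, with cross-reference weights instead of the
# uniform weights used here.
import torch

def sinkhorn_distance(f1, f2, eps=0.05, iters=50):
    """f1: (n, d), f2: (m, d) L2-normalized region embeddings."""
    cost = 1.0 - f1 @ f2.T                  # cosine cost matrix (n, m)
    a = torch.full((f1.shape[0],), 1.0 / f1.shape[0])
    b = torch.full((f2.shape[0],), 1.0 / f2.shape[0])
    K = torch.exp(-cost / eps)              # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(iters):                  # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]      # transport plan
    return (plan * cost).sum()              # approximate EMD
```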
arXiv Detail & Related papers (2020-03-15T08:13:16Z)
- Concurrently Extrapolating and Interpolating Networks for Continuous Model Generation [34.72650269503811]
We propose a simple yet effective model generation strategy to form a sequence of models, requiring only a set of specific-effect label images.
We show that the proposed method is capable of producing a series of continuous models and achieves better performance than several state-of-the-art methods for image smoothing.
arXiv Detail & Related papers (2020-01-12T04:44:44Z)