Coupling Model-Driven and Data-Driven Methods for Remote Sensing Image
Restoration and Fusion
- URL: http://arxiv.org/abs/2108.06073v1
- Date: Fri, 13 Aug 2021 06:00:31 GMT
- Title: Coupling Model-Driven and Data-Driven Methods for Remote Sensing Image
Restoration and Fusion
- Authors: Huanfeng Shen, Menghui Jiang, Jie Li, Chenxia Zhou, Qiangqiang Yuan
and Liangpei Zhang
- Abstract summary: The model-driven methods take the imaging mechanism into account, making them deterministic and theoretically sound.
The data-driven methods are better at learning prior knowledge from massive data.
The interpretability of the networks is poor, and they are overly dependent on training data.
- Score: 19.728138983829872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the fields of image restoration and image fusion, model-driven methods and
data-driven methods are the two representative frameworks. However, both
approaches have their respective advantages and disadvantages. The model-driven
methods take the imaging mechanism into account, which makes them deterministic
and theoretically sound; however, they cannot easily model complicated nonlinear
problems. The data-driven methods are better at learning prior knowledge from
massive data, especially nonlinear statistical features; however, the
interpretability of the networks is poor, and they are overly dependent on
training data. In this paper, we systematically investigate
the coupling of model-driven and data-driven methods, which has rarely been
considered in the remote sensing image restoration and fusion communities. We
are the first to summarize the coupling approaches into the following three
categories: 1) data-driven and model-driven cascading methods; 2) variational
models with embedded learning; and 3) model-constrained network learning
methods. The typical existing and potential coupling methods for remote sensing
image restoration and fusion are introduced with application examples. This
paper also gives some new insights into the potential future directions, in
terms of both methods and applications.
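To make the second category above more concrete, the following is a minimal sketch (not code from the paper) of a variational model with embedded learning in the plug-and-play spirit: a half-quadratic-splitting loop whose data-fidelity step follows a known linear imaging model and whose prior step calls a learned denoiser. The 1-D degradation operator and the stand-in `denoiser` are illustrative assumptions; in practice the denoiser would be a pretrained CNN.

```python
import numpy as np

def data_step(z, y, H, mu):
    """Model-driven update: argmin_x ||H x - y||^2 + mu ||x - z||^2 (closed form)."""
    A = H.T @ H + mu * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y + mu * z)

def denoiser(x):
    """Stand-in for a pretrained CNN denoiser (here just moving-average smoothing)."""
    kernel = np.ones(5) / 5.0
    return np.convolve(x, kernel, mode="same")

def pnp_restore(y, H, mu=1.0, iters=20):
    """Alternate the data-driven prior step and the model-driven fidelity step."""
    x = H.T @ y                      # rough model-based initialization
    for _ in range(iters):
        z = denoiser(x)              # data-driven prior step (learned in practice)
        x = data_step(z, y, H, mu)   # model-driven data-fidelity step
    return x

# Toy 1-D example with a known linear degradation operator H (an assumption).
rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0, 4 * np.pi, 64))
H = np.eye(64) + 0.1 * rng.standard_normal((64, 64))
y = H @ x_true + 0.05 * rng.standard_normal(64)
x_hat = pnp_restore(y, H)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Loosely speaking, the same alternation covers category 1 when the two steps are applied once in sequence rather than iterated, and category 3 when the loop is unrolled into a fixed number of trainable stages.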
Related papers
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities [89.40778301238642]
Model merging is an efficient empowerment technique in the machine learning community.
There is a significant gap in the literature regarding a systematic and thorough review of these techniques.
arXiv Detail & Related papers (2024-08-14T16:58:48Z) - Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
arXiv Detail & Related papers (2024-03-07T19:02:13Z) - Rethinking Generative Methods for Image Restoration in Physics-based
Vision: A Theoretical Analysis from the Perspective of Information [19.530052941884996]
End-to-end generative methods are considered a more promising solution for image restoration in physics-based vision.
However, existing generative methods still have plenty of room for improvement in quantitative performance.
In this study, we try to re-interpret these generative methods for image restoration tasks using information theory.
arXiv Detail & Related papers (2022-12-05T12:16:27Z) - Deep Learning for Material Decomposition in Photon-Counting CT [0.5801044612920815]
We present a novel deep-learning solution for material decomposition in PCCT, based on an unrolled/unfolded iterative network.
Our approach outperforms a maximum likelihood estimation, a variational method, as well as a fully-learned network.
arXiv Detail & Related papers (2022-08-05T19:05:16Z) - Discriminative Multimodal Learning via Conditional Priors in Generative
Models [21.166519800652047]
This research studies the realistic scenario in which all modalities and class labels are available for model training.
We show, in this scenario, that the variational lower bound limits mutual information between joint representations and missing modalities.
arXiv Detail & Related papers (2021-10-09T17:22:24Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.