OBSUM: An object-based spatial unmixing model for spatiotemporal fusion
of remote sensing images
- URL: http://arxiv.org/abs/2310.09517v1
- Date: Sat, 14 Oct 2023 07:07:27 GMT
- Title: OBSUM: An object-based spatial unmixing model for spatiotemporal fusion
of remote sensing images
- Authors: Houcai Guo, Dingqi Ye, Lorenzo Bruzzone
- Abstract summary: This study proposes Object-Based Spatial Unmixing Model (OBSUM), which incorporates object-based image analysis and spatial unmixing.
OBSUM can be applied using only one fine image at the base date and one coarse image at the prediction date, without the need for a coarse image at the base date.
It has great potential to generate accurate and high-resolution time-series for supporting various remote sensing applications.
- Score: 12.94382743563284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatiotemporal fusion aims to improve both the spatial and temporal
resolution of remote sensing images, thus facilitating time-series analysis at
a fine spatial scale. However, there are several important issues that limit
the application of current spatiotemporal fusion methods. First, most
spatiotemporal fusion methods are based on pixel-level computation, which
neglects the valuable object-level information of the land surface. Moreover,
many existing methods cannot accurately retrieve strong temporal changes
between the available high-resolution image at base date and the predicted one.
This study proposes an Object-Based Spatial Unmixing Model (OBSUM), which
incorporates object-based image analysis and spatial unmixing, to overcome the
two abovementioned problems. OBSUM consists of one preprocessing step and three
fusion steps, i.e., object-level unmixing, object-level residual compensation,
and pixel-level residual compensation. OBSUM can be applied using only one fine
image at the base date and one coarse image at the prediction date, without the
need for a coarse image at the base date. The performance of OBSUM was compared
with five representative spatiotemporal fusion methods. The experimental
results demonstrated that OBSUM outperformed the other methods in terms of both
accuracy indices and visual quality over the time series. Furthermore, OBSUM
also achieved satisfactory results in two typical remote sensing applications.
Therefore, it has great potential to generate accurate and high-resolution
time-series observations for supporting various remote sensing applications.
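The object-level unmixing step described in the abstract follows the standard spatial unmixing formulation: each coarse pixel's reflectance is modeled as the fraction-weighted sum of per-class spectra, which are recovered by least squares. The sketch below illustrates that generic formulation only; it is not OBSUM's actual implementation, and all function and variable names are assumptions.

```python
import numpy as np

def spatial_unmix(coarse, fractions):
    """Estimate per-class spectra from coarse pixels via least squares.

    coarse    : (n_coarse_pixels, n_bands) coarse-image reflectance
    fractions : (n_coarse_pixels, n_classes) class fractions inside each
                coarse pixel, derived from the fine-image classification
    Returns an (n_classes, n_bands) matrix of class-level spectra S
    solving fractions @ S ~= coarse in the least-squares sense.
    """
    spectra, *_ = np.linalg.lstsq(fractions, coarse, rcond=None)
    return spectra

# Toy example: 3 coarse pixels, 2 classes, 1 band, exact fractions.
fractions = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [0.5, 0.5]])
coarse = np.array([[0.2], [0.6], [0.4]])
spectra = spatial_unmix(coarse, fractions)
# Recovers the class spectra ≈ [[0.2], [0.6]]
```

In a full pipeline, the unmixed class spectra would then be mapped back to fine pixels through the object/classification map, with the residual-compensation steps correcting remaining pixel-level errors.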
Related papers
- DVMNet: Computing Relative Pose for Unseen Objects Beyond Hypotheses [59.51874686414509]
Current approaches approximate the continuous pose representation with a large number of discrete pose hypotheses.
We present a Deep Voxel Matching Network (DVMNet) that eliminates the need for pose hypotheses and computes the relative object pose in a single pass.
Our method delivers more accurate relative pose estimates for novel objects at a lower computational cost compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-03-20T15:41:32Z) - Physics-Inspired Degradation Models for Hyperspectral Image Fusion [61.743696362028246]
Most fusion methods solely focus on the fusion algorithm itself and overlook the degradation models.
We propose physics-inspired degradation models (PIDM) to model the degradation of LR-HSI and HR-MSI.
Our proposed PIDM can boost the fusion performance of existing fusion methods in practical scenarios.
arXiv Detail & Related papers (2024-02-04T09:07:28Z) - View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z) - Detecting changes to sub-diffraction objects with quantum-optimal speed
and accuracy [0.8409980020848168]
We evaluate the best possible average latency, for a fixed false alarm rate, for sub-diffraction incoherent imaging.
We find that direct focal-plane detection of the incident optical intensity achieves sub-optimal detection latencies.
We verify these results via Monte Carlo simulation of the change detection procedure and quantify a growing gap between the conventional and quantum-optimal receivers.
arXiv Detail & Related papers (2023-08-14T16:48:18Z) - Hyperspectral and Multispectral Image Fusion Using the Conditional
Denoising Diffusion Probabilistic Model [18.915369996829984]
We propose a deep fusion method based on the conditional denoising diffusion probabilistic model, called DDPM-Fus.
Experiments conducted on one indoor and two remote sensing datasets show the superiority of the proposed model when compared with other advanced deep learning-based fusion methods.
arXiv Detail & Related papers (2023-07-07T07:08:52Z) - Cloud removal Using Atmosphere Model [7.259230333873744]
Cloud removal is an essential task in remote sensing data analysis.
We propose to use scattering model for temporal sequence of images of any scene in the framework of low rank and sparse models.
We develop a semi-realistic simulation method to produce cloud cover so that various methods can be quantitatively analysed.
arXiv Detail & Related papers (2022-10-05T01:29:19Z) - Understanding the Impact of Image Quality and Distance of Objects to
Object Detection Performance [11.856281907276145]
This paper examines the impact of spatial and amplitude resolution, as well as object distance, on object detection accuracy and computational cost.
We develop a resolution-adaptive variant of YOLOv5 (RA-YOLO), which varies the number of scales in the feature pyramid and detection head based on the spatial resolution of the input image.
arXiv Detail & Related papers (2022-09-17T04:05:01Z) - Decoupling and Recoupling Spatiotemporal Representation for RGB-D-based
Motion Recognition [62.46544616232238]
Previous motion recognition methods have achieved promising performance through the tightly coupled multi-temporal representation.
We propose to decouple and recouple the spatiotemporal representation for RGB-D-based motion recognition.
arXiv Detail & Related papers (2021-12-16T18:59:47Z) - Learning to Estimate Hidden Motions with Global Motion Aggregation [71.12650817490318]
Occlusions pose a significant challenge to optical flow algorithms that rely on local evidence.
We introduce a global motion aggregation module to find long-range dependencies between pixels in the first image.
We demonstrate that the optical flow estimates in the occluded regions can be significantly improved without damaging the performance in non-occluded regions.
arXiv Detail & Related papers (2021-04-06T10:32:03Z) - Predicting Landsat Reflectance with Deep Generative Fusion [2.867517731896504]
Public satellite missions are commonly bound to a trade-off between spatial and temporal resolution.
This hinders their potential to assist vegetation monitoring or humanitarian actions.
We probe the potential of deep generative models to produce high-resolution optical imagery by fusing products with different spatial and temporal characteristics.
arXiv Detail & Related papers (2020-11-09T21:06:04Z) - Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
arXiv Detail & Related papers (2020-03-15T23:07:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.