TSGAN: An Optical-to-SAR Dual Conditional GAN for Optical based SAR Temporal Shifting
- URL: http://arxiv.org/abs/2401.00440v2
- Date: Thu, 4 Jan 2024 09:43:33 GMT
- Title: TSGAN: An Optical-to-SAR Dual Conditional GAN for Optical based SAR Temporal Shifting
- Authors: Moien Rangzan, Sara Attarchi, Richard Gloaguen, Seyed Kazem Alavipanah
- Abstract summary: This study explores the lesser-investigated domain of Optical-to-SAR translation.
We propose a novel approach, termed SAR Temporal Shifting, which takes optical data from the desired timestamp along with SAR data from a different temporal point.
The model modifies the SAR data according to the changes observed in the optical data to generate the SAR data for the desired timestamp.
- Score: 0.6827423171182154
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In contrast to the well-investigated field of SAR-to-Optical translation,
this study explores the lesser-investigated domain of Optical-to-SAR
translation, a challenging field due to the ill-posed nature of this
translation. The complexity arises because a single optical image can map to
multiple SAR representations depending on the SAR viewing geometry. We propose
a novel approach, termed SAR Temporal Shifting, which takes optical data from
the desired timestamp together with SAR data from a different temporal point
but with the same viewing geometry as the expected SAR output, both
complemented by a change map of the optical data over the intervening period.
The model modifies the SAR data according to the changes observed in the
optical data to generate the SAR data for the desired timestamp. Our model, a
dual conditional
Generative Adversarial Network (GAN), named Temporal Shifting GAN (TSGAN),
incorporates a siamese encoder in both the Generator and the Discriminator. To
prevent the model from overfitting on the input SAR data, we employ a
change-weighted loss function. Our approach surpasses traditional translation
methods by eliminating the GAN fiction phenomenon (hallucinated content),
particularly in unchanged regions, resulting in higher SSIM and PSNR in these
areas. Additionally, modifications
to the Pix2Pix architecture and the inclusion of attention mechanisms have
enhanced the model's performance on all regions of the data. This research
paves the way for leveraging legacy optical datasets, the most abundant and
longstanding source of Earth imagery data, extending their use to SAR domains
and temporal analyses. To foster further research, we provide the code,
datasets used in our study, and a framework for generating paired SAR-Optical
datasets for new regions of interest. These resources are available on
github.com/moienr/TemporalGAN
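To make the two core ideas above concrete, the sketch below illustrates, in PyTorch, a siamese-style dual-conditional generator that encodes the optical image at the target date and the SAR image from another date with separate encoders, plus a change-weighted L1 loss that penalizes errors more heavily in changed pixels so the network cannot simply copy the input SAR. This is a minimal illustration written for this summary, not the authors' implementation (see github.com/moienr/TemporalGAN for the official code); all layer sizes, channel counts, and the weighting constant are illustrative assumptions, and for brevity the change map is used only in the loss, whereas the paper also supplies it to the network as a conditioning input.

```python
# Minimal sketch (not the authors' code) of the two ideas described in the abstract:
# (1) a generator conditioned on both optical(t2) and SAR(t1), encoded by separate
#     (siamese-style) encoders, and (2) a change-weighted reconstruction loss.
# Shapes, channel counts, and the weighting constant are illustrative assumptions.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Small convolutional encoder; one copy per modality forms the dual-encoder pair."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
    def forward(self, x):
        return self.net(x)

class DualConditionalGenerator(nn.Module):
    """Encodes optical(t2) and SAR(t1) separately, fuses the features, decodes SAR(t2)."""
    def __init__(self, opt_ch=3, sar_ch=1):
        super().__init__()
        self.opt_enc = TinyEncoder(opt_ch)   # optical image at the desired timestamp
        self.sar_enc = TinyEncoder(sar_ch)   # SAR image from another date, same viewing geometry
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, sar_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, optical_t2, sar_t1):
        fused = torch.cat([self.opt_enc(optical_t2), self.sar_enc(sar_t1)], dim=1)
        return self.decoder(fused)

def change_weighted_l1(pred_sar, true_sar, change_map, changed_weight=5.0):
    """L1 loss where pixels flagged as changed (change_map in [0, 1]) count more,
    discouraging the generator from simply copying the input SAR everywhere."""
    weights = 1.0 + changed_weight * change_map
    return (weights * (pred_sar - true_sar).abs()).mean()

# Usage sketch with dummy 256x256 tiles.
gen = DualConditionalGenerator()
optical_t2 = torch.randn(1, 3, 256, 256)
sar_t1 = torch.randn(1, 1, 256, 256)
sar_t2 = torch.randn(1, 1, 256, 256)      # ground-truth SAR at the desired timestamp
change_map = torch.rand(1, 1, 256, 256)   # optical change map between the two dates
loss = change_weighted_l1(gen(optical_t2, sar_t1), sar_t2, change_map)
```

The 1 + w * change_map weighting keeps a baseline penalty on unchanged pixels while emphasizing regions where the optical change map indicates the scene actually changed, which is the intent behind the change-weighted loss described above.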
Related papers
- Electrooptical Image Synthesis from SAR Imagery Using Generative Adversarial Networks [0.0]
The results show significant improvements in interpretability, making SAR data more accessible for analysts familiar with EO imagery.
Our research contributes to the field of remote sensing by bridging the gap between SAR and EO imagery, offering a novel tool for enhanced data interpretation.
arXiv Detail & Related papers (2024-09-07T14:31:46Z) - Conditional Brownian Bridge Diffusion Model for VHR SAR to Optical Image Translation [5.578820789388206]
This paper introduces a conditional image-to-image translation approach based on the Brownian Bridge Diffusion Model (BBDM).
We conducted comprehensive experiments on the MSAW dataset, a collection of paired SAR and optical images at 0.5 m Very-High-Resolution (VHR).
arXiv Detail & Related papers (2024-08-15T05:43:46Z) - Multi-Source and Test-Time Domain Adaptation on Multivariate Signals using Spatio-Temporal Monge Alignment [59.75420353684495]
Machine learning applications on signals, as in computer vision or biomedical data, often face challenges due to the variability across hardware devices or recording sessions.
In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate these variabilities.
We show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings.
arXiv Detail & Related papers (2024-07-19T13:33:38Z) - SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection.
Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets.
To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
arXiv Detail & Related papers (2024-03-11T09:20:40Z) - Deep learning-based deconvolution for interferometric radio transient
reconstruction [0.39259415717754914]
Radio astronomy facilities like LOFAR, MeerKAT/SKA, ASKAP/SKA, and the future SKA-LOW bring tremendous sensitivity in time and frequency.
These facilities enable advanced studies of radio transients, volatile by nature, that can be detected or missed in the data.
These transients are markers of high-energy accelerations of electrons and manifest in a wide range of temporal scales.
arXiv Detail & Related papers (2023-06-24T08:58:52Z) - Imbalanced Aircraft Data Anomaly Detection [103.01418862972564]
Anomaly detection in temporal data from sensors under aviation scenarios is a practical but challenging task.
We propose a Graphical Temporal Data Analysis framework.
It consists of three modules: Series-to-Image (S2I), Cluster-based Resampling Approach using Euclidean Distance (CRD), and Variance-Based Loss (VBL).
arXiv Detail & Related papers (2023-05-17T09:37:07Z) - A SAR speckle filter based on Residual Convolutional Neural Networks [68.8204255655161]
This work presents a novel method for filtering speckle noise from Sentinel-1 data by applying Deep Learning (DL) algorithms based on Convolutional Neural Networks (CNNs).
Compared with the state of the art, the obtained results show a clear improvement in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
arXiv Detail & Related papers (2021-04-19T14:43:07Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent-space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z) - SAR2SAR: a semi-supervised despeckling algorithm for SAR images [3.9490074068698]
A deep learning algorithm with self-supervision is proposed in this paper: SAR2SAR.
The strategy to adapt it to SAR despeckling is presented, based on a compensation of temporal changes and a loss function adapted to the statistics of speckle.
Results on real images are discussed, to show the potential of the proposed algorithm.
arXiv Detail & Related papers (2020-06-26T15:07:28Z) - Weakly-supervised land classification for coastal zone based on deep convolutional neural networks by incorporating dual-polarimetric characteristics into training dataset [1.0494061710470493]
We explore the performance of DCNNs on semantic segmentation using spaceborne polarimetric synthetic aperture radar (PolSAR) datasets.
The semantic segmentation task using PolSAR data can be categorized as weakly supervised learning when the characteristics of SAR data and data annotating procedures are factored in.
Three DCNN models, including SegNet, U-Net, and LinkNet, are implemented.
arXiv Detail & Related papers (2020-03-30T17:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.