Explainable, Physics Aware, Trustworthy AI Paradigm Shift for Synthetic
Aperture Radar
- URL: http://arxiv.org/abs/2301.03589v1
- Date: Mon, 9 Jan 2023 09:22:13 GMT
- Title: Explainable, Physics Aware, Trustworthy AI Paradigm Shift for Synthetic
Aperture Radar
- Authors: Mihai Datcu, Zhongling Huang, Andrei Anghel, Juanping Zhao, Remus
Cacoveanu
- Abstract summary: We propose a change of paradigm for explainability in data science for the case of Synthetic Aperture Radar (SAR) data.
It aims to use explainable data transformations based on well-established models to generate inputs for AI methods.
- Score: 5.164409209168982
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recognition or understanding of the scenes observed with a SAR system
requires a broader range of cues, beyond the spatial context. These encompass
but are not limited to: imaging geometry, imaging mode, properties of the
Fourier spectrum of the images or the behavior of the polarimetric signatures.
In this paper, we propose a change of paradigm for explainability in data
science for the case of Synthetic Aperture Radar (SAR) data to ground the
explainable AI for SAR. It aims to use explainable data transformations based
on well-established models to generate inputs for AI methods, to provide
knowledgeable feedback for the training process, and to learn or improve
high-complexity unknown or unformalized models from the data. First, we
introduce a representation of the SAR system with physical layers: i)
instrument and platform, ii) image formation, iii) scattering signatures and
objects, which can be integrated with an AI model for hybrid modeling.
Next, illustrative examples demonstrate how hybrid modeling can be achieved
for SAR image understanding. The perspectives of trustworthy models and
supplementary explanations are then discussed. Finally, we conclude that the
proposed concept is applicable to the entire class of coherent imaging
sensors and other computational imaging systems.
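To make the layered hybrid-modeling idea concrete, here is a minimal sketch, not code from the paper: the sub-look transform, layer sizes, and all names are illustrative assumptions. A well-established physics transform (a sub-aperture, or sub-look, decomposition of the azimuth Fourier spectrum) generates explainable inputs, and a small neural network then learns from those channels instead of raw pixels.

```python
import numpy as np
import torch
import torch.nn as nn

def sublook_decomposition(slc, n_looks=3):
    """Physics layer (assumed example): split the azimuth Fourier spectrum of a
    single-look complex (SLC) image into n_looks sub-bands and return each
    sub-look's intensity as an explainable feature channel."""
    rows = slc.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft(slc, axis=0), axes=0)
    edges = np.linspace(0, rows, n_looks + 1, dtype=int)
    looks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spectrum)
        band[lo:hi] = spectrum[lo:hi]       # keep one spectral sub-band
        look = np.fft.ifft(np.fft.ifftshift(band, axes=0), axis=0)
        looks.append(np.abs(look))          # sub-look intensity
    return np.stack(looks)                  # (n_looks, H, W)

class HybridSARNet(nn.Module):
    """AI layer: a small CNN that learns from the physics-derived channels."""
    def __init__(self, n_looks=3, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_looks, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, n_classes),
        )
    def forward(self, x):
        return self.net(x)

# Usage with a random stand-in for a complex SAR patch:
slc = (np.random.randn(64, 64) + 1j * np.random.randn(64, 64)).astype(np.complex64)
feats = torch.from_numpy(sublook_decomposition(slc)).float().unsqueeze(0)
logits = HybridSARNet()(feats)              # (1, n_classes)
```

The design point is that the inputs to the AI model stay physically interpretable: each channel corresponds to a known sub-band of the azimuth spectrum, so an explanation of the network's behavior can refer back to the imaging physics.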
Related papers
- Electrooptical Image Synthesis from SAR Imagery Using Generative Adversarial Networks [0.0]
The results show significant improvements in interpretability, making SAR data more accessible for analysts familiar with EO imagery.
Our research contributes to the field of remote sensing by bridging the gap between SAR and EO imagery, offering a novel tool for enhanced data interpretation.
arXiv Detail & Related papers (2024-09-07T14:31:46Z)
- SAR to Optical Image Translation with Color Supervised Diffusion Model [5.234109158596138]
This paper introduces an innovative generative model designed to transform SAR images into more intelligible optical images.
We employ SAR images as conditional guides in the sampling process and integrate color supervision to counteract color shift issues.
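A rough sketch of what such conditioning with color supervision could look like; this is hypothetical, the paper's exact architecture and loss may differ, and TinyDenoiser is a stand-in for a real U-Net:

```python
import torch
import torch.nn.functional as F

class TinyDenoiser(torch.nn.Module):
    """Stand-in denoiser; a real model would be a U-Net with timestep embedding."""
    def __init__(self, in_ch=4, out_ch=3):
        super().__init__()
        self.net = torch.nn.Conv2d(in_ch, out_ch, 3, padding=1)
    def forward(self, x, t):            # t is ignored by this stub
        return self.net(x)

def training_step(denoiser, x0_optical, sar_cond, alphas_cumprod, lam=0.1):
    b = x0_optical.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,))
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0_optical)
    # Forward diffusion: corrupt the optical target at a random timestep.
    x_t = a_bar.sqrt() * x0_optical + (1 - a_bar).sqrt() * noise
    # The SAR image guides denoising as an extra conditioning channel.
    eps_pred = denoiser(torch.cat([x_t, sar_cond], dim=1), t)
    loss_eps = F.mse_loss(eps_pred, noise)          # standard DDPM objective
    # Color supervision: recover a clean-image estimate and penalize
    # per-channel (color) drift from the ground-truth optical image.
    x0_pred = (x_t - (1 - a_bar).sqrt() * eps_pred) / a_bar.sqrt()
    loss_color = F.l1_loss(x0_pred.mean(dim=(2, 3)), x0_optical.mean(dim=(2, 3)))
    return loss_eps + lam * loss_color

alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)  # toy noise schedule
x0 = torch.rand(2, 3, 64, 64)     # optical targets in [0, 1]
sar = torch.rand(2, 1, 64, 64)    # co-registered SAR conditions
loss = training_step(TinyDenoiser(), x0, sar, alphas_cumprod)
```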
arXiv Detail & Related papers (2024-07-24T01:11:28Z)
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We leverage the diffusion model, a novel class of state-of-the-art (SOTA) generative models capable of modeling intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
- SODA: Bottleneck Diffusion Models for Representation Learning [75.7331354734152]
We introduce SODA, a self-supervised diffusion model, designed for representation learning.
The model incorporates an image encoder, which distills a source view into a compact representation, that guides the generation of related novel views.
We show that by imposing a tight bottleneck between the encoder and a denoising decoder, we can turn diffusion models into strong representation learners.
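A minimal sketch of the bottleneck mechanism, illustrative only (module sizes and names are assumptions): the denoising decoder sees the source view solely through a compact latent, so that latent is forced to carry the useful information.

```python
import torch
import torch.nn as nn

class BottleneckEncoder(nn.Module):
    """Distills a source view into a compact latent z (the bottleneck)."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim),
        )
    def forward(self, x):
        return self.net(x)

class ConditionedDenoiser(nn.Module):
    """Denoises a novel view; the source view enters only through z."""
    def __init__(self, z_dim=128, ch=32):
        super().__init__()
        self.conv_in = nn.Conv2d(3, ch, 3, padding=1)
        self.film = nn.Linear(z_dim, ch * 2)   # FiLM-style modulation from z
        self.conv_out = nn.Conv2d(ch, 3, 3, padding=1)
    def forward(self, x_noisy, z):
        h = self.conv_in(x_noisy)
        scale, shift = self.film(z).chunk(2, dim=1)
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return self.conv_out(torch.relu(h))

# The latent z that makes denoising succeed doubles as a representation.
src = torch.randn(2, 3, 64, 64)          # source view
tgt_noisy = torch.randn(2, 3, 64, 64)    # noisy related view
z = BottleneckEncoder()(src)
eps_pred = ConditionedDenoiser()(tgt_noisy, z)   # (2, 3, 64, 64)
```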
arXiv Detail & Related papers (2023-11-29T18:53:34Z)
- HawkI: Homography & Mutual Information Guidance for 3D-free Single Image to Aerial View [67.8213192993001]
We present HawkI, a method for synthesizing aerial-view images from text and an exemplar image.
HawkI blends visual features from the input image into a pretrained text-to-2D-image stable diffusion model.
At inference, HawkI employs a unique mutual information guidance formulation to steer the generated image towards faithfully replicating the semantic details of the input image.
arXiv Detail & Related papers (2023-11-27T01:41:25Z)
- SatDM: Synthesizing Realistic Satellite Image with Semantic Layout Conditioning using Diffusion Models [0.0]
Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated significant promise in synthesizing realistic images from semantic layouts.
In this paper, a conditional DDPM capable of taking a semantic map and generating high-quality, diverse, and correspondingly accurate satellite images is implemented.
The effectiveness of our proposed model is validated using a meticulously labeled dataset introduced within the context of this study.
arXiv Detail & Related papers (2023-09-28T19:39:13Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- DiVAE: Photorealistic Images Synthesis with Denoising Diffusion Decoder [73.1010640692609]
We propose a VQ-VAE architecture model with a diffusion decoder (DiVAE) to work as the reconstructing component in image synthesis.
Our model achieves state-of-the-art results and, in particular, generates more photorealistic images.
arXiv Detail & Related papers (2022-06-01T10:39:12Z)
- Conditional Generation of Synthetic Geospatial Images from Pixel-level and Feature-level Inputs [0.0]
We present a conditional generative model, called VAE-Info-cGAN, for synthesizing semantically rich images simultaneously conditioned on a pixel-level condition (PLC) and a feature-level condition (FLC).
The proposed model can accurately generate various forms of macroscopic aggregates across different geographic locations while conditioned only on an atemporal representation of the road network.
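A toy sketch of this dual conditioning (all layer sizes and names are assumptions, not the paper's architecture): the FLC enters as a global vector while the PLC map is fused spatially.

```python
import torch
import torch.nn as nn

class DualCondGenerator(nn.Module):
    """Generator conditioned on a global FLC vector and a spatial PLC map."""
    def __init__(self, z_dim=64, flc_dim=16, hidden=32):
        super().__init__()
        self.embed = nn.Linear(z_dim + flc_dim, hidden)   # fuse noise + FLC
        self.net = nn.Sequential(
            nn.Conv2d(hidden + 1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, z, plc, flc):
        b, _, h, w = plc.shape
        g = self.embed(torch.cat([z, flc], dim=1))        # global code
        g = g[:, :, None, None].expand(-1, -1, h, w)      # broadcast spatially
        return self.net(torch.cat([g, plc], dim=1))       # fuse with PLC map

z, flc = torch.randn(2, 64), torch.randn(2, 16)
plc = torch.randn(2, 1, 64, 64)           # e.g. a rasterized road-network map
img = DualCondGenerator()(z, plc, flc)    # (2, 3, 64, 64)
```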
arXiv Detail & Related papers (2021-09-11T06:58:19Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
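A toy version of a cross-reconstruction objective (illustrative only; the full model additionally has co-attention and adversarial terms):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two views of different dimensionality mapped into 32-dim latent spaces.
enc_a, enc_b = nn.Linear(100, 32), nn.Linear(80, 32)
dec_a, dec_b = nn.Linear(32, 100), nn.Linear(32, 80)

def cross_reconstruction_loss(xa, xb):
    za, zb = enc_a(xa), enc_b(xb)
    # Reconstruct each view from the OTHER view's latent, which pushes both
    # latents toward the information the two views have in common.
    return F.mse_loss(dec_b(za), xb) + F.mse_loss(dec_a(zb), xa)

xa, xb = torch.randn(4, 100), torch.randn(4, 80)
print(cross_reconstruction_loss(xa, xb).item())
```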
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- VAE-Info-cGAN: Generating Synthetic Images by Combining Pixel-level and Feature-level Geospatial Conditional Inputs [0.0]
We present a conditional generative model for synthesizing semantically rich images simultaneously conditioned on a pixel-level condition (PLC) and a feature-level condition (FLC).
Experiments on a GPS dataset show that the proposed model can accurately generate various forms of macroscopic aggregates across different geographic locations.
arXiv Detail & Related papers (2020-12-08T03:46:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.