Exploring the Limits of Synthetic Creation of Solar EUV Images via
Image-to-Image Translation
- URL: http://arxiv.org/abs/2208.09512v1
- Date: Fri, 19 Aug 2022 18:58:36 GMT
- Title: Exploring the Limits of Synthetic Creation of Solar EUV Images via
Image-to-Image Translation
- Authors: Valentina Salvatelli, Luiz F. G. dos Santos, Souvik Bose, Brad
Neuberg, Mark C. M. Cheung, Miho Janvier, Meng Jin, Yarin Gal, Atilim Gunes
Baydin
- Abstract summary: The Solar Dynamics Observatory (SDO) has been daily producing terabytes of observational data from the Sun.
The idea of using image-to-image translation to virtually produce extreme ultra-violet channels has been proposed in several recent studies.
This paper investigates the potential and limitations of such a deep learning approach by focusing on the permutation of four channels and an encoder-decoder based architecture.
- Score: 24.21750759187231
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Solar Dynamics Observatory (SDO), a NASA multi-spectral decade-long
mission that has been daily producing terabytes of observational data from the
Sun, has been recently used as a use-case to demonstrate the potential of
machine learning methodologies and to pave the way for future deep-space
mission planning. In particular, the idea of using image-to-image translation
to virtually produce extreme ultra-violet channels has been proposed in several
recent studies, as a way both to enhance missions with fewer available channels
and to alleviate the challenges due to the low downlink rate in deep space.
This paper investigates the potential and the limitations of such a deep
learning approach by focusing on the permutation of four channels and an
encoder--decoder based architecture, with particular attention to how
morphological traits and brightness of the solar surface affect the neural
network predictions. In this work we want to answer the question: can synthetic
images of the solar corona produced via image-to-image translation be used for
scientific studies of the Sun? The analysis highlights that the neural network
produces high-quality images over three orders of magnitude in count rate
(pixel intensity) and can generally reproduce the covariance across channels
within a 1% error. However, the model performance drastically diminishes for
extremely energetic events such as flares, and we argue that this is because
the rarity of such events poses a challenge to model training.
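The abstract describes an encoder-decoder network that synthesizes a missing SDO/AIA EUV channel from the remaining observed channels, and reports that the synthetic images reproduce the cross-channel covariance to within roughly a 1% error. The abstract does not spell out the architecture or the metric implementation, so the following PyTorch sketch is only a minimal, assumption-level illustration of that setup (three observed channels mapped to one synthetic channel, plus a covariance-error check); the layer sizes, the normalization of count rates, and the training loss are placeholders, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's architecture): an encoder-decoder
# that synthesizes one AIA EUV channel from three observed channels.
import torch
import torch.nn as nn


class ChannelTranslator(nn.Module):
    def __init__(self, in_channels=3, out_channels=1):
        super().__init__()
        # Encoder: progressively downsample the multi-channel solar image.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution, one output channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def _cov(x, y):
    x, y = x.flatten().double(), y.flatten().double()
    return torch.mean((x - x.mean()) * (y - y.mean()))


def covariance_relative_error(real_a, synth_a, real_b):
    """Relative error of the covariance between channels A and B when channel A
    is replaced by its synthetic counterpart (the kind of quantity behind the
    paper's ~1% cross-channel covariance claim)."""
    c_true = _cov(real_a, real_b)
    return (torch.abs(_cov(synth_a, real_b) - c_true) / torch.abs(c_true)).item()


model = ChannelTranslator()
x = torch.randn(1, 3, 256, 256)   # three observed channels (normalized counts)
synthetic = model(x)              # synthetic fourth channel
```

In practice the inputs would be log-scaled or otherwise normalized AIA count rates and training would minimize a pixel-wise reconstruction loss; both choices are assumptions here, not details taken from the paper.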
Related papers
- A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations [37.845442465099396]
This paper presents a novel statistical model that captures nuisance fluctuations using a multi-scale approach.
It integrates into an interpretable, end-to-end learnable framework for simultaneous exoplanet detection and flux estimation.
The proposed approach is computationally efficient, robust to varying data quality, and well suited for large-scale observational surveys.
arXiv Detail & Related papers (2025-03-21T13:07:55Z)
- Geometry-guided Cross-view Diffusion for One-to-many Cross-view Image Synthesis [48.945931374180795]
This paper presents a novel approach for cross-view synthesis aimed at generating plausible ground-level images from corresponding satellite imagery or vice versa.
We refer to these tasks as satellite-to-ground (Sat2Grd) and ground-to-satellite (Grd2Sat) synthesis, respectively.
arXiv Detail & Related papers (2024-12-04T13:47:51Z)
- SaccadeDet: A Novel Dual-Stage Architecture for Rapid and Accurate Detection in Gigapixel Images [50.742420049839474]
'SaccadeDet' is an innovative architecture for gigapixel-level object detection, inspired by the saccadic movement of the human eye.
Our approach, evaluated on the PANDA dataset, achieves an 8x speed increase over the state-of-the-art methods.
It also demonstrates significant potential in gigapixel-level pathology analysis through its application to Whole Slide Imaging.
arXiv Detail & Related papers (2024-07-25T11:22:54Z)
- Solar synthetic imaging: Introducing denoising diffusion probabilistic models on SDO/AIA data [0.0]
This study proposes using generative deep learning models, specifically a Denoising Diffusion Probabilistic Model (DDPM), to create synthetic images of solar phenomena.
By employing a dataset from the AIA instrument aboard the SDO spacecraft, we aim to address the data scarcity issue.
The DDPM's performance is evaluated using cluster metrics, Frechet Inception Distance (FID), and F1-score, showcasing promising results in generating realistic solar imagery (see the FID sketch after this list).
arXiv Detail & Related papers (2024-04-03T08:18:45Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- A Comparative Study on Generative Models for High Resolution Solar Observation Imaging [59.372588316558826]
This work investigates capabilities of current state-of-the-art generative models to accurately capture the data distribution behind observed solar activity states.
Using distributed training on supercomputers, we are able to train generative models at up to 1024x1024 resolution that produce high-quality samples which human experts cannot distinguish from real observations.
arXiv Detail & Related papers (2023-04-14T14:40:32Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood (see the coupling-layer sketch after this list).
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- Unsupervised Discovery of Semantic Concepts in Satellite Imagery with Style-based Wavelet-driven Generative Models [27.62417543307831]
We present the first pre-trained style- and wavelet-based GAN model that can synthesize a wide gamut of realistic satellite images.
We show that by analyzing the intermediate activations of our network, one can discover a multitude of interpretable semantic directions.
arXiv Detail & Related papers (2022-08-03T14:19:24Z)
- Sat-NeRF: Learning Multi-View Satellite Photogrammetry With Transient Objects and Shadow Modeling Using RPC Cameras [10.269997499911668]
We introduce the Satellite Neural Radiance Field (Sat-NeRF), a new end-to-end model for learning multi-view satellite photogrammetry in the wild.
Sat-NeRF combines some of the latest trends in neural rendering with native satellite camera models.
We evaluate Sat-NeRF using WorldView-3 images from different locations and stress the advantages of applying a bundle adjustment to the satellite camera models prior to training.
arXiv Detail & Related papers (2022-03-16T19:18:46Z)
- Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction [51.80191416661064]
We propose a novel vision transformer with latent variables following an informative energy-based prior for salient object detection.
Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation.
With the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image.
arXiv Detail & Related papers (2021-12-27T06:04:33Z)
- A Trainable Spectral-Spatial Sparse Coding Model for Hyperspectral Image Restoration [36.525810477650026]
Hyperspectral imaging offers new perspectives for diverse applications.
The lack of accurate ground-truth "clean" hyperspectral signals on the spot makes restoration tasks challenging.
In this paper, we advocate for a hybrid approach based on sparse coding principles.
arXiv Detail & Related papers (2021-11-18T14:16:04Z)
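The "Solar synthetic imaging" entry above reports evaluating its DDPM with the Frechet Inception Distance (FID). As a reference for how that score is typically computed from feature statistics of real and generated image sets, here is a minimal sketch; the feature extractor, feature dimensionality, and sample counts are assumptions, not that paper's evaluation pipeline.

```python
# Minimal FID sketch (assumes image features, e.g. Inception embeddings, are
# already extracted); not the evaluation code of the cited paper.
import numpy as np
from scipy.linalg import sqrtm


def frechet_inception_distance(feats_real, feats_fake):
    """feats_*: (n_samples, n_features) arrays of image embeddings."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)      # matrix square root of the product
    if np.iscomplexobj(covmean):        # discard tiny numerical imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))


# Example with random placeholder features (a real evaluation would use
# embeddings of observed and generated solar images).
rng = np.random.default_rng(0)
fid = frechet_inception_distance(rng.normal(size=(256, 64)),
                                 rng.normal(size=(256, 64)))
```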
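The "Unsupervised Domain Transfer with Conditional Invertible Neural Networks" entry notes that cycle consistency follows directly from an invertible architecture. The toy additive-coupling layer below illustrates that property (applying the forward map and then its inverse recovers the input up to floating-point error); it is a generic illustration, not the cINN used in that paper.

```python
# Toy additive-coupling layer: invertible by construction, so the round trip
# input -> forward -> inverse is exact up to floating-point error.
import torch
import torch.nn as nn


class AdditiveCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Shift the second half of the features by a function of the first half.
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim // 2))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1, x2 + self.net(x1)], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        return torch.cat([y1, y2 - self.net(y1)], dim=-1)


layer = AdditiveCoupling(dim=8)
x = torch.randn(4, 8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)  # cycle consistency
```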
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences.