Homogenising SoHO/EIT and SDO/AIA 171 Å Images: A Deep Learning Approach
- URL: http://arxiv.org/abs/2308.10322v1
- Date: Sun, 20 Aug 2023 17:17:27 GMT
- Title: Homogenising SoHO/EIT and SDO/AIA 171 Å Images: A Deep Learning Approach
- Authors: Subhamoy Chatterjee, Andrés Muñoz-Jaramillo, Maher Dayeh, Hazel M.
Bain, Kimberly Moreland
- Abstract summary: Extreme Ultraviolet images of the Sun are becoming an integral part of space weather prediction tasks.
We utilize the temporal overlap of the SoHO/EIT and SDO/AIA 171 Å surveys to train an ensemble of deep learning models for creating a single homogeneous survey of EUV images for 2 solar cycles.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extreme Ultraviolet images of the Sun are becoming an integral part of space
weather prediction tasks. However, having different surveys requires the
development of instrument-specific prediction algorithms. As an alternative, it
is possible to combine multiple surveys to create a homogeneous dataset. In
this study, we utilize the temporal overlap of SoHO/EIT and SDO/AIA 171 Å
surveys to train an ensemble of deep learning models for creating a single
homogeneous survey of EUV images for 2 solar cycles. Prior applications of deep
learning have focused on validating the homogeneity of the output while
overlooking the systematic estimation of uncertainty. We use an approach called
'Approximate Bayesian Ensembling' to generate an ensemble of models whose
uncertainty mimics that of a fully Bayesian neural network at a fraction of the
cost. We find that ensemble uncertainty goes down as the training set size
increases. Additionally, we show that the model ensemble adds immense value to
the prediction by showing higher uncertainty in test data that are not well
represented in the training data.
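As a concrete illustration of the ensembling strategy named in the abstract, the sketch below implements anchored ensembling in the spirit of 'Approximate Bayesian Ensembling': each ensemble member keeps a frozen copy of its prior-sampled initial weights and is regularised towards that anchor instead of towards zero, so the spread across members approximates posterior predictive uncertainty. The tiny convolutional translator, prior scale, noise level, and random tensors standing in for EIT/AIA image pairs are illustrative assumptions, not the authors' architecture or data pipeline.

```python
# Minimal sketch of anchored ensembling ("Approximate Bayesian Ensembling"):
# each member is regularised towards its own prior-sampled anchor weights,
# and ensemble spread serves as the predictive uncertainty.
import copy
import torch
import torch.nn as nn


class TinyTranslator(nn.Module):
    """Toy EIT-to-AIA style image translator (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def make_anchored_member(prior_std=0.1):
    """Create a model and a frozen copy of its prior-sampled initial weights."""
    model = TinyTranslator()
    for p in model.parameters():                 # re-sample from N(0, prior_std^2)
        nn.init.normal_(p, mean=0.0, std=prior_std)
    anchor = copy.deepcopy(model).requires_grad_(False)
    return model, anchor


def anchored_loss(model, anchor, pred, target,
                  data_noise=0.01, prior_std=0.1, n_data=1000):
    """MSE data term plus a pull towards the anchor weights (not towards zero)."""
    mse = nn.functional.mse_loss(pred, target)
    reg = sum(((p - a) ** 2).sum()
              for p, a in zip(model.parameters(), anchor.parameters()))
    # Regularisation strength roughly follows the anchored-ensemble recipe:
    # lambda ~ noise_variance / (n_data * prior_variance).
    lam = data_noise / (n_data * prior_std ** 2)
    return mse + lam * reg


# --- toy usage -------------------------------------------------------------
ensemble = [make_anchored_member() for _ in range(5)]
x = torch.randn(4, 1, 64, 64)   # stand-in for EIT-like inputs
y = torch.randn(4, 1, 64, 64)   # stand-in for AIA-like targets

for model, anchor in ensemble:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(10):          # a few illustrative training steps
        opt.zero_grad()
        loss = anchored_loss(model, anchor, model(x), y)
        loss.backward()
        opt.step()

# Ensemble mean is the homogenised prediction; per-pixel spread is the uncertainty.
with torch.no_grad():
    preds = torch.stack([m(x) for m, _ in ensemble])
mean_pred, uncertainty = preds.mean(dim=0), preds.std(dim=0)
```

The per-pixel standard deviation across members is the uncertainty map that, per the abstract, should grow for test inputs poorly represented in the training data.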
Related papers
- Joint Prediction Regions for time-series models [0.0]
It is an easy task to compute Joint Prediction Regions (JPRs) when the data is IID.
This project aims to implement Wolf and Wunderli's method for constructing JPRs and compare it with other methods.
arXiv Detail & Related papers (2024-05-14T02:38:49Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- SurfEmb: Dense and Continuous Correspondence Distributions for Object Pose Estimation with Learnt Surface Embeddings [2.534402217750793]
We present an approach to learn dense, continuous 2D-3D correspondence distributions over the surface of objects from data.
We also present a new method for 6D pose estimation of rigid objects using the learnt distributions to sample, score and refine pose hypotheses.
arXiv Detail & Related papers (2021-11-26T13:39:38Z)
- Unsupervised Scale-consistent Depth Learning from Video [131.3074342883371]
We propose a monocular depth estimator SC-Depth, which requires only unlabelled videos for training.
Thanks to the capability of scale-consistent prediction, we show that our monocular-trained deep networks are readily integrated into the ORB-SLAM2 system.
The proposed hybrid Pseudo-RGBD SLAM shows compelling results in KITTI, and it generalizes well to the KAIST dataset without additional training.
arXiv Detail & Related papers (2021-05-25T02:17:56Z)
- GRAFFL: Gradient-free Federated Learning of a Bayesian Generative Model [8.87104231451079]
This paper presents the first gradient-free federated learning framework called GRAFFL.
It uses implicit information derived from each participating institution to learn posterior distributions of parameters.
We propose the GRAFFL-based Bayesian mixture model to serve as a proof-of-concept of the framework.
arXiv Detail & Related papers (2020-08-29T07:19:44Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables, thereby obtaining the diversity in output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Uncertainty Estimation for End-To-End Learned Dense Stereo Matching via Probabilistic Deep Learning [0.0]
A novel probabilistic neural network is presented for the task of joint depth and uncertainty estimation from epipolar rectified stereo image pairs.
The network learns a probability distribution from which parameters are sampled for every prediction.
The quality of the estimated depth and uncertainty information is assessed in an extensive evaluation on three different datasets.
arXiv Detail & Related papers (2020-02-10T11:27:52Z)