Domain-Adaptive Neural Posterior Estimation for Strong Gravitational Lens Analysis
- URL: http://arxiv.org/abs/2410.16347v1
- Date: Mon, 21 Oct 2024 14:12:39 GMT
- Title: Domain-Adaptive Neural Posterior Estimation for Strong Gravitational Lens Analysis
- Authors: Paxson Swierc, Marcos Tamargo-Arizmendi, Aleksandra Ćiprijanović, Brian D. Nord
- Abstract summary: We study the efficacy of neural posterior estimation (NPE) in combination with unsupervised domain adaptation (UDA).
We find that combining UDA and NPE improves the accuracy of the inference by 1-2 orders of magnitude.
We anticipate that this combination of approaches will help enable future applications of NPE models to real observational data.
- Score: 41.94295877935867
- License:
- Abstract: Modeling strong gravitational lenses is prohibitively expensive for modern and next-generation cosmic survey data. Neural posterior estimation (NPE), a simulation-based inference (SBI) approach, has been studied as an avenue for efficient analysis of strong lensing data. However, NPE has not been demonstrated to perform well on out-of-domain target data -- e.g., when trained on simulated data and then applied to real, observational data. In this work, we perform the first study of the efficacy of NPE in combination with unsupervised domain adaptation (UDA). The source domain is noiseless, and the target domain has noise mimicking modern cosmology surveys. We find that combining UDA and NPE improves the accuracy of the inference by 1-2 orders of magnitude and significantly improves the posterior coverage over an NPE model without UDA. We anticipate that this combination of approaches will help enable future applications of NPE models to real observational data.
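The following is a minimal, illustrative sketch of the NPE + UDA idea described in the abstract: a shared embedding network is trained with a posterior-estimation loss on labeled (simulated) source images while an unsupervised alignment penalty pulls source and target embeddings together. The architecture, the diagonal-Gaussian posterior head (standing in for the normalizing flow usually used in NPE), the RBF-kernel MMD penalty, and all names and sizes are assumptions for illustration, not details taken from the paper.

```python
# Minimal, illustrative sketch of NPE combined with unsupervised domain
# adaptation (UDA). Assumptions, not details from the paper: a small CNN
# embedding network, a diagonal-Gaussian posterior head standing in for the
# normalizing flow usually used in NPE, and an RBF-kernel maximum mean
# discrepancy (MMD) penalty aligning embeddings of labeled source images
# with unlabeled target images.
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Compress a lens image into a low-dimensional summary vector."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class GaussianPosteriorHead(nn.Module):
    """Map a summary vector to a diagonal-Gaussian posterior over lens parameters."""
    def __init__(self, latent_dim=64, n_params=5):
        super().__init__()
        self.mean = nn.Linear(latent_dim, n_params)
        self.log_var = nn.Linear(latent_dim, n_params)

    def forward(self, z):
        return self.mean(z), self.log_var(z)

def posterior_nll(theta, mean, log_var):
    """Negative log-likelihood of the true parameters under the predicted posterior."""
    return 0.5 * (log_var + (theta - mean) ** 2 / log_var.exp()).sum(dim=1).mean()

def mmd_rbf(a, b, bandwidth=1.0):
    """RBF-kernel maximum mean discrepancy between two batches of embeddings."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y) ** 2 / (2 * bandwidth ** 2)).mean()
    return k(a, a) + k(b, b) - 2 * k(a, b)

# One hypothetical training step: simulated source images carry true lens
# parameters; noisy target images are unlabeled and enter only the MMD term.
embed, head = EmbeddingNet(), GaussianPosteriorHead()
opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=1e-3)

x_src, theta_src = torch.randn(8, 1, 64, 64), torch.randn(8, 5)  # labeled source batch
x_tgt = torch.randn(8, 1, 64, 64)                                # unlabeled target batch

opt.zero_grad()
z_src, z_tgt = embed(x_src), embed(x_tgt)
mean, log_var = head(z_src)
loss = posterior_nll(theta_src, mean, log_var) + 1.0 * mmd_rbf(z_src, z_tgt)
loss.backward()
opt.step()
```

In a full NPE pipeline the Gaussian head would typically be replaced by a conditional normalizing flow, and the weight on the alignment term tuned against posterior coverage on held-out data.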
Related papers
- Neural Network Prediction of Strong Lensing Systems with Domain Adaptation and Uncertainty Quantification [44.99833362998488]
Mean-variance Estimators (MVEs) are a common approach for obtaining aleatoric (data) uncertainties from a neural network prediction.
In this work, we perform the first study of the efficacy of MVEs in combination with unsupervised domain adaptation (UDA) on strong lensing data.
We find that adding UDA to MVE increases the accuracy on the target data by a factor of about two over an MVE model without UDA; a minimal sketch of the MVE loss appears after this list.
arXiv Detail & Related papers (2024-10-23T19:56:57Z) - Optimizing cnn-Bigru performance: Mish activation and comparative analysis with Relu [0.0]
Activation functions (AF) are fundamental components within neural networks, enabling them to capture complex patterns and relationships in the data.
This study illuminates the effectiveness of AF in elevating the performance of intrusion detection systems; a minimal implementation of the Mish activation appears after this list.
arXiv Detail & Related papers (2024-05-30T21:48:56Z) - Domain Adaptive Graph Neural Networks for Constraining Cosmological Parameters Across Multiple Data Sets [40.19690479537335]
We show that DA-GNN achieves higher accuracy and robustness on cross-dataset tasks.
This shows that DA-GNNs are a promising method for extracting domain-independent cosmological information.
arXiv Detail & Related papers (2023-11-02T20:40:21Z) - Physics Inspired Hybrid Attention for SAR Target Recognition [61.01086031364307]
We propose a physics inspired hybrid attention (PIHA) mechanism and the once-for-all (OFA) evaluation protocol to address the issues.
PIHA leverages the high-level semantics of physical information to activate and guide feature groups that capture the local semantics of the target.
Our method outperforms other state-of-the-art approaches in 12 test scenarios with the same ASC parameters.
arXiv Detail & Related papers (2023-09-27T14:39:41Z) - Re-thinking Data Availability Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z) - Causal Reasoning in the Presence of Latent Confounders via Neural ADMG Learning [8.649109147825985]
Latent confounding has been a long-standing obstacle for causal reasoning from observational data.
We propose a novel neural causal model based on autoregressive flows for ADMG learning.
arXiv Detail & Related papers (2023-03-22T16:45:54Z) - Pre-training via Denoising for Molecular Property Prediction [53.409242538744444]
We describe a pre-training technique that utilizes large datasets of 3D molecular structures at equilibrium.
Inspired by recent advances in noise regularization, our pre-training objective is based on denoising.
arXiv Detail & Related papers (2022-05-31T22:28:34Z) - DeepMerge II: Building Robust Deep Learning Algorithms for Merging Galaxy Identification Across Domains [0.0]
In astronomy, neural networks are often trained on simulation data with the prospect of being used on telescope observations.
We show that the addition of each domain adaptation technique improves the performance of a classifier when compared to conventional deep learning algorithms.
We demonstrate this on two examples: between two Illustris-1 simulated datasets of distant merging galaxies, and between Illustris-1 simulated data of nearby merging galaxies and observed data from the Sloan Digital Sky Survey.
arXiv Detail & Related papers (2021-03-02T00:24:10Z) - Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate observational data collected offline, which is often abundantly available in practice, to improve sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
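For the "Neural Network Prediction of Strong Lensing Systems with Domain Adaptation and Uncertainty Quantification" entry above, the following is a minimal sketch of a mean-variance estimator (MVE): the network outputs a mean and a log-variance per parameter and is trained with the heteroscedastic Gaussian negative log-likelihood, so the predicted variance captures aleatoric (data) uncertainty. Names, sizes, and the omission of the UDA term are illustrative choices, not details from that paper.

```python
# Minimal sketch of a mean-variance estimator (MVE): the network predicts a
# mean and a log-variance for each lens parameter and is trained with the
# heteroscedastic Gaussian negative log-likelihood, so the predicted variance
# absorbs aleatoric (data) noise. Names and sizes are illustrative only.
import torch
import torch.nn as nn

class MVE(nn.Module):
    def __init__(self, in_dim=64, n_params=5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mean = nn.Linear(128, n_params)
        self.log_var = nn.Linear(128, n_params)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

def mve_loss(y, mean, log_var):
    # 0.5 * [log sigma^2 + (y - mu)^2 / sigma^2], averaged over batch and parameters
    return 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()

model = MVE()
x, y = torch.randn(16, 64), torch.randn(16, 5)
mean, log_var = model(x)
mve_loss(y, mean, log_var).backward()
```

A UDA variant would add an unsupervised alignment penalty, such as the MMD term in the sketch after the abstract, on an intermediate feature layer.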
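For the "Optimizing cnn-Bigru performance" entry above, Mish is the smooth activation f(x) = x · tanh(softplus(x)); the snippet below is a minimal stand-alone implementation (recent PyTorch versions also ship torch.nn.Mish).

```python
# Mish activation, f(x) = x * tanh(softplus(x)); recent PyTorch also ships nn.Mish.
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    return x * torch.tanh(F.softplus(x))

print(mish(torch.linspace(-3.0, 3.0, 7)))  # smooth, slightly negative for x < 0, unbounded above
```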