Pixelated Reconstruction of Foreground Density and Background Surface
Brightness in Gravitational Lensing Systems using Recurrent Inference
Machines
- URL: http://arxiv.org/abs/2301.04168v2
- Date: Mon, 24 Apr 2023 14:57:12 GMT
- Title: Pixelated Reconstruction of Foreground Density and Background Surface
Brightness in Gravitational Lensing Systems using Recurrent Inference
Machines
- Authors: Alexandre Adam, Laurence Perreault-Levasseur, Yashar Hezaveh and Max
Welling
- Abstract summary: We use a neural network based on the Recurrent Inference Machine to reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps.
When compared to more traditional parametric models, the proposed method is significantly more expressive and can reconstruct complex mass distributions.
- Score: 116.33694183176617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling strong gravitational lenses in order to quantify the distortions in
the images of background sources and to reconstruct the mass density in the
foreground lenses has been a difficult computational challenge. As the quality
of gravitational lens images increases, the task of fully exploiting the
information they contain becomes computationally and algorithmically more
difficult. In this work, we use a neural network based on the Recurrent
Inference Machine (RIM) to simultaneously reconstruct an undistorted image of
the background source and the lens mass density distribution as pixelated maps.
The method iteratively reconstructs the model parameters (the image of the
source and a pixelated density map) by learning the process of optimizing the
likelihood given the data using the physical model (a ray-tracing simulation),
regularized by a prior implicitly learned by the neural network through its
training data. When compared to more traditional parametric models, the
proposed method is significantly more expressive and can reconstruct complex
mass distributions, which we demonstrate by using realistic lensing galaxies
taken from the IllustrisTNG cosmological hydrodynamic simulation.
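The iterative scheme described in the abstract (repeatedly updating the model parameters using the likelihood gradient computed through a physical forward model) can be sketched in miniature. The toy loop below is not the paper's implementation: it substitutes a plain gradient step for the learned RIM cell and a random linear operator for the ray-tracing simulation, and all names and values are illustrative.

```python
import numpy as np

# Toy stand-in for RIM-style iterative inference on a linear inverse
# problem y = A x. In the actual method, the update rule is a trained
# recurrent network and A is a nonlinear ray-tracing simulation.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))        # forward operator (stand-in for ray tracing)
x_true = rng.normal(size=10)         # "true" parameters (source + density map)
y = A @ x_true                       # noiseless observation, for simplicity

L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
x = np.zeros(10)                     # initial estimate
for t in range(2000):                # T inference iterations
    grad = A.T @ (A @ x - y)         # likelihood gradient, the input a RIM cell would see
    x = x - grad / L                 # learned update replaced by a gradient step
```

In the full method, the gradient step is replaced by a recurrent network whose hidden state carries information across iterations, which is where the implicitly learned prior enters.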
Related papers
- Deep Learning based Optical Image Super-Resolution via Generative Diffusion Models for Layerwise in-situ LPBF Monitoring [4.667646675144656]
We implement generative deep learning models to link low-cost, low-resolution images of the build plate to detailed high-resolution optical images of the build plate.
A conditional latent probabilistic diffusion model is trained to produce realistic high-resolution images of the build plate from low-resolution webcam images.
We also design a framework to recreate the 3D morphology of the printed part and analyze the surface roughness of the reconstructed samples.
arXiv Detail & Related papers (2024-09-20T02:59:25Z)
- Space-Variant Total Variation boosted by learning techniques in few-view tomographic imaging [0.0]
This paper focuses on the development of a space-variant regularization model for solving an under-determined linear inverse problem.
The primary objective of the proposed model is to achieve a good balance between denoising and the preservation of fine details and edges.
A convolutional neural network is designed to approximate both the ground truth image and its gradient, using an elastic loss function in its training.
arXiv Detail & Related papers (2024-04-25T08:58:41Z)
- SphereDiffusion: Spherical Geometry-Aware Distortion Resilient Diffusion Model [63.685132323224124]
Controllable spherical panoramic image generation holds substantial application potential across a variety of domains.
In this paper, we introduce SphereDiffusion, a novel framework that addresses the unique challenges of this setting.
Experiments on the Structured3D dataset show that SphereDiffusion significantly improves the quality of controllable spherical image generation, reducing FID by around 35% on average relative to the baseline.
arXiv Detail & Related papers (2024-03-15T06:26:46Z)
- Latent Diffusion Prior Enhanced Deep Unfolding for Snapshot Spectral Compressive Imaging [17.511583657111792]
Snapshot spectral imaging reconstruction aims to reconstruct three-dimensional spatial-spectral images from a single-shot two-dimensional compressed measurement.
We introduce a generative model, namely the latent diffusion model (LDM), to generate a degradation-free prior for the deep unfolding method.
arXiv Detail & Related papers (2023-11-24T04:55:20Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Physics-Driven Learning of Wasserstein GAN for Density Reconstruction in Dynamic Tomography [4.970364068620608]
In this work, we demonstrate the ability of learned deep neural networks to perform artifact removal in noisy density reconstructions.
We use a Wasserstein generative adversarial network (WGAN), where the generator serves as a denoiser that removes artifacts in densities obtained from traditional reconstruction algorithms.
Preliminary numerical results show that the models trained in our frameworks can remove significant portions of unknown noise in density time-series data.
arXiv Detail & Related papers (2021-10-28T20:23:06Z)
- Deep Unrolled Recovery in Sparse Biological Imaging [62.997667081978825]
Deep algorithm unrolling is a model-based approach to develop deep architectures that combine the interpretability of iterative algorithms with the performance gains of supervised deep learning.
This framework is well-suited to applications in biological imaging, where physics-based models exist to describe the measurement process and the information to be recovered is often highly structured.
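As a minimal illustration of algorithm unrolling, the sketch below writes a fixed number of ISTA (soft-thresholding) iterations as explicit "layers" for a sparse recovery problem. In a learned unrolled network, the per-layer step sizes and thresholds would be trainable parameters; here they are fixed constants, and the operator and data are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1, the nonlinearity of each unrolled layer."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 15))        # measurement operator (physics-based model)
x_true = np.zeros(15)
x_true[2], x_true[7] = 1.5, -2.0     # sparse, highly structured signal
y = A @ x_true                       # noiseless measurements

L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data-fit gradient
lam = 1e-3                           # sparsity weight (trainable in a learned version)
x = np.zeros(15)
for layer in range(2000):            # each iteration corresponds to one network layer
    x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
```

Truncating the loop to a small fixed depth and training the constants end-to-end is what turns this iterative algorithm into an interpretable deep architecture.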
arXiv Detail & Related papers (2021-09-28T20:22:44Z)
- Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z)
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.