Revealing the 3D Cosmic Web through Gravitationally Constrained Neural Fields
- URL: http://arxiv.org/abs/2504.15262v1
- Date: Mon, 21 Apr 2025 17:43:21 GMT
- Title: Revealing the 3D Cosmic Web through Gravitationally Constrained Neural Fields
- Authors: Brandon Zhao, Aviad Levis, Liam Connor, Pratul P. Srinivasan, Katherine L. Bouman
- Abstract summary: Weak gravitational lensing is the slight distortion of galaxy shapes caused primarily by the gravitational effects of dark matter in the universe. We seek to invert the weak lensing signal from 2D telescope images to reconstruct a 3D map of the universe's dark matter field. We propose a methodology using a gravitationally-constrained neural field to flexibly model the continuous matter distribution.
- Score: 15.645523903662033
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Weak gravitational lensing is the slight distortion of galaxy shapes caused primarily by the gravitational effects of dark matter in the universe. In our work, we seek to invert the weak lensing signal from 2D telescope images to reconstruct a 3D map of the universe's dark matter field. While inversion typically yields a 2D projection of the dark matter field, accurate 3D maps of the dark matter distribution are essential for localizing structures of interest and testing theories of our universe. However, 3D inversion poses significant challenges. First, unlike standard 3D reconstruction that relies on multiple viewpoints, in this case, images are only observed from a single viewpoint. This challenge can be partially addressed by observing how galaxy emitters throughout the volume are lensed. However, this leads to the second challenge: the shapes and exact locations of unlensed galaxies are unknown, and can only be estimated with a very large degree of uncertainty. This introduces an overwhelming amount of noise which nearly drowns out the lensing signal completely. Previous approaches tackle this by imposing strong assumptions about the structures in the volume. We instead propose a methodology using a gravitationally-constrained neural field to flexibly model the continuous matter distribution. We take an analysis-by-synthesis approach, optimizing the weights of the neural network through a fully differentiable physical forward model to reproduce the lensing signal present in image measurements. We showcase our method on simulations, including realistic simulated measurements of dark matter distributions that mimic data from upcoming telescope surveys. Our results show that our method can not only outperform previous methods, but importantly is also able to recover potentially surprising dark matter structures.
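To make the analysis-by-synthesis idea concrete, below is a minimal JAX sketch: a coordinate-based MLP models the continuous 3D matter field, a toy differentiable forward model integrates it along each line of sight into a 2D lensing convergence, and gradient descent fits the network weights to noisy measurements. The MLP architecture, the simplified lensing-efficiency kernel W(z) = z(z_src - z)/z_src, and all names here are illustrative assumptions, not the authors' actual implementation (which uses a far more detailed cosmological forward model and per-galaxy shear likelihoods).

```python
# Hypothetical sketch of gravitationally-constrained neural field fitting.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(3, 64, 64, 1)):
    """Small coordinate-based MLP: (x, y, z) -> matter overdensity."""
    params = []
    for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (fan_in, fan_out)) / jnp.sqrt(fan_in)
        params.append((w, jnp.zeros(fan_out)))
    return params

def density(params, xyz):
    """Query the neural field at batched 3D coordinates."""
    h = xyz
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b).squeeze(-1)

def forward_convergence(params, pix_xy, z_src=1.0, n_steps=32):
    """Toy forward model: line-of-sight integral of the density, weighted
    by a simplified lensing-efficiency kernel W(z) = z (z_src - z) / z_src."""
    z = jnp.linspace(0.0, z_src, n_steps)
    w_lens = z * (z_src - z) / z_src
    xy = jnp.repeat(pix_xy[:, None, :], n_steps, axis=1)         # (P, S, 2)
    zz = jnp.broadcast_to(z[None, :, None], (*xy.shape[:2], 1))  # (P, S, 1)
    delta = density(params, jnp.concatenate([xy, zz], axis=-1))  # (P, S)
    return jnp.sum(w_lens * delta, axis=1) * (z_src / n_steps)   # (P,)

def loss_fn(params, pix_xy, kappa_obs):
    """Analysis-by-synthesis data fit: reproduce the observed signal."""
    return jnp.mean((forward_convergence(params, pix_xy) - kappa_obs) ** 2)

# Toy usage: fit the field to stand-in noisy convergence measurements.
k0, k1, k2 = jax.random.split(jax.random.PRNGKey(0), 3)
params = init_mlp(k0)
pix_xy = jax.random.uniform(k1, (256, 2))         # image-plane pixel coords
kappa_obs = 0.05 * jax.random.normal(k2, (256,))  # stand-in noisy data

grad_fn = jax.jit(jax.grad(loss_fn))
for step in range(200):  # plain gradient descent for brevity
    grads = grad_fn(params, pix_xy, kappa_obs)
    params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)
```

Because the forward model is fully differentiable, the network weights receive gradients directly from the mismatch between synthesized and observed lensing signals, which is the core mechanism the paper describes.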
Related papers
- A deep-learning algorithm to disentangle self-interacting dark matter and AGN feedback models [0.0]
We present a Machine Learning method that "learns" how the impact of dark matter self-interactions differs from that of astrophysical feedback.
We train a Convolutional Neural Network on images of galaxy clusters from hydrodynamic simulations.
arXiv Detail & Related papers (2024-05-27T18:00:49Z)
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- Single View Refractive Index Tomography with Neural Fields [16.578244661163513]
We introduce a method that leverages prior knowledge of light sources scattered throughout the refractive medium to help disambiguate the single-view refractive index tomography problem.
We demonstrate the efficacy of our approach by reconstructing simulated refractive fields, analyzing the effects of light source distribution on the recovered field, and testing our method on a simulated dark matter mapping problem.
arXiv Detail & Related papers (2023-09-08T17:01:34Z)
- Monocular 3D Object Detection with Depth from Motion [74.29588921594853]
We take advantage of camera ego-motion for accurate object depth estimation and detection.
Our framework, named Depth from Motion (DfM), then uses the established geometry to lift 2D image features to the 3D space and detects 3D objects thereon.
Our framework outperforms state-of-the-art methods by a large margin on the KITTI benchmark.
arXiv Detail & Related papers (2022-07-26T15:48:46Z)
- Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z)
- Strong Lensing Source Reconstruction Using Continuous Neural Fields [3.604982738232833]
We introduce a method that uses continuous neural fields to non-parametrically reconstruct the complex morphology of a source galaxy.
We demonstrate the efficacy of our method through experiments on simulated data targeting high-resolution lensing images.
arXiv Detail & Related papers (2022-06-29T18:00:01Z)
- Shadows Shed Light on 3D Objects [23.14510850163136]
We create a differentiable image formation model that allows us to infer the 3D shape of an object, its pose, and the position of a light source.
Our approach is robust to real-world images where the ground-truth shadow mask is unknown.
arXiv Detail & Related papers (2022-06-17T19:58:11Z)
- 3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective [96.65476492200648]
This research studies a self-supervised 3D clothing reconstruction method.
It recovers the geometric shape and texture of human clothing from a single 2D image.
arXiv Detail & Related papers (2022-04-27T17:46:55Z)
- Gravitationally Lensed Black Hole Emission Tomography [21.663531093434127]
We propose BH-NeRF, a novel tomography approach that leverages gravitational lensing to recover the continuous 3D emission field near a black hole.
Our method captures the unknown emission field using a continuous volumetric function parameterized by a coordinate-based neural network.
This work takes the first steps in showing how future measurements from the Event Horizon Telescope could be used to recover evolving 3D emission around the supermassive black hole in our Galactic center.
arXiv Detail & Related papers (2022-04-07T20:09:51Z)
- Neural Reflectance for Shape Recovery with Shadow Handling [88.67603644930466]
This paper aims at recovering the shape of a scene with unknown, non-Lambertian, and possibly spatially-varying surface materials.
We propose a coordinate-based deep network (a multilayer perceptron) to parameterize both the unknown 3D shape and the unknown reflectance at every surface point.
This network is able to leverage the observed photometric variance and shadows on the surface, and recover both surface shape and general non-Lambertian reflectance.
arXiv Detail & Related papers (2022-03-24T07:57:20Z)
- φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.