PolarRec: Radio Interferometric Data Reconstruction with Polar
Coordinate Representation
- URL: http://arxiv.org/abs/2308.14610v2
- Date: Mon, 27 Nov 2023 12:29:17 GMT
- Title: PolarRec: Radio Interferometric Data Reconstruction with Polar
Coordinate Representation
- Authors: Ruoqi Wang, Zhuoyang Chen, Jiayi Zhu, Qiong Luo, Feng Wang
- Abstract summary: In radio astronomy, visibility data, which are measurements of wave signals from radio telescopes, are transformed into images for observation of distant celestial objects.
Existing reconstruction methods often miss some visibility components in the frequency domain, so blurred object edges and persistent artifacts remain in the images.
We propose PolarRec, a transformer-encoder-conditioned reconstruction pipeline with visibility samples converted into the polar coordinate representation.
- Score: 4.941073370898513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In radio astronomy, visibility data, which are measurements of wave signals
from radio telescopes, are transformed into images for observation of distant
celestial objects. However, these resultant images usually contain both real
sources and artifacts, due to signal sparsity and other factors. One way to
obtain cleaner images is to reconstruct samples into dense forms before
imaging. Unfortunately, existing reconstruction methods often miss some visibility components in the frequency domain, so blurred object edges and persistent artifacts remain in the images. Furthermore, the computational overhead on irregular visibility samples is high due to data skew. To
address these problems, we propose PolarRec, a transformer-encoder-conditioned
reconstruction pipeline with visibility samples converted into the polar
coordinate representation. This representation matches the way in which radio
telescopes observe a celestial area as the Earth rotates. As a result,
visibility samples are distributed more uniformly in the polar coordinate system than in Cartesian space. Therefore, we propose to use the radial distance in the loss function to help reconstruct the complete visibility effectively. We also group visibility samples by their polar angles and propose a group-based encoding scheme to improve efficiency. Our experiments demonstrate that PolarRec
markedly improves imaging results by faithfully reconstructing all frequency
components in the visibility domain while significantly reducing the
computation cost of visibility data encoding. We believe that PolarRec's high-quality, high-efficiency imaging will better support astronomers in conducting their research.
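
To make the core ideas concrete, below is a minimal, hypothetical sketch in PyTorch of the polar conversion of (u, v) visibility coordinates, the grouping of samples by polar angle, and a radial-distance-weighted reconstruction loss. It is not the authors' implementation; the tensor shapes, the number of angular groups, and the exact weighting scheme are assumptions made purely for illustration.

    # Illustrative sketch only (not the PolarRec code): polar conversion,
    # angle-based grouping, and a radial-distance-weighted loss.
    import torch

    def to_polar(uv: torch.Tensor) -> torch.Tensor:
        """Convert Cartesian (u, v) sample coordinates to polar (r, theta)."""
        r = torch.linalg.norm(uv, dim=-1)            # radial distance
        theta = torch.atan2(uv[..., 1], uv[..., 0])  # polar angle in [-pi, pi]
        return torch.stack([r, theta], dim=-1)

    def group_by_angle(polar: torch.Tensor, num_groups: int = 32) -> torch.Tensor:
        """Assign each sample to one of num_groups angular bins, so that a
        group-based encoder can process samples bin by bin (num_groups is an
        assumed value, not taken from the paper)."""
        theta = polar[..., 1]
        bins = ((theta + torch.pi) / (2 * torch.pi) * num_groups).long()
        return bins.clamp_(0, num_groups - 1)

    def radial_weighted_loss(pred_vis: torch.Tensor,
                             true_vis: torch.Tensor,
                             radius: torch.Tensor) -> torch.Tensor:
        """L1 reconstruction loss weighted by radial distance, so that
        large-radius (high-frequency) visibility components are not
        under-penalized; the 1 + r / r_max weighting is an assumption."""
        weights = 1.0 + radius / radius.max()
        return (weights * (pred_vis - true_vis).abs()).mean()

    # Example usage with random stand-in data.
    uv = torch.randn(1024, 2)        # sampled (u, v) points
    vis_true = torch.randn(1024, 2)  # complex visibilities as (re, im)
    vis_pred = vis_true + 0.1 * torch.randn_like(vis_true)

    polar = to_polar(uv)
    groups = group_by_angle(polar)
    loss = radial_weighted_loss(vis_pred, vis_true, polar[..., 0:1])
    print(groups.shape, loss.item())

Weighting the error by radial distance reflects the abstract's point that some frequency components of the visibility are easily missed; the specific weighting form above is an illustrative choice rather than the paper's formulation.
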
Related papers
- RadioFormer: A Multiple-Granularity Radio Map Estimation Transformer with 1‱ Spatial Sampling [60.267226205350596]
Radio map estimation aims to generate a dense representation of electromagnetic spectrum quantities.
We propose RadioFormer, a novel multiple-granularity transformer designed to handle the constraints posed by spatially sparse observations.
We show that RadioFormer outperforms state-of-the-art methods in radio map estimation while maintaining the lowest computational cost.
arXiv Detail & Related papers (2025-04-27T08:44:41Z) - AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis [57.249817395828174]
We propose a scalable framework combining pseudo-synthetic renderings from 3D city-wide meshes with real, ground-level crowd-sourced images.
The pseudo-synthetic data simulates a wide range of aerial viewpoints, while the real, crowd-sourced images help improve visual fidelity for ground-level images.
Using this hybrid dataset, we fine-tune several state-of-the-art algorithms and achieve significant improvements on real-world, zero-shot aerial-ground tasks.
arXiv Detail & Related papers (2025-04-17T17:57:05Z) - NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z) - Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry
from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields from sparse LDR panoramic images to increase the observation count for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z) - Perception of Misalignment States for Sky Survey Telescopes with the
Digital Twin and the Deep Neural Networks [16.245776159991294]
We propose a deep neural network to extract misalignment states from continuously varying point spread functions across different fields of view.
We store misalignment data and explore complex relationships between misalignment states and corresponding point spread functions.
The method could be used to provide prior information for the active optics system and the optical system alignment.
arXiv Detail & Related papers (2023-11-30T03:16:27Z) - Deep learning-based deconvolution for interferometric radio transient
reconstruction [0.39259415717754914]
Radio astronomy facilities like LOFAR, MeerKAT/SKA, ASKAP/SKA, and the future SKA-LOW bring tremendous sensitivity in time and frequency.
These facilities enable advanced studies of radio transients, which are volatile by nature and can be detected or missed in the data.
These transients are markers of high-energy accelerations of electrons and manifest in a wide range of temporal scales.
arXiv Detail & Related papers (2023-06-24T08:58:52Z) - A Conditional Denoising Diffusion Probabilistic Model for Radio
Interferometric Image Reconstruction [4.715025376297672]
We present VIC-DDPM, a Visibility and Image Conditioned Denoising Diffusion Probabilistic Model.
Our main idea is to use both the original visibility data in the spectral domain and dirty images in the spatial domain to guide the image generation process with DDPM.
Our results show that our method significantly improves the resulting images by reducing artifacts, preserving fine details, and recovering dim sources.
arXiv Detail & Related papers (2023-05-16T03:00:04Z) - Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods struggle in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z) - Factor Fields: A Unified Framework for Neural Fields and Beyond [50.29013417187368]
We present Factor Fields, a novel framework for modeling and representing signals.
Our framework accommodates several recent signal representations including NeRF, Plenoxels, EG3D, Instant-NGP, and TensoRF.
Our representation achieves better image approximation quality on 2D image regression tasks, higher geometric quality when reconstructing 3D signed distance fields, and higher compactness for radiance field reconstruction tasks.
arXiv Detail & Related papers (2023-02-02T17:06:50Z) - Polarimetric Inverse Rendering for Transparent Shapes Reconstruction [1.807492010338763]
We propose a novel method for the detailed reconstruction of transparent objects by exploiting polarimetric cues.
We implicitly represent the object's geometry as a neural network, while the polarization renderer renders the object's polarization images.
We build a polarization dataset for multi-view transparent shapes reconstruction to verify our method.
arXiv Detail & Related papers (2022-08-25T02:52:31Z) - Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that noise in the Radar measurements is one of the main reasons preventing the application of existing fusion methods.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z) - Hyperspectral and multispectral image fusion under spectrally varying
spatial blurs -- Application to high dimensional infrared astronomical
imaging [11.243400478302767]
We propose a data fusion method which combines the benefits of each image to recover a high-spectral resolution data variant.
We conduct experiments on a realistic synthetic dataset of simulated observation of the upcoming James Webb Space Telescope.
arXiv Detail & Related papers (2019-12-26T13:58:40Z)