Learning the Night Sky with Deep Generative Priors
- URL: http://arxiv.org/abs/2302.02030v1
- Date: Fri, 3 Feb 2023 23:28:23 GMT
- Title: Learning the Night Sky with Deep Generative Priors
- Authors: Fausto Navarro, Daniel Hall, Tamas Budavari, Yashil Sukurdeep
- Abstract summary: We develop an unsupervised multi-frame method for denoising, deblurring, and coadding images inspired by deep generative priors.
We analyze 4K by 4K Hyper Suprime-Cam exposures and obtain preliminary results which yield promising restored images and extracted source lists.
- Score: 1.1470070927586016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recovering sharper images from blurred observations, referred to as
deconvolution, is an ill-posed problem where classical approaches often produce
unsatisfactory results. In ground-based astronomy, combining multiple exposures
to achieve images with higher signal-to-noise ratios is complicated by the
variation of point-spread functions across exposures due to atmospheric
effects. We develop an unsupervised multi-frame method for denoising,
deblurring, and coadding images inspired by deep generative priors. We use a
carefully chosen convolutional neural network architecture that combines
information from multiple observations, regularizes the joint likelihood over
these observations, and allows us to impose desired constraints, such as
non-negativity of pixel values in the sharp, restored image. With an eye
towards the Rubin Observatory, we analyze 4K by 4K Hyper Suprime-Cam exposures
and obtain preliminary results which yield promising restored images and
extracted source lists.
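The joint multi-frame objective can be illustrated with a toy sketch. The snippet below is a hypothetical 1-D stand-in, not the paper's CNN: it minimizes the summed squared residuals of several PSF-blurred observations of one latent signal, and imposes the non-negativity constraint by projection after each gradient step:

```python
import numpy as np

def restore_multiframe(observations, psfs, n_iters=500, lr=0.1):
    """Toy 1-D multi-frame deconvolution: minimize the joint data misfit
    sum_t ||h_t * x - y_t||^2 over a single latent signal x by projected
    gradient descent, enforcing x >= 0 (non-negative flux) at every step."""
    n = len(observations[0])
    x = np.full(n, float(np.mean(observations)))  # flat, non-negative start
    for _ in range(n_iters):
        grad = np.zeros(n)
        for y, h in zip(observations, psfs):
            resid = np.convolve(x, h, mode="same") - y        # forward blur model
            grad += np.convolve(resid, h[::-1], mode="same")  # adjoint blur
        x = np.maximum(x - lr * grad / len(observations), 0.0)  # project: x >= 0
    return x
```

Each exposure contributes its own PSF h_t, so frames taken under different seeing conditions constrain one another; the deep generative prior of the paper replaces the simple projection used here.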
Related papers
- Efficient and Robust Remote Sensing Image Denoising Using Randomized Approximation of Geodesics' Gramian on the Manifold Underlying the Patch Space [2.56711111236449]
We present a robust remote sensing image denoising method that doesn't require additional training samples.
The method places a unique emphasis on each color channel during denoising, and the three denoised channels are merged to produce the final image.
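The channel-wise structure is easy to sketch. Below, a hypothetical numpy stand-in denoises each color channel independently and merges the three results; a plain box filter substitutes for the method's geodesic-Gramian denoiser:

```python
import numpy as np

def denoise_per_channel(img, k=3):
    """Channel-wise denoising sketch: filter each color channel on its own,
    then merge the three results into the final RGB image. A k x k box
    average stands in for the per-channel patch-manifold denoiser."""
    pad = k // 2
    h, w, _ = img.shape
    out = np.empty(img.shape, dtype=float)
    for c in range(img.shape[2]):                      # treat R, G, B independently
        ch = np.pad(img[:, :, c].astype(float), pad, mode="edge")
        acc = np.zeros((h, w))
        for di in range(k):
            for dj in range(k):
                acc += ch[di:di + h, dj:dj + w]        # shifted sums = box filter
        out[:, :, c] = acc / (k * k)
    return out
```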
arXiv Detail & Related papers (2025-04-15T02:46:05Z)
- AstroClearNet: Deep image prior for multi-frame astronomical image restoration [1.2289361708127877]
Ground-based astronomy combines multiple exposures to enhance signal-to-noise ratios.
We present a self-supervised multi-frame method, based on deep image priors, for denoising, deblurring, and coadding ground-based exposures.
We demonstrate the method's potential by processing Hyper Suprime-Cam exposures, yielding promising preliminary results with sharper restored images.
arXiv Detail & Related papers (2025-04-08T22:07:00Z)
- Modeling Dual-Exposure Quad-Bayer Patterns for Joint Denoising and Deblurring [22.82877719326985]
Single-image solutions face an inherent tradeoff between noise reduction and motion blur.
We propose a physical-model-based image restoration approach leveraging a novel dual-exposure Quad-Bayer pattern sensor.
We design a hierarchical convolutional neural network called QRNet to recover high-quality RGB images.
arXiv Detail & Related papers (2024-12-10T07:35:26Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
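Exposure-bracketing fusion can be sketched in a few lines. The snippet below is a hypothetical, simplified merge, not TMRNet: each bracketed frame is divided by its exposure time to estimate scene radiance, and the frames are averaged with weights favoring well-exposed mid-tone pixels:

```python
import numpy as np

def merge_brackets(frames, exposure_times):
    """Exposure-bracketing fusion sketch: convert each frame to a scene
    radiance estimate by dividing out its exposure time, then average with
    weights that trust mid-range (unsaturated, non-dark) pixel values."""
    num = np.zeros_like(np.asarray(frames[0], dtype=float))
    den = np.zeros_like(num)
    for f, t in zip(frames, exposure_times):
        f = np.asarray(f, dtype=float)
        w = np.exp(-((f - 0.5) / 0.25) ** 2)  # favor mid-tones, not clipped pixels
        num += w * f / t                      # this frame's radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```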
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- RANRAC: Robust Neural Scene Representations via Random Ray Consensus [12.161889666145127]
RANdom RAy Consensus (RANRAC) is an efficient approach to eliminate the effect of inconsistent data.
We formulate a fuzzy adaptation of the RANSAC paradigm, enabling its application to large-scale models.
Results indicate significant improvements compared to state-of-the-art robust methods for novel-view synthesis.
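The fuzzy-consensus idea can be illustrated on a toy problem. The sketch below is hypothetical (RANRAC itself operates on neural scene representations, not lines): it replaces RANSAC's hard inlier threshold with a soft weight exp(-r^2/sigma^2) when scoring randomly sampled hypotheses:

```python
import numpy as np

def fuzzy_ransac_line(xs, ys, n_hyp=200, sigma=0.5, seed=0):
    """RANSAC-style line fitting with a fuzzy consensus score: each point
    contributes a soft weight exp(-r^2 / sigma^2) instead of passing or
    failing a hard threshold; the hypothesis with the largest total wins."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_hyp):
        i, j = rng.choice(len(xs), size=2, replace=False)
        if xs[i] == xs[j]:
            continue                                 # degenerate minimal sample
        a = (ys[j] - ys[i]) / (xs[j] - xs[i])        # slope from the sample
        b = ys[i] - a * xs[i]
        r = ys - (a * xs + b)                        # residuals of all points
        score = np.exp(-((r / sigma) ** 2)).sum()    # fuzzy consensus
        if score > best_score:
            best, best_score = (a, b), score
    return best
```

Because outliers still contribute (vanishingly small) weight rather than being discarded, the score varies smoothly with the hypothesis, which is what makes the adaptation usable with large models.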
arXiv Detail & Related papers (2023-12-15T13:33:09Z)
- Seeing Behind Dynamic Occlusions with Event Cameras [44.63007080623054]
We propose a novel approach to reconstruct the background from a single viewpoint.
Our solution is the first to rely on the combination of a traditional camera with an event camera.
We show that our method outperforms image inpainting methods by 3 dB in PSNR on our dataset.
arXiv Detail & Related papers (2023-07-28T22:20:52Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework works with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous works mainly focus on low-light images captured in the visible spectrum using pixel-wise losses.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- Handheld Burst Super-Resolution Meets Multi-Exposure Satellite Imagery [7.9716992946722804]
We adapt a state-of-the-art kernel regression technique for smartphone camera burst super-resolution to satellites.
We leverage the local structure of the image to optimally steer the fusion kernels, limiting blur in the final high-resolution prediction.
We extend this approach to predict from a sequence of multi-exposure low-resolution frames a high-resolution and noise-free one.
arXiv Detail & Related papers (2023-03-10T12:13:31Z)
- Robustifying the Multi-Scale Representation of Neural Radiance Fields [86.69338893753886]
We present a robust multi-scale neural radiance fields representation approach to overcome real-world imaging issues.
Our method handles multi-scale imaging effects and camera-pose estimation problems with NeRF-inspired approaches.
We demonstrate, with examples, that for an accurate neural representation of an object from day-to-day acquired multi-view images, it is crucial to have precise camera-pose estimates.
arXiv Detail & Related papers (2022-10-09T11:46:45Z)
- Riesz-Quincunx-UNet Variational Auto-Encoder for Satellite Image Denoising [0.0]
We introduce a hybrid RQUNet-VAE scheme for image and time series decomposition used to reduce noise in satellite imagery.
We also apply our scheme to several applications for multi-band satellite images, including image denoising, image and time-series decomposition by diffusion, and image segmentation.
arXiv Detail & Related papers (2022-08-25T19:51:07Z)
- Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate to model this one-to-many relationship via a proposed normalizing flow model.
An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and artifact, and richer colors.
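The change-of-variables training objective behind such flows reduces to a short formula. Below is a hypothetical scalar sketch: an affine map z = s*y + t stands in for the invertible network (whose s and t would be predicted from the low-light condition), and the negative log-likelihood follows from pushing y to a standard Gaussian:

```python
import numpy as np

def affine_flow_nll(y, s, t):
    """Negative log-likelihood of data y under a 1-D affine flow z = s*y + t
    mapping y to a standard Gaussian. By the change of variables,
    -log p(y) = 0.5 * z**2 + 0.5 * log(2*pi) - log|s| (the log-det term)."""
    z = s * y + t
    return 0.5 * z**2 + 0.5 * np.log(2 * np.pi) - np.log(np.abs(s))
```

Minimizing the mean NLL over well-exposed images drives the flow to standardize their distribution; sampling different Gaussian codes through the inverse map then yields the one-to-many set of plausible enhancements.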
arXiv Detail & Related papers (2021-09-13T12:45:08Z)
- Reconstructing the Noise Manifold for Image Denoising [56.562855317536396]
We introduce the idea of a cGAN which explicitly leverages structure in the image noise space.
By directly learning a low-dimensional manifold of the image noise, the generator removes from the noisy image only the information that spans this manifold.
Based on our experiments, our model substantially outperforms existing state-of-the-art architectures.
arXiv Detail & Related papers (2020-02-11T00:31:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.