Noise-based Enhancement for Foveated Rendering
- URL: http://arxiv.org/abs/2204.04455v1
- Date: Sat, 9 Apr 2022 12:00:28 GMT
- Title: Noise-based Enhancement for Foveated Rendering
- Authors: Taimoor Tariq, Cara Tursun and Piotr Didyk
- Abstract summary: Novel image synthesis techniques, so-called foveated rendering, exploit this observation and reduce the spatial resolution of synthesized images for the periphery.
We demonstrate that this specific range of frequencies can be efficiently replaced with procedural noise.
Our main contribution is a perceptually-inspired technique for deriving the parameters of the noise required for the enhancement and its calibration.
- Score: 10.124827218817439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human visual sensitivity to spatial details declines towards the periphery.
Novel image synthesis techniques, so-called foveated rendering, exploit this
observation and reduce the spatial resolution of synthesized images for the
periphery, avoiding the synthesis of high-spatial-frequency details that are
costly to generate but not perceived by a viewer. However, contemporary
techniques do not make a clear distinction between the range of spatial
frequencies that must be reproduced and those that can be omitted. For a given
eccentricity, there is a range of frequencies that are detectable but not
resolvable. While the accurate reproduction of these frequencies is not
required, an observer can detect their absence if completely omitted. We use
this observation to improve the performance of existing foveated rendering
techniques. We demonstrate that this specific range of frequencies can be
efficiently replaced with procedural noise whose parameters are carefully tuned
to image content and human perception. Consequently, these frequencies do not
have to be synthesized during rendering, allowing more aggressive foveation,
and they can be replaced by noise generated in a less expensive post-processing
step, leading to improved performance of the rendering system. Our main
contribution is a perceptually-inspired technique for deriving the parameters
of the noise required for the enhancement and its calibration. The method
operates on rendering output and runs at rates exceeding 200FPS at 4K
resolution, making it suitable for integration with real-time foveated
rendering systems for VR and AR devices. We validate our results and compare
them to the existing contrast enhancement technique in user experiments.
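The enhancement described in the abstract can be illustrated with a simplified sketch. Everything below is an illustrative assumption, not the authors' calibrated method: the frequency band, eccentricity threshold, gain, and the gradient-based local-contrast proxy are placeholders standing in for the paper's perceptually derived noise parameters. The sketch adds band-limited procedural noise to a (grayscale) foveated image, with amplitude that grows with eccentricity and scales with local image contrast:

```python
import numpy as np

def band_limited_noise(shape, low, high, rng):
    """White noise band-pass filtered to keep radial frequencies in
    [low, high) cycles per image (placeholder band, not the paper's)."""
    noise = rng.standard_normal(shape)
    spectrum = np.fft.fft2(noise)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    radius = np.hypot(fy, fx) * max(shape)  # radial frequency in cycles/image
    mask = (radius >= low) & (radius < high)
    return np.real(np.fft.ifft2(spectrum * mask))

def enhance_periphery(image, gaze, ecc_threshold=0.25, gain=0.1,
                      band=(8, 24), seed=0):
    """Add noise whose amplitude grows with eccentricity and local contrast.

    All parameters are illustrative assumptions, not the paper's calibration.
    `image` is a float array in [0, 1]; `gaze` is a (row, col) fixation point.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized eccentricity: 0 at the gaze point, ~0.7 at the far corner.
    ecc = np.hypot((yy - gaze[0]) / h, (xx - gaze[1]) / w)
    # Fovea (ecc <= threshold) is left untouched; weight ramps up outside it.
    weight = np.clip((ecc - ecc_threshold) / (1.0 - ecc_threshold), 0.0, 1.0)
    # Crude stand-in for content adaptation: gradient-magnitude contrast.
    gy, gx = np.gradient(image)
    contrast = np.hypot(gy, gx)
    contrast = contrast / (contrast.max() + 1e-8)
    noise = band_limited_noise(image.shape, band[0], band[1], rng)
    return np.clip(image + gain * weight * contrast * noise, 0.0, 1.0)
```

In a real pipeline this would run as a post-process on the upsampled foveated render, which is where the paper's performance benefit comes from: the replaced frequencies never have to be synthesized by the renderer.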
Related papers
- Learning Multi-scale Spatial-frequency Features for Image Denoising [58.883244886588336]
We propose a novel multi-scale adaptive dual-domain network (MADNet) for image denoising.
We use image pyramid inputs to restore noise-free results from low-resolution images.
To realize the interaction of high-frequency and low-frequency information, we design an adaptive spatial-frequency learning unit.
arXiv Detail & Related papers (2025-06-19T13:28:09Z)
- Freqformer: Image-Demoiréing Transformer via Efficient Frequency Decomposition [83.40450475728792]
We present Freqformer, a Transformer-based framework specifically designed for image demoiréing through targeted frequency separation.
Our method performs an effective frequency decomposition that explicitly splits moiré patterns into high-frequency spatially-localized textures and low-frequency scale-robust color distortions.
Experiments on various demoiréing benchmarks demonstrate that Freqformer achieves state-of-the-art performance with a compact model size.
arXiv Detail & Related papers (2025-05-25T12:23:10Z)
- Efficient and Robust Remote Sensing Image Denoising Using Randomized Approximation of Geodesics' Gramian on the Manifold Underlying the Patch Space [2.56711111236449]
We present a robust remote sensing image denoising method that does not require additional training samples.
The method places a distinct emphasis on each color channel during denoising, and the three denoised channels are then merged to produce the final image.
arXiv Detail & Related papers (2025-04-15T02:46:05Z)
- Explainable Synthetic Image Detection through Diffusion Timestep Ensembling [30.298198387824275]
Recent advances in diffusion models have enabled the creation of deceptively real images, posing significant security risks when misused.
arXiv Detail & Related papers (2025-03-08T13:04:20Z)
- FreqINR: Frequency Consistency for Implicit Neural Representation with Adaptive DCT Frequency Loss [5.349799154834945]
This paper introduces Frequency Consistency for Implicit Neural Representation (FreqINR), an innovative arbitrary-scale super-resolution method.
During training, we employ Adaptive Discrete Cosine Transform Frequency Loss (ADFL) to minimize the frequency gap between HR and ground-truth images.
During inference, we extend the receptive field to preserve spectral coherence between low-resolution (LR) and ground-truth images.
arXiv Detail & Related papers (2024-08-25T03:53:17Z)
- WaveDH: Wavelet Sub-bands Guided ConvNet for Efficient Image Dehazing [20.094839751816806]
We introduce WaveDH, a novel and compact ConvNet designed to address this efficiency gap in image dehazing.
Our WaveDH leverages wavelet sub-bands for guided up-and-downsampling and frequency-aware feature refinement.
Our method, WaveDH, outperforms many state-of-the-art methods on several image dehazing benchmarks with significantly reduced computational costs.
arXiv Detail & Related papers (2024-04-02T02:52:05Z)
- Denoising Monte Carlo Renders with Diffusion Models [5.228564799458042]
Physically-based renderings contain Monte-Carlo noise, with variance that increases as the number of rays per pixel decreases.
This noise, while zero-mean for good modern renderers, can have heavy tails.
We demonstrate that a diffusion model can denoise low fidelity renders successfully.
arXiv Detail & Related papers (2024-03-30T23:19:40Z)
- Reconstruct-and-Generate Diffusion Model for Detail-Preserving Image Denoising [16.43285056788183]
We propose a novel approach called the Reconstruct-and-Generate Diffusion Model (RnG)
Our method leverages a reconstructive denoising network to recover the majority of the underlying clean signal.
It employs a diffusion algorithm to generate residual high-frequency details, thereby enhancing visual quality.
arXiv Detail & Related papers (2023-09-19T16:01:20Z)
- Spectral Enhanced Rectangle Transformer for Hyperspectral Image Denoising [64.11157141177208]
We propose a spectral enhanced rectangle Transformer to model the spatial and spectral correlation in hyperspectral images.
For the former, we exploit the rectangle self-attention horizontally and vertically to capture the non-local similarity in the spatial domain.
For the latter, we design a spectral enhancement module capable of extracting the global low-rank property of spatial-spectral cubes to suppress noise.
arXiv Detail & Related papers (2023-04-03T09:42:13Z)
- Harnessing Low-Frequency Neural Fields for Few-Shot View Synthesis [82.31272171857623]
We harness low-frequency neural fields to regularize high-frequency neural fields from overfitting.
We propose a simple-yet-effective strategy for tuning the frequency to avoid overfitting few-shot inputs.
arXiv Detail & Related papers (2023-03-15T05:15:21Z)
- Representing Noisy Image Without Denoising [91.73819173191076]
Fractional-order Moments in Radon space (FMR) is designed to derive robust representation directly from noisy images.
Unlike earlier integer-order methods, our work is a more generic design that takes such classical methods as special cases.
arXiv Detail & Related papers (2023-01-18T10:13:29Z)
- SAR Despeckling using a Denoising Diffusion Probabilistic Model [52.25981472415249]
The presence of speckle degrades the image quality and adversely affects the performance of SAR image understanding applications.
We introduce SAR-DDPM, a denoising diffusion probabilistic model for SAR despeckling.
The proposed method achieves significant improvements in both quantitative and qualitative results over the state-of-the-art despeckling methods.
arXiv Detail & Related papers (2022-06-09T14:00:26Z)
- Exploring Inter-frequency Guidance of Image for Lightweight Gaussian Denoising [1.52292571922932]
We propose a novel network architecture, denoted IGNet, that refines the frequency bands from low to high in a progressive manner.
With this design, more inter-frequency priors and information are utilized, so the model size can be reduced while still preserving competitive results.
arXiv Detail & Related papers (2021-12-22T10:35:53Z)
- Designing a Practical Degradation Model for Deep Blind Image Super-Resolution [134.9023380383406]
Single image super-resolution (SISR) methods would not perform well if the assumed degradation model deviates from those in real images.
This paper proposes to design a more complex but practical degradation model that consists of randomly shuffled blur, downsampling and noise degradations.
arXiv Detail & Related papers (2021-03-25T17:40:53Z)
- Focal Frequency Loss for Image Reconstruction and Synthesis [125.7135706352493]
We show that narrowing gaps in the frequency domain can ameliorate image reconstruction and synthesis quality further.
We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize.
arXiv Detail & Related papers (2020-12-23T17:32:04Z)
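The focal frequency loss entry above is concrete enough to sketch. Below is a minimal single-channel NumPy version of the idea, under the assumption that the spectrum weight follows the paper's default form (per-frequency error raised to a power alpha, normalized to [0, 1]); the official PyTorch implementation additionally supports patch splitting and different averaging options, so treat this as an illustration, not the released code:

```python
import numpy as np

def focal_frequency_loss(pred, target, alpha=1.0):
    """Frequency-domain loss that up-weights hard-to-synthesize frequencies.

    Single-channel sketch: compute the 2D spectra of prediction and target,
    measure the per-frequency squared error, and re-weight it so that
    frequencies with large error (the "hard" ones) dominate the loss.
    """
    fp = np.fft.fft2(pred, norm="ortho")
    ft = np.fft.fft2(target, norm="ortho")
    dist = np.abs(fp - ft) ** 2               # per-frequency squared error
    weight = dist ** (alpha / 2.0)            # |F_pred - F_target|^alpha
    weight = weight / (weight.max() + 1e-12)  # normalize weights to [0, 1]
    return float(np.mean(weight * dist))
```

When alpha is 0 the weight is uniform and the loss reduces to a plain spectral mean-squared error; larger alpha focuses the loss more sharply on the worst-reconstructed frequencies.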
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.