Noise-based Enhancement for Foveated Rendering
- URL: http://arxiv.org/abs/2204.04455v1
- Date: Sat, 9 Apr 2022 12:00:28 GMT
- Title: Noise-based Enhancement for Foveated Rendering
- Authors: Taimoor Tariq, Cara Tursun and Piotr Didyk
- Abstract summary: Foveated rendering techniques exploit the decline of human visual sensitivity towards the periphery and reduce the spatial resolution of synthesized images there.
We demonstrate that this specific range of frequencies can be efficiently replaced with procedural noise.
Our main contribution is a perceptually-inspired technique for deriving the parameters of the noise required for the enhancement and its calibration.
- Score: 10.124827218817439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human visual sensitivity to spatial details declines towards the periphery.
Novel image synthesis techniques, so-called foveated rendering, exploit this
observation and reduce the spatial resolution of synthesized images for the
periphery, avoiding the synthesis of high-spatial-frequency details that are
costly to generate but not perceived by a viewer. However, contemporary
techniques do not make a clear distinction between the range of spatial
frequencies that must be reproduced and those that can be omitted. For a given
eccentricity, there is a range of frequencies that are detectable but not
resolvable. While the accurate reproduction of these frequencies is not
required, an observer can detect their absence if completely omitted. We use
this observation to improve the performance of existing foveated rendering
techniques. We demonstrate that this specific range of frequencies can be
efficiently replaced with procedural noise whose parameters are carefully tuned
to image content and human perception. Consequently, these frequencies do not
have to be synthesized during rendering, allowing more aggressive foveation,
and they can be replaced by noise generated in a less expensive post-processing
step, leading to improved performance of the rendering system. Our main
contribution is a perceptually-inspired technique for deriving the parameters
of the noise required for the enhancement and its calibration. The method
operates on rendering output and runs at rates exceeding 200 FPS at 4K
resolution, making it suitable for integration with real-time foveated
rendering systems for VR and AR devices. We validate our results and compare
them to the existing contrast enhancement technique in user experiments.
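As a rough illustration of the idea in the abstract, the sketch below adds eccentricity-weighted, high-pass procedural noise to a foveated image in a post-processing step. The linear eccentricity ramp, the fixed `noise_gain`, and the box-filter high-pass are illustrative placeholders, not the paper's perceptually calibrated parameters.

```python
import numpy as np

def enhance_with_noise(foveated, gaze, fovea_radius=200.0, noise_gain=0.15, seed=0):
    """Add eccentricity-weighted procedural noise to a grayscale foveated image.

    `foveated` is a float image in [0, 1]; `gaze` is the (x, y) gaze point in
    pixels. Noise amplitude grows linearly with distance past `fovea_radius`,
    standing in for the detectable-but-not-resolvable frequencies omitted by
    foveation. All parameter choices here are illustrative placeholders.
    """
    h, w = foveated.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - gaze[0], ys - gaze[1])                 # pixel eccentricity
    weight = np.clip((ecc - fovea_radius) / fovea_radius, 0.0, 1.0)

    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((h, w))                        # white noise field
    # Cheap high-pass: subtract a local mean computed with a separable box filter,
    # keeping only high spatial frequencies in the injected noise.
    k = np.ones(5) / 5.0
    low = np.apply_along_axis(np.convolve, 1, noise, k, mode="same")
    low = np.apply_along_axis(np.convolve, 0, low, k, mode="same")
    hp = noise - low

    return np.clip(foveated + noise_gain * weight * hp, 0.0, 1.0)
```

Because the weight is zero inside the fovea, the region around the gaze point is left untouched and only the periphery receives noise.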
Related papers
- HFGS: 4D Gaussian Splatting with Emphasis on Spatial and Temporal High-Frequency Components for Endoscopic Scene Reconstruction [13.012536387221669]
Robot-assisted minimally invasive surgery benefits from enhancing dynamic scene reconstruction, as it improves surgical outcomes.
Neural Radiance Fields (NeRFs) have been effective in scene reconstruction, but their slow inference speeds and lengthy training times limit their applicability.
3D Gaussian Splatting (3D-GS) based methods have emerged as a recent trend, offering rapid inference capabilities and superior 3D quality.
We propose HFGS, a novel approach for deformable endoscopic reconstruction that addresses these challenges from spatial and temporal frequency perspectives.
arXiv Detail & Related papers (2024-05-28T06:48:02Z)
- WaveDH: Wavelet Sub-bands Guided ConvNet for Efficient Image Dehazing [20.094839751816806]
We introduce WaveDH, a novel and compact ConvNet designed to address this efficiency gap in image dehazing.
Our WaveDH leverages wavelet sub-bands for guided up-and-downsampling and frequency-aware feature refinement.
Our method, WaveDH, outperforms many state-of-the-art methods on several image dehazing benchmarks with significantly reduced computational costs.
arXiv Detail & Related papers (2024-04-02T02:52:05Z)
- Reconstruct-and-Generate Diffusion Model for Detail-Preserving Image Denoising [16.43285056788183]
We propose a novel approach called the Reconstruct-and-Generate Diffusion Model (RnG).
Our method leverages a reconstructive denoising network to recover the majority of the underlying clean signal.
It employs a diffusion algorithm to generate residual high-frequency details, thereby enhancing visual quality.
arXiv Detail & Related papers (2023-09-19T16:01:20Z)
- Spectral Enhanced Rectangle Transformer for Hyperspectral Image Denoising [64.11157141177208]
We propose a spectral enhanced rectangle Transformer to model the spatial and spectral correlation in hyperspectral images.
For the former, we apply rectangle self-attention horizontally and vertically to capture non-local similarity in the spatial domain.
For the latter, we design a spectral enhancement module that extracts the global low-rank property of spatial-spectral cubes to suppress noise.
arXiv Detail & Related papers (2023-04-03T09:42:13Z)
- Harnessing Low-Frequency Neural Fields for Few-Shot View Synthesis [82.31272171857623]
We harness low-frequency neural fields to regularize high-frequency neural fields and prevent overfitting.
We propose a simple-yet-effective strategy for tuning the frequency to avoid overfitting few-shot inputs.
arXiv Detail & Related papers (2023-03-15T05:15:21Z)
- Representing Noisy Image Without Denoising [91.73819173191076]
Fractional-order Moments in Radon space (FMR) is designed to derive robust representation directly from noisy images.
Unlike earlier integer-order methods, our work is a more generic design that includes such classical methods as special cases.
arXiv Detail & Related papers (2023-01-18T10:13:29Z)
- SAR Despeckling using a Denoising Diffusion Probabilistic Model [52.25981472415249]
The presence of speckle degrades the image quality and adversely affects the performance of SAR image understanding applications.
We introduce SAR-DDPM, a denoising diffusion probabilistic model for SAR despeckling.
The proposed method achieves significant improvements in both quantitative and qualitative results over the state-of-the-art despeckling methods.
arXiv Detail & Related papers (2022-06-09T14:00:26Z)
- Exploring Inter-frequency Guidance of Image for Lightweight Gaussian Denoising [1.52292571922932]
We propose a novel network architecture, IGNet, that refines frequency bands from low to high in a progressive manner.
With this design, more inter-frequency priors and information are utilized, so the model size can be reduced while still preserving competitive results.
arXiv Detail & Related papers (2021-12-22T10:35:53Z)
- Designing a Practical Degradation Model for Deep Blind Image Super-Resolution [134.9023380383406]
Single image super-resolution (SISR) methods do not perform well if the assumed degradation model deviates from the one in real images.
This paper proposes to design a more complex but practical degradation model that consists of randomly shuffled blur, downsampling and noise degradations.
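The shuffled degradation idea can be sketched as follows: a minimal NumPy pipeline that applies blur, downsampling, and additive noise in a random order. The 3-tap box blur, nearest-neighbour downsampling, and fixed noise level are simplified placeholders, not the paper's actual degradation settings.

```python
import numpy as np

def degrade(img, scale=2, seed=0):
    """Apply blur, downsampling and noise to a grayscale float image in [0, 1],
    in a randomly shuffled order, in the spirit of the practical degradation
    model described above. All specific kernels and levels are placeholders."""
    rng = np.random.default_rng(seed)

    def blur(x):
        # Separable 3-tap box blur.
        k = np.ones(3) / 3.0
        x = np.apply_along_axis(np.convolve, 1, x, k, mode="same")
        return np.apply_along_axis(np.convolve, 0, x, k, mode="same")

    def down(x):
        # Nearest-neighbour downsampling by `scale`.
        return x[::scale, ::scale]

    def noise(x):
        # Additive Gaussian noise.
        return x + rng.normal(0.0, 0.02, x.shape)

    ops = [blur, down, noise]
    rng.shuffle(ops)  # random order of degradations
    for op in ops:
        img = op(img)
    return np.clip(img, 0.0, 1.0)
```

Whatever order is drawn, the single downsampling step fixes the output resolution, while the order changes how blur and noise interact with it.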
arXiv Detail & Related papers (2021-03-25T17:40:53Z)
- Contextual colorization and denoising for low-light ultra high resolution sequences [0.0]
Low-light image sequences generally suffer from incoherent noise, flicker, and blurring of both static and moving objects.
We tackle these problems with an unpaired-learning method that offers simultaneous colorization and denoising.
We show that our method outperforms existing approaches in terms of subjective quality and that it is robust to variations in brightness levels and noise.
arXiv Detail & Related papers (2021-01-05T15:35:29Z)
- Focal Frequency Loss for Image Reconstruction and Synthesis [125.7135706352493]
We show that narrowing gaps in the frequency domain can ameliorate image reconstruction and synthesis quality further.
We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize.
arXiv Detail & Related papers (2020-12-23T17:32:04Z)
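The weighting recipe described in the focal frequency loss summary can be sketched in NumPy: the per-frequency error itself, raised to an exponent `alpha`, is used as a weight so that frequency components that are currently hard to synthesize dominate the loss. Details such as patch-wise computation and weight handling vary in the actual method; this is only an illustrative sketch.

```python
import numpy as np

def focal_frequency_loss(fake, real, alpha=1.0):
    """Sketch of a focal frequency loss between two grayscale float images.

    Computes the squared distance between the 2D spectra of `fake` and `real`,
    then reweights each frequency by its own error magnitude (exponent `alpha`),
    focusing the loss on hard-to-synthesize components."""
    F_fake = np.fft.fft2(fake)
    F_real = np.fft.fft2(real)
    dist = np.abs(F_fake - F_real) ** 2        # squared spectrum distance
    weight = dist ** (alpha / 2.0)             # |F_fake - F_real| ** alpha
    if weight.max() > 0:
        weight = weight / weight.max()         # normalize weights to [0, 1]
    return float(np.mean(weight * dist))
```

For identical inputs the loss is exactly zero, and frequencies with larger spectral error contribute disproportionately as `alpha` grows.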
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.