Multi-Frequency-Aware Patch Adversarial Learning for Neural Point Cloud
Rendering
- URL: http://arxiv.org/abs/2210.03693v1
- Date: Fri, 7 Oct 2022 16:54:15 GMT
- Title: Multi-Frequency-Aware Patch Adversarial Learning for Neural Point Cloud
Rendering
- Authors: Jay Karhade, Haiyue Zhu, Ka-Shing Chung, Rajesh Tripathy, Wei Lin,
Marcelo H. Ang Jr
- Abstract summary: We present a neural point cloud rendering pipeline through a novel multi-frequency-aware patch adversarial learning framework.
The proposed approach aims to improve rendering realism by minimizing the spectrum discrepancy between real and synthesized images.
Our method produces state-of-the-art results for neural point cloud rendering by a significant margin.
- Score: 7.522462414919854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a neural point cloud rendering pipeline through a novel
multi-frequency-aware patch adversarial learning framework. The proposed
approach aims to improve rendering realism by minimizing the spectrum
discrepancy between real and synthesized images, especially in the
high-frequency localized sharpness information whose loss manifests as visual
blur. Specifically, a patch multi-discriminator scheme is proposed for the
adversarial learning, which combines both spectral domain (Fourier Transform
and Discrete Wavelet Transform) discriminators as well as the spatial (RGB)
domain discriminator to force the generator to capture global and local
spectral distributions of the real images. The proposed multi-discriminator
scheme not only helps to improve rendering realism, but also enhances the
convergence speed and stability of adversarial learning. Moreover, we introduce
a noise-resistant voxelisation approach by utilizing both the appearance
distance and spatial distance to exclude the spatial outlier points caused by
depth noise. Our entire architecture is fully differentiable and can be learned
in an end-to-end fashion. Extensive experiments show that our method produces
state-of-the-art results for neural point cloud rendering by a significant
margin. Our source code will be made public at a later date.
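The authors' code is not yet public, so the following is only a minimal numpy sketch of the spectral representations the abstract describes: a patch's global frequency content via the 2-D Fourier transform and its localized frequency content via a one-level Haar wavelet transform, plus an L1 spectrum-discrepancy measure between a real and a synthesized patch. The function names and the simple L1 gap are illustrative assumptions; in the paper these representations feed adversarial discriminators rather than a direct distance.

```python
import numpy as np

def fft_magnitude(patch):
    """Log-magnitude Fourier spectrum of a 2-D patch (global frequency content)."""
    spec = np.fft.fftshift(np.fft.fft2(patch))
    return np.log1p(np.abs(spec))

def haar_dwt(patch):
    """One-level Haar DWT: returns (LL, LH, HL, HH) sub-bands capturing
    localized low- and high-frequency structure. Patch sides must be even."""
    a = patch[0::2, 0::2]  # top-left of each 2x2 block
    b = patch[0::2, 1::2]  # top-right
    c = patch[1::2, 0::2]  # bottom-left
    d = patch[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low/low (approximation)
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def spectrum_discrepancy(real, fake):
    """Mean L1 gap between the FFT and DWT representations of two patches.
    A stand-in for what the spectral discriminators would score adversarially."""
    fft_gap = np.mean(np.abs(fft_magnitude(real) - fft_magnitude(fake)))
    dwt_gap = np.mean([np.mean(np.abs(r - f))
                       for r, f in zip(haar_dwt(real), haar_dwt(fake))])
    return fft_gap, dwt_gap
```

In the full scheme, separate discriminators on the FFT magnitude, the DWT sub-bands, and the raw RGB patches would each judge real vs. synthesized inputs, pushing the generator to match both global and local spectral distributions.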
Related papers
- Robust Network Learning via Inverse Scale Variational Sparsification [55.64935887249435]
We introduce an inverse scale variational sparsification framework within a time-continuous inverse scale space formulation.
Unlike frequency-based methods, our approach removes noise by smoothing small-scale features.
We show the efficacy of our approach through enhanced robustness against various noise types.
arXiv Detail & Related papers (2024-09-27T03:17:35Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- DPFNet: A Dual-branch Dilated Network with Phase-aware Fourier Convolution for Low-light Image Enhancement [1.2645663389012574]
Low-light image enhancement is a classical computer vision problem aiming to recover normal-exposure images from low-light images.
Convolutional neural networks commonly used in this field are good at sampling low-frequency local structural features in the spatial domain.
We propose a novel module using the Fourier coefficients, which can recover high-quality texture details under the constraint of semantics in the frequency phase.
arXiv Detail & Related papers (2022-09-16T13:56:09Z)
- Differentiable Point-Based Radiance Fields for Efficient View Synthesis [57.56579501055479]
We propose a differentiable rendering algorithm for efficient novel view synthesis.
Our method is up to 300x faster than NeRF in both training and inference.
For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at near interactive rate.
arXiv Detail & Related papers (2022-05-28T04:36:13Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.