Toward Real-World Light Field Super-Resolution
- URL: http://arxiv.org/abs/2305.18994v1
- Date: Tue, 30 May 2023 12:46:50 GMT
- Title: Toward Real-World Light Field Super-Resolution
- Authors: Zeyu Xiao, Ruisheng Gao, Yutong Liu, Yueyi Zhang, Zhiwei Xiong
- Abstract summary: We introduce LytroZoom, the first real-world light field SR dataset capturing paired low- and high-resolution light fields of diverse indoor and outdoor scenes using a Lytro ILLUM camera.
We also propose the Omni-Frequency Projection Network (OFPNet), which decomposes the omni-frequency components and iteratively enhances them through frequency projection operations.
Experiments demonstrate that models trained on LytroZoom outperform those trained on synthetic datasets and are generalizable to diverse content and devices.
- Score: 39.90540075718412
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning has opened up new possibilities for light field
super-resolution (SR), but existing methods trained on synthetic datasets with
simple degradations (e.g., bicubic downsampling) suffer from poor performance
when applied to complex real-world scenarios. To address this problem, we
introduce LytroZoom, the first real-world light field SR dataset capturing
paired low- and high-resolution light fields of diverse indoor and outdoor
scenes using a Lytro ILLUM camera. Additionally, we propose the Omni-Frequency
Projection Network (OFPNet), which decomposes the omni-frequency components and
iteratively enhances them through frequency projection operations to address
spatially variant degradation processes present in all frequency components.
Experiments demonstrate that models trained on LytroZoom outperform those
trained on synthetic datasets and are generalizable to diverse content and
devices. Quantitative and qualitative evaluations verify the superiority of
OFPNet. We believe this work will inspire future research in real-world light
field SR.
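The abstract names OFPNet's two key operations: decomposing omni-frequency components and iteratively enhancing them with frequency projections. The sketch below illustrates that idea only; the fixed Gaussian low/high split, the small convolutional branches, and the iteration count are illustrative assumptions, not OFPNet's published architecture.

```python
# Minimal sketch of frequency decomposition with iterative refinement,
# in the spirit of OFPNet's summary. The Gaussian low/high split and the
# tiny conv branches are assumptions, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.0):
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

class FrequencyProjectionSR(nn.Module):
    def __init__(self, channels=3, feats=32, iters=3, scale=2):
        super().__init__()
        self.iters, self.scale = iters, scale
        self.register_buffer("blur", gaussian_kernel().repeat(channels, 1, 1, 1))
        self.low_branch = nn.Sequential(
            nn.Conv2d(channels, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, channels, 3, padding=1))
        self.high_branch = nn.Sequential(
            nn.Conv2d(channels, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, channels, 3, padding=1))

    def forward(self, x):
        # Upsample once, then iteratively re-project each frequency band.
        y = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        for _ in range(self.iters):
            low = F.conv2d(y, self.blur, padding=2, groups=y.shape[1])
            high = y - low  # residual carries the high frequencies
            y = y + self.low_branch(low) + self.high_branch(high)
        return y

sr = FrequencyProjectionSR()
out = sr(torch.rand(1, 3, 32, 32))  # -> torch.Size([1, 3, 64, 64])
```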
Related papers
- Incorporating Degradation Estimation in Light Field Spatial Super-Resolution [54.603510192725786]
We present LF-DEST, an effective blind Light Field SR method that incorporates explicit Degradation Estimation to handle various degradation types.
We conduct extensive experiments on benchmark datasets, demonstrating that LF-DEST achieves superior performance across a variety of degradation scenarios in light field SR (a generic sketch of this estimate-then-restore pattern follows this entry).
arXiv Detail & Related papers (2024-05-11T13:14:43Z)
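A hedged sketch of the estimate-then-restore pattern the LF-DEST summary describes: a small network predicts a compact degradation code from the input, and the SR branch is conditioned on that code. Both modules are generic placeholders, not LF-DEST's actual design.

```python
# Generic blind-SR pattern: estimate a degradation code, then modulate
# the restorer's features with it. Placeholder modules, not LF-DEST's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DegradationEstimator(nn.Module):
    """Predicts a compact degradation code from the LR input."""
    def __init__(self, channels=3, code_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, code_dim))

    def forward(self, x):
        return self.net(x)

class ConditionedSR(nn.Module):
    """SR head whose features are scaled by the degradation code."""
    def __init__(self, channels=3, feats=32, code_dim=16, scale=2):
        super().__init__()
        self.scale = scale
        self.head = nn.Conv2d(channels, feats, 3, padding=1)
        self.mod = nn.Linear(code_dim, feats)
        self.tail = nn.Conv2d(feats, channels, 3, padding=1)

    def forward(self, x, code):
        f = F.relu(self.head(x)) * self.mod(code)[:, :, None, None]
        up = F.interpolate(f, scale_factor=self.scale, mode="bilinear")
        return self.tail(up)

lr = torch.rand(1, 3, 32, 32)
code = DegradationEstimator()(lr)
hr = ConditionedSR()(lr, code)  # -> torch.Size([1, 3, 64, 64])
```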
- PU-Ray: Domain-Independent Point Cloud Upsampling via Ray Marching on Neural Implicit Surface [5.78575346449322]
We propose a new ray-based upsampling approach with an arbitrary rate, where a depth prediction is made for each query ray and its corresponding patch.
Our novel method simulates the sphere-tracing ray marching algorithm on the neural implicit surface defined with an unsigned distance function (UDF); a toy sphere-tracing loop is sketched after this entry.
The rule-based mid-point query sampling method generates more evenly distributed points without requiring an end-to-end model trained using a nearest-neighbor-based reconstruction loss function.
arXiv Detail & Related papers (2023-10-12T22:45:03Z)
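A toy version of the sphere-tracing loop PU-Ray's summary references: each query ray steps forward by the unsigned distance value, which safely bounds the gap to the surface, until it converges. The analytic sphere UDF here stands in for the learned neural UDF.

```python
# Sphere tracing on an unsigned distance function (UDF): march each ray
# by the distance value until it lands on the surface. The analytic
# sphere UDF is a stand-in for a learned neural UDF.
import torch

def udf_sphere(p, radius=1.0):
    # Unsigned distance to a sphere centered at the origin.
    return (torch.linalg.norm(p, dim=-1) - radius).abs()

def sphere_trace(origin, direction, udf, max_steps=64, eps=1e-4):
    t = torch.zeros(origin.shape[:-1])
    for _ in range(max_steps):
        p = origin + t[..., None] * direction
        d = udf(p)
        if (d < eps).all():  # every ray has reached the surface
            break
        t = t + d            # safe step: UDF bounds the surface distance
    return origin + t[..., None] * direction

origins = torch.tensor([[0.0, 0.0, -3.0]])
dirs = torch.tensor([[0.0, 0.0, 1.0]])  # unit-length view rays
print(sphere_trace(origins, dirs, udf_sphere))  # approx. [0., 0., -1.]
```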
- Expanding Synthetic Real-World Degradations for Blind Video Super Resolution [3.474523163017713]
Video super-resolution (VSR) techniques have drastically improved over the last few years and shown impressive performance on synthetic data.
However, their performance on real-world video data suffers because of the complexity of real-world degradations and misaligned video frames.
In this paper, we propose synthesizing real-world degradations on synthetic training datasets (a generic degradation recipe is sketched after this entry).
arXiv Detail & Related papers (2023-05-04T08:58:31Z)
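A hedged sketch of one common way to synthesize such degradations on clean frames: random Gaussian blur, bicubic downscaling, and additive noise. The paper's exact pipeline is not given in the summary, so the recipe and parameters below are illustrative.

```python
# Generic degradation recipe for clean video frames: random blur,
# downscaling, additive noise. Illustrative, not the paper's pipeline.
import torch
import torch.nn.functional as F

def gaussian_kernel(size=7, sigma=1.5):
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def degrade(frames, scale=4, sigma_range=(0.5, 3.0), noise_std=0.02):
    """frames: (T, C, H, W) clean clip -> degraded low-resolution clip."""
    sigma = torch.empty(1).uniform_(*sigma_range).item()
    k = gaussian_kernel(sigma=sigma).repeat(frames.shape[1], 1, 1, 1)
    blurred = F.conv2d(frames, k, padding=3, groups=frames.shape[1])
    lr = F.interpolate(blurred, scale_factor=1 / scale, mode="bicubic",
                       align_corners=False)
    return (lr + noise_std * torch.randn_like(lr)).clamp(0, 1)

hr = torch.rand(5, 3, 128, 128)  # a 5-frame clip
lr = degrade(hr)                 # -> torch.Size([5, 3, 32, 32])
```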
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- A Large-Scale Outdoor Multi-modal Dataset and Benchmark for Novel View Synthesis and Implicit Scene Reconstruction [26.122654478946227]
Neural Radiance Fields (NeRF) has achieved impressive results in single object scene reconstruction and novel view synthesis.
There is no unified outdoor scene dataset for large-scale NeRF evaluation due to expensive data acquisition and calibration costs.
In this paper, we propose a large-scale outdoor multi-modal dataset, OMMO dataset, containing complex land objects and scenes with calibrated images, point clouds and prompt annotations.
arXiv Detail & Related papers (2023-01-17T10:15:32Z)
- IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis [90.03590032170169]
We present intrinsic neural radiance fields, dubbed IntrinsicNeRF, which introduce intrinsic decomposition into the NeRF-based neural rendering method.
Our experiments and editing samples on both object-specific/room-scale scenes and synthetic/real-world data demonstrate that we can obtain consistent intrinsic decomposition results (a minimal decomposition head is sketched after this entry).
arXiv Detail & Related papers (2022-10-02T22:45:11Z)
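A minimal sketch of an intrinsic decomposition head of the kind the IntrinsicNeRF summary describes: the network predicts reflectance and shading separately and composes color as their product, so either factor can be edited on its own. Layer sizes here are assumptions, not the paper's.

```python
# Intrinsic decomposition head: color = reflectance (albedo) * shading.
# Illustrative layer sizes; not IntrinsicNeRF's actual architecture.
import torch
import torch.nn as nn

class IntrinsicHead(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.albedo = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                    nn.Linear(64, 3), nn.Sigmoid())
        self.shading = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 1), nn.Softplus())

    def forward(self, feat):
        a = self.albedo(feat)   # view-independent reflectance in [0, 1]
        s = self.shading(feat)  # non-negative scalar shading per sample
        return a * s            # composed radiance; edit a or s separately

head = IntrinsicHead()
rgb = head(torch.rand(1024, 64))  # per-sample colors along rays
```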
- RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis [104.53930611219654]
We present a large-scale synthetic dataset for novel view synthesis consisting of 300k images rendered from nearly 2000 complex scenes.
The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis.
Using 4 distinct sources of high-quality 3D meshes, the scenes of our dataset exhibit challenging variations in camera views, lighting, shape, materials, and textures.
arXiv Detail & Related papers (2022-05-14T13:15:32Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF), which predicts per-point density and color with a multi-layer perceptron (a super-sampling sketch follows this entry).
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
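A hedged sketch of the super-sampling idea behind NeRF-SR: render several jittered sub-pixel rays per output pixel and average them for an anti-aliased high-resolution estimate. The `render_ray` function below is a placeholder for a full NeRF ray march.

```python
# Super-sampling: average several jittered sub-pixel rays per pixel.
# `render_ray` is a placeholder for a full NeRF ray march.
import torch

def render_ray(uv):
    # Placeholder radiance: a smooth function of normalized pixel coords.
    return torch.stack([uv[..., 0], uv[..., 1], uv.sum(-1) / 2], dim=-1)

def supersample(h, w, samples=4):
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs, ys], dim=-1)  # (H, W, 2) pixel corners
    acc = torch.zeros(h, w, 3)
    for _ in range(samples):
        jitter = torch.rand(h, w, 2)     # sub-pixel offsets in [0, 1)
        uv = (pix + jitter) / torch.tensor([w, h], dtype=torch.float32)
        acc += render_ray(uv)
    return acc / samples                 # anti-aliased HR estimate

img = supersample(64, 64)  # -> torch.Size([64, 64, 3])
```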
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture (a minimal learned sampler is sketched after this entry).
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
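A minimal sketch of such a learned sample proposer: a differentiable MLP maps each ray's coarse densities to monotone fine-sample depths, so the proposal is trainable end to end. The architecture is an assumption, not the paper's.

```python
# Learned sample proposer: map coarse densities to sorted fine depths
# via softmax increments. Illustrative architecture only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedSampler(nn.Module):
    def __init__(self, n_coarse=64, n_fine=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_coarse, 128), nn.ReLU(),
                                 nn.Linear(128, n_fine))

    def forward(self, coarse_density, near=2.0, far=6.0):
        # Positive increments summing to the ray extent give sorted
        # depths, differentiable w.r.t. the coarse densities.
        inc = F.softmax(self.net(coarse_density), dim=-1) * (far - near)
        return near + torch.cumsum(inc, dim=-1)  # (rays, n_fine)

sampler = LearnedSampler()
t_fine = sampler(torch.rand(1024, 64))  # monotone depths in (near, far]
```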
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.