ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real
Novel View Synthesis via Contrastive Learning
- URL: http://arxiv.org/abs/2303.11052v3
- Date: Fri, 23 Jun 2023 02:09:42 GMT
- Title: ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real
Novel View Synthesis via Contrastive Learning
- Authors: Hao Yang, Lanqing Hong, Aoxue Li, Tianyang Hu, Zhenguo Li, Gim Hee
Lee, Liwei Wang
- Abstract summary: We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
- Score: 102.46382882098847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although many recent works have investigated generalizable NeRF-based novel
view synthesis for unseen scenes, they seldom consider the synthetic-to-real
generalization, which is desired in many practical applications. In this work,
we first investigate the effects of synthetic data in synthetic-to-real novel
view synthesis and surprisingly observe that models trained with synthetic data
tend to produce sharper but less accurate volume densities. For pixels where
the volume densities are correct, fine-grained details will be obtained.
Otherwise, severe artifacts will be produced. To maintain the advantages of
using synthetic data while avoiding its negative effects, we propose to
introduce geometry-aware contrastive learning to learn multi-view consistent
features with geometric constraints. Meanwhile, we adopt cross-view attention
to further enhance the geometry perception of features by querying features
across input views. Experiments demonstrate that under the synthetic-to-real
setting, our method can render images with higher quality and better
fine-grained details, outperforming existing generalizable novel view synthesis
methods in terms of PSNR, SSIM, and LPIPS. When trained on real data, our
method also achieves state-of-the-art results.
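To make the contrastive idea concrete, below is a minimal sketch of a geometry-aware contrastive (InfoNCE) loss between features sampled from two source views. The correspondence indices are an assumption for illustration: in practice they would come from the available geometry, e.g. by reprojecting pixels of one view into another with known camera poses and estimated depths, which is what makes the loss geometry-aware. The exact sampling and weighting used in ContraNeRF may differ.
```python
# Sketch of a geometry-aware contrastive (InfoNCE) loss across two source views.
import torch
import torch.nn.functional as F


def geometry_aware_contrastive_loss(feat_a, feat_b, idx_a, idx_b, temperature=0.1):
    """feat_a, feat_b: (N, C) per-pixel features sampled from views A and B.
    idx_a, idx_b: (M,) long tensors; (idx_a[k], idx_b[k]) is a geometrically
    corresponding pixel pair. Each anchor pixel of A treats its correspondence
    in B as the positive and all other pixels of B as negatives."""
    za = F.normalize(feat_a[idx_a], dim=-1)   # (M, C) anchor features
    zb = F.normalize(feat_b, dim=-1)          # (N, C) candidate features
    logits = za @ zb.t() / temperature        # (M, N) scaled cosine similarities
    return F.cross_entropy(logits, idx_b)     # positives sit at columns idx_b
```
Symmetrizing the loss by swapping the roles of the two views and averaging over sampled view pairs is the natural extension.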
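The cross-view attention component can likewise be sketched as each view's feature tokens querying the tokens of all other input views. The module below is a generic multi-head attention wrapper with assumed shapes and layer choices; the paper's actual module may add geometric cues, positional encodings, and a different normalization scheme.
```python
# Rough sketch of cross-view attention: tokens of each view attend to the
# concatenated tokens of the remaining input views.
import torch
import torch.nn as nn


class CrossViewAttention(nn.Module):
    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, view_tokens):
        """view_tokens: (V, N, C) feature tokens for V input views (V >= 2)."""
        V, _, _ = view_tokens.shape
        refined = []
        for v in range(V):
            query = view_tokens[v:v + 1]                            # (1, N, C)
            others = torch.cat(
                [view_tokens[u:u + 1] for u in range(V) if u != v], dim=1
            )                                                       # (1, (V-1)*N, C)
            attended, _ = self.attn(query, others, others)          # query other views
            refined.append(self.norm(query + attended))             # residual + norm
        return torch.cat(refined, dim=0)                            # (V, N, C)


# Example: refine features of 3 views, each with 1024 pixel tokens of dim 64.
refined = CrossViewAttention(dim=64)(torch.randn(3, 1024, 64))
```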
Related papers
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- UAV-Sim: NeRF-based Synthetic Data Generation for UAV-based Perception [62.71374902455154]
We leverage recent advancements in neural rendering to improve static and dynamic novel-view UAV-based image rendering.
We demonstrate a considerable performance boost when a state-of-the-art detection model is optimized primarily on hybrid sets of real and synthetic data.
arXiv Detail & Related papers (2023-10-25T00:20:37Z)
- You Don't Have to Be Perfect to Be Amazing: Unveil the Utility of Synthetic Images [2.0790547421662064]
We have established a comprehensive set of evaluators for synthetic images, including fidelity, variety, privacy, and utility.
By analyzing more than 100k chest X-ray images and their synthetic copies, we have demonstrated that there is an inevitable trade-off between synthetic image fidelity, variety, and privacy.
arXiv Detail & Related papers (2023-05-25T13:47:04Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascaded and generalizable neural radiance fields method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- ProbNVS: Fast Novel View Synthesis with Learned Probability-Guided Sampling [42.37704606186928]
We propose to build a novel view synthesis framework based on learned multi-view stereo (MVS) priors.
We show that our method achieves 15 to 40 times faster rendering compared to state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-07T14:45:42Z)
- A Scaling Law for Synthetic-to-Real Transfer: A Measure of Pre-Training [52.93808218720784]
Synthetic-to-real transfer learning is a framework in which we pre-train models with synthetically generated images and ground-truth annotations for real tasks.
Although synthetic images overcome the data scarcity issue, it remains unclear how the fine-tuning performance scales with pre-trained models.
We observe a simple and general scaling law that consistently describes learning curves in various tasks, models, and complexities of synthesized pre-training data.
arXiv Detail & Related papers (2021-08-25T02:29:28Z)
- Synthetic Data for Model Selection [2.4499092754102874]
We show that synthetic data can be beneficial for model selection.
We introduce a novel method to calibrate the synthetic error estimation to fit that of the real domain.
arXiv Detail & Related papers (2021-05-03T09:52:03Z) - Synthetic Data and Hierarchical Object Detection in Overhead Imagery [0.0]
We develop novel synthetic data generation and augmentation techniques for enhancing low/zero-sample learning in satellite imagery.
To test the effectiveness of synthetic imagery, we employ it in the training of detection models and our two-stage model, and evaluate the resulting models on real satellite images.
arXiv Detail & Related papers (2021-01-29T22:52:47Z)