CRAYM: Neural Field Optimization via Camera RAY Matching
- URL: http://arxiv.org/abs/2412.01618v1
- Date: Mon, 02 Dec 2024 15:39:09 GMT
- Title: CRAYM: Neural Field Optimization via Camera RAY Matching
- Authors: Liqiang Lin, Wenpeng Wu, Chi-Wing Fu, Hao Zhang, Hui Huang
- Abstract summary: We introduce camera ray matching (CRAYM) into the joint optimization of camera poses and neural fields from multi-view images.
We formulate our per-ray optimization and matched ray coherence by focusing on camera rays passing through keypoints in the input images.
- Score: 48.25100687172752
- License:
- Abstract: We introduce camera ray matching (CRAYM) into the joint optimization of camera poses and neural fields from multi-view images. The optimized field, referred to as a feature volume, can be "probed" by the camera rays for novel view synthesis (NVS) and 3D geometry reconstruction. One key reason for matching camera rays, instead of pixels as in prior works, is that the camera rays can be parameterized by the feature volume to carry both geometric and photometric information. Multi-view consistencies involving the camera rays and scene rendering can be naturally integrated into the joint optimization and network training, to impose physically meaningful constraints to improve the final quality of both the geometric reconstruction and photorealistic rendering. We formulate our per-ray optimization and matched ray coherence by focusing on camera rays passing through keypoints in the input images to elevate both the efficiency and accuracy of scene correspondences. Accumulated ray features along the feature volume provide a means to discount the coherence constraint amid erroneous ray matching. We demonstrate the effectiveness of CRAYM for both NVS and geometry reconstruction, over dense- or sparse-view settings, with qualitative and quantitative comparisons to state-of-the-art alternatives.
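For intuition, the sketch below mirrors the kind of joint optimization the abstract describes: camera pose corrections and a small neural field are updated together from a photometric rendering loss, plus a coherence term on features accumulated along rays that pass through matched keypoints in two views. It is a minimal illustration, not the authors' implementation; the field architecture, pose parameterization, sampling range, and loss weight are placeholder assumptions.

```python
import torch
import torch.nn as nn

class FeatureVolume(nn.Module):
    """Toy neural field: maps a 3D point to (density, color, feature)."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 4 + feat_dim))

    def forward(self, x):                          # x: (N, S, 3) sample points
        out = self.mlp(x)
        sigma = torch.relu(out[..., 0])            # density
        rgb = torch.sigmoid(out[..., 1:4])         # color
        return sigma, rgb, out[..., 4:]            # per-point feature

def apply_pose(delta, origins, dirs):
    """First-order pose correction: small axis-angle rotation plus translation."""
    w, t = delta[:3], delta[3:]
    zero = torch.zeros((), device=delta.device)
    W = torch.stack([torch.stack([zero, -w[2], w[1]]),
                     torch.stack([w[2], zero, -w[0]]),
                     torch.stack([-w[1], w[0], zero])])
    R = torch.eye(3, device=delta.device) + W      # valid for small rotations only
    return origins @ R.T + t, dirs @ R.T

def render_ray(field, origins, dirs, n_samples=32):
    """Accumulate color and features along each ray with simple alpha compositing."""
    t = torch.linspace(0.05, 2.0, n_samples, device=origins.device)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]
    sigma, rgb, feat = field(pts)
    alpha = 1.0 - torch.exp(-sigma * (t[1] - t[0]))
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    w = alpha * trans                              # per-sample weights
    color = (w[..., None] * rgb).sum(dim=1)
    ray_feat = (w[..., None] * feat).sum(dim=1)    # accumulated ray feature
    return color, ray_feat

field = FeatureVolume()
pose_a = nn.Parameter(torch.zeros(6))              # per-view pose corrections (placeholders)
pose_b = nn.Parameter(torch.zeros(6))
opt = torch.optim.Adam(list(field.parameters()) + [pose_a, pose_b], lr=1e-3)

def step(o_a, d_a, gt_a, o_b, d_b, gt_b):
    """o_*, d_*: origins/directions of rays through matched keypoints; gt_*: pixel colors."""
    c_a, f_a = render_ray(field, *apply_pose(pose_a, o_a, d_a))
    c_b, f_b = render_ray(field, *apply_pose(pose_b, o_b, d_b))
    photo = ((c_a - gt_a) ** 2).mean() + ((c_b - gt_b) ** 2).mean()
    # Matched-ray coherence: features accumulated along matched rays should agree;
    # the match-reliability discounting mentioned in the abstract is omitted here.
    coherence = (1.0 - torch.cosine_similarity(f_a, f_b, dim=-1)).mean()
    loss = photo + 0.1 * coherence
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Restricting the coherence term to keypoint rays keeps the number of matched-ray constraints small, which is the efficiency argument the abstract makes; the sketch only mirrors that structure.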
Related papers
- Self-Calibrating Gaussian Splatting for Large Field of View Reconstruction [30.529707438964596]
We present a self-calibrating framework that jointly optimizes camera parameters, lens distortion and 3D Gaussian representations.
Our technique enables high-quality scene reconstruction from large field-of-view (FOV) imagery taken with wide-angle lenses, allowing the scene to be modeled from a smaller number of images.
arXiv Detail & Related papers (2025-02-13T18:15:10Z)
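As a loose illustration of the self-calibration idea in the Gaussian-splatting entry above (not that paper's model), the snippet below makes a polynomial radial distortion and the per-view poses learnable, so gradients from a rendering loss refine them together with the scene representation. The `splat_render` callable, the k1/k2 distortion model, and all parameter names are assumptions.

```python
import torch
import torch.nn as nn

class RadialDistortion(nn.Module):
    """Learnable k1/k2 polynomial distortion on normalized image coordinates."""
    def __init__(self):
        super().__init__()
        self.k = nn.Parameter(torch.zeros(2))   # start from an undistorted pinhole model

    def forward(self, xy):                      # xy: (N, 2) normalized coordinates
        r2 = (xy ** 2).sum(dim=-1, keepdim=True)
        return xy * (1.0 + self.k[0] * r2 + self.k[1] * r2 ** 2)

def calibrate(splat_render, gaussians, poses, distortion, images, iters=1000):
    """Joint refinement loop. `splat_render` stands in for a differentiable
    Gaussian-splatting renderer (hypothetical), `gaussians` is an nn.Module holding
    the scene, `poses` an (n_views, 6) nn.Parameter, `images` the training views."""
    params = list(gaussians.parameters()) + list(distortion.parameters()) + [poses]
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(iters):
        i = torch.randint(len(images), (1,)).item()
        pred = splat_render(gaussians, poses[i], distortion)  # distortion applied at projection
        loss = (pred - images[i]).abs().mean()                # photometric L1 loss
        opt.zero_grad()
        loss.backward()
        opt.step()
```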
- Improving Robustness for Joint Optimization of Camera Poses and Decomposed Low-Rank Tensorial Radiance Fields [26.4340697184666]
We propose an algorithm that allows joint refinement of camera poses and scene geometry represented by a decomposed low-rank tensor.
We also propose techniques of smoothed 2D supervision, randomly scaled kernel parameters, and an edge-guided loss mask.
arXiv Detail & Related papers (2024-02-20T18:59:02Z)
- Towards Scalable Multi-View Reconstruction of Geometry and Materials [27.660389147094715]
We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes.
The inputs are high-resolution RGBD images captured by a mobile, hand-held capture system with point lights for active illumination.
arXiv Detail & Related papers (2023-06-06T15:07:39Z)
- Neural Lens Modeling [50.57409162437732]
NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
arXiv Detail & Related papers (2023-04-10T20:09:17Z)
- Enhanced Stable View Synthesis [86.69338893753886]
We introduce an approach to enhance novel view synthesis from images taken with a freely moving camera.
The introduced approach focuses on outdoor scenes where recovering an accurate geometric scaffold and camera poses is challenging.
arXiv Detail & Related papers (2023-03-30T01:53:14Z)
- Adaptive Joint Optimization for 3D Reconstruction with Differentiable Rendering [22.2095090385119]
Given an imperfect reconstructed 3D model, most previous methods have focused on the refinement of either geometry, texture, or camera pose.
We propose a novel optimization approach based on differentiable rendering, which integrates the optimization of camera pose, geometry, and texture into a unified framework.
Using differentiable rendering, an image-level adversarial loss is applied to further improve the 3D model, making it more photorealistic.
arXiv Detail & Related papers (2022-08-15T04:32:41Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blurs from camera shakes and object motions.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
arXiv Detail & Related papers (2021-03-07T04:33:13Z)
- Large Scale Photometric Bundle Adjustment [9.184692492399686]
Offline 3D reconstruction from internet images has not yet benefited from a joint, photometric optimization over dense geometry and camera parameters.
This work presents a framework for jointly optimizing millions of scene points and hundreds of camera poses and intrinsics.
The improvement in metric reconstruction accuracy that it confers over feature-based bundle adjustment is demonstrated on the large-scale Tanks & Temples benchmark.
arXiv Detail & Related papers (2020-08-26T18:49:30Z)
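To make the photometric bundle-adjustment objective from the last entry concrete, here is a toy residual (a sketch under assumed conventions, not that paper's formulation): every 3D point is projected into each camera with a pinhole model, and the bilinearly sampled image intensity is compared against the point's reference brightness.

```python
import torch
import torch.nn.functional as F

def project(points, pose, K):
    """Pinhole projection of world points (N, 3) with extrinsics pose=(R|t) and intrinsics K."""
    R, t = pose[:, :3], pose[:, 3]
    cam = points @ R.T + t                       # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]                # perspective divide
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return torch.stack([uv[:, 0] * fx + cx, uv[:, 1] * fy + cy], dim=-1)

def photometric_loss(points, intensities, poses, Ks, images):
    """Sum over cameras of squared differences between the sampled image intensity
    and each point's reference brightness. All arguments are placeholders:
    points (N, 3), intensities (N,), poses list of (3, 4), Ks list of (3, 3),
    images list of (H, W) grayscale tensors."""
    loss = 0.0
    for pose, K, img in zip(poses, Ks, images):
        uv = project(points, pose, K)
        h, w = img.shape
        # Normalize pixel coordinates to [-1, 1] for bilinear lookup via grid_sample.
        grid = torch.stack([uv[:, 0] / (w - 1) * 2 - 1,
                            uv[:, 1] / (h - 1) * 2 - 1], dim=-1)
        sampled = F.grid_sample(img[None, None], grid[None, :, None, :],
                                align_corners=True).reshape(-1)
        loss = loss + ((sampled - intensities) ** 2).mean()
    return loss

# In a full photometric BA, `points`, `intensities`, `poses`, and `Ks` would be
# torch Parameters so a single optimizer refines dense geometry and cameras jointly.
```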