End-to-end Full Projector Compensation
- URL: http://arxiv.org/abs/2008.00965v3
- Date: Thu, 7 Jan 2021 18:49:49 GMT
- Title: End-to-end Full Projector Compensation
- Authors: Bingyao Huang, Tao Sun, Haibin Ling
- Abstract summary: Full projector compensation aims to modify a projector input image to compensate for both geometric and photometric disturbance of the projection surface.
In this paper, we propose the first end-to-end differentiable solution, named CompenNeSt++, to solve the two problems jointly.
- Score: 81.19324259967742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Full projector compensation aims to modify a projector input image to
compensate for both geometric and photometric disturbance of the projection
surface. Traditional methods usually solve the two parts separately and may
suffer from suboptimal solutions. In this paper, we propose the first
end-to-end differentiable solution, named CompenNeSt++, to solve the two
problems jointly. First, we propose a novel geometric correction subnet, named
WarpingNet, which is designed with a cascaded coarse-to-fine structure to learn
the sampling grid directly from sampling images. Second, we propose a novel
photometric compensation subnet, named CompenNeSt, which is designed with a
siamese architecture to capture the photometric interactions between the
projection surface and the projected images, and to use such information to
compensate the geometrically corrected images. By concatenating WarpingNet with
CompenNeSt, CompenNeSt++ accomplishes full projector compensation and is
end-to-end trainable. Third, to improve practicability, we propose a novel
synthetic data-based pre-training strategy to significantly reduce the number
of training images and training time. Moreover, we construct the first
setup-independent full compensation benchmark to facilitate future studies. In
thorough experiments, our method shows clear advantages over prior art, achieving
promising compensation quality while remaining practically convenient.
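The geometric half of the pipeline hinges on a differentiable warp: WarpingNet predicts a sampling grid, and the input image is bilinearly resampled at those coordinates. Below is a minimal numpy sketch of that core operation (not the authors' implementation; the grid convention of normalized coordinates in [-1, 1] is an assumption borrowed from common deep-learning practice):

```python
import numpy as np

def grid_sample(img, grid):
    """Bilinearly sample a grayscale image (H, W) at normalized
    coordinates grid (H_out, W_out, 2) in [-1, 1] -- the kind of
    differentiable warp a WarpingNet-style subnet would learn."""
    H, W = img.shape
    # map normalized coords to pixel coords
    x = (grid[..., 0] + 1) * 0.5 * (W - 1)
    y = (grid[..., 1] + 1) * 0.5 * (H - 1)
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    # bilinear blend of the four neighbouring pixels
    return (img[y0, x0] * (1 - wx) * (1 - wy)
            + img[y0, x0 + 1] * wx * (1 - wy)
            + img[y0 + 1, x0] * (1 - wx) * wy
            + img[y0 + 1, x0 + 1] * wx * wy)

# sanity check: an identity grid leaves the image unchanged
img = np.arange(16, dtype=float).reshape(4, 4)
ys, xs = np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4),
                     indexing="ij")
grid = np.stack([xs, ys], axis=-1)
assert np.allclose(grid_sample(img, grid), img)
```

Because every step is a smooth function of the grid, gradients can flow from a photometric loss back into the grid predictor, which is what makes concatenating the warp with a compensation subnet end-to-end trainable.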
Related papers
- CompenHR: Efficient Full Compensation for High-resolution Projector [68.42060996280064]
Full projector compensation is a practical task of projector-camera systems.
It aims to find a projector input image, named compensation image, such that when projected it cancels the geometric and photometric distortions.
State-of-the-art methods use deep learning to address this problem and show promising performance for low-resolution setups.
However, directly applying deep learning to high-resolution setups is impractical due to the long training time and high memory cost.
arXiv Detail & Related papers (2023-11-22T14:13:27Z)
- Revealing the preference for correcting separated aberrations in joint optic-image design [19.852225245159598]
We characterize the optics with separated aberrations to achieve efficient joint design of complex systems such as smartphones and drones.
An image simulation system is presented to reproduce the genuine imaging procedure of lenses with large field-of-views.
Experiments reveal that the preference for correcting separated aberrations in joint design is as follows: longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, field curvature, and coma, with astigmatism coming last.
arXiv Detail & Related papers (2023-09-08T14:12:03Z)
- Neural Projection Mapping Using Reflectance Fields [11.74757574153076]
We introduce a projector into a neural reflectance field, enabling projector calibration and photorealistic light editing.
Our neural field consists of three neural networks, estimating geometry, material, and transmittance.
We believe that neural projection mapping opens up the door to novel and exciting downstream tasks, through the joint optimization of the scene and projection images.
arXiv Detail & Related papers (2023-06-11T05:33:10Z)
- MS-PS: A Multi-Scale Network for Photometric Stereo With a New Comprehensive Training Dataset [0.0]
The photometric stereo (PS) problem consists of reconstructing the 3D surface of an object.
We propose a multi-scale architecture for PS which, combined with a new dataset, yields state-of-the-art results.
arXiv Detail & Related papers (2022-11-25T14:01:54Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- Content-aware Warping for View Synthesis [110.54435867693203]
We propose content-aware warping, which adaptively learns the weights for pixels of a relatively large neighborhood from their contextual information via a lightweight neural network.
Based on this learnable warping module, we propose a new end-to-end learning-based framework for novel view synthesis from two source views.
Experimental results on structured light field datasets with wide baselines and unstructured multi-view datasets show that the proposed method significantly outperforms state-of-the-art methods both quantitatively and visually.
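The idea of adaptively weighting a neighborhood can be sketched in a few lines: for each target pixel, candidate source pixels are blended with softmax weights that, in the paper, a lightweight network predicts from context. The sketch below supplies the logits directly; the function names and shapes are hypothetical, not from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along an axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def content_aware_blend(neighbors, scores):
    """Blend K candidate source-pixel values per target pixel.
    neighbors: (N, K) pixel values from a K-pixel neighborhood.
    scores:    (N, K) logits a small context network would predict
               (supplied directly here for illustration)."""
    w = softmax(scores, axis=-1)
    return (neighbors * w).sum(axis=-1)

vals = np.array([[10.0, 20.0, 30.0]])
# logits that strongly favour the middle candidate
logits = np.array([[0.0, 10.0, 0.0]])
out = content_aware_blend(vals, logits)
assert abs(out[0] - 20.0) < 0.1
```

Compared with a fixed bilinear kernel, letting the weights depend on content allows the warp to suppress candidates that straddle depth discontinuities, which is plausibly why such methods help on wide-baseline view synthesis.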
arXiv Detail & Related papers (2022-01-22T11:35:05Z)
- Spatial-Separated Curve Rendering Network for Efficient and High-Resolution Image Harmonization [59.19214040221055]
We propose a novel spatial-separated curve rendering network (S$^2$CRNet) for efficient and high-resolution image harmonization.
The proposed method reduces parameters by more than 90% compared with previous methods.
Our method runs smoothly on higher-resolution images in real time, more than 10$\times$ faster than existing methods.
arXiv Detail & Related papers (2021-09-13T07:20:16Z)
- Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.