CompenHR: Efficient Full Compensation for High-resolution Projector
- URL: http://arxiv.org/abs/2311.13409v2
- Date: Tue, 28 Nov 2023 12:12:46 GMT
- Title: CompenHR: Efficient Full Compensation for High-resolution Projector
- Authors: Yuxi Wang, Haibin Ling, Bingyao Huang
- Abstract summary: Full projector compensation is a practical task in projector-camera systems.
It aims to find a projector input image, named compensation image, such that when projected it cancels the geometric and photometric distortions.
State-of-the-art methods use deep learning to address this problem and show promising performance for low-resolution setups.
However, directly applying deep learning to high-resolution setups is impractical due to the long training time and high memory cost.
- Score: 68.42060996280064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Full projector compensation is a practical task in projector-camera systems.
It aims to find a projector input image, named compensation image, such that
when projected it cancels the geometric and photometric distortions due to the
physical environment and hardware. State-of-the-art methods use deep learning
to address this problem and show promising performance for low-resolution
setups. However, directly applying deep learning to high-resolution setups is
impractical due to the long training time and high memory cost. To address this
issue, this paper proposes a practical full compensation solution. Firstly, we
design an attention-based grid refinement network to improve geometric
correction quality. Secondly, we integrate a novel sampling scheme into an
end-to-end compensation network to alleviate computation and introduce
attention blocks to preserve key features. Finally, we construct a benchmark
dataset for high-resolution projector full compensation. In experiments, our
method demonstrates clear advantages in both efficiency and quality.
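The abstract mentions attention blocks used to preserve key features in the compensation network. As a rough illustration only, a minimal squeeze-and-excitation-style channel-attention gate can be sketched in NumPy (all names, shapes, and the reduction ratio below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(feat, w1, w2):
    """Channel-attention gate (squeeze-and-excitation style sketch).

    feat: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r).
    Returns the feature map reweighted per channel by a learned gate.
    """
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # gate in (0, 1) -> (C,)
    return feat * excite[:, None, None]                   # reweight channels

# Hypothetical shapes: 8 channels, reduction ratio 2, 16x16 features.
rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = attention_gate(feat, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because the gate lies strictly between 0 and 1, the block can only attenuate channels, never amplify them; this is one simple way a network can emphasize informative features while suppressing the rest.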
Related papers
- Task-driven single-image super-resolution reconstruction of document scans [2.8391355909797644]
We investigate the possibility of employing super-resolution as a preprocessing step to improve optical character recognition from document scans.
To achieve that, we propose to train deep networks for single-image super-resolution in a task-driven way to make them better adapted for the purpose of text detection.
arXiv Detail & Related papers (2024-07-12T05:18:26Z)
- FuseSR: Super Resolution for Real-time Rendering through Efficient Multi-resolution Fusion [38.67110413800048]
One of the most popular solutions is to render images at a low resolution to reduce rendering overhead.
In this paper, we propose an efficient and effective super-resolution method that predicts high-quality upsampled reconstructions.
Experiments show that our method is able to produce temporally consistent reconstructions in $4\times 4$ and even challenging $8\times 8$ upsampling cases at 4K resolution with real-time performance.
arXiv Detail & Related papers (2023-10-15T04:01:05Z)
- Rethinking Resolution in the Context of Efficient Video Recognition [49.957690643214576]
Cross-resolution KD (ResKD) is a simple but effective method to boost recognition accuracy on low-resolution frames.
We extensively demonstrate its effectiveness over state-of-the-art architectures, i.e., 3D-CNNs and Video Transformers.
arXiv Detail & Related papers (2022-09-26T15:50:44Z)
- Dynamic Low-Resolution Distillation for Cost-Efficient End-to-End Text Spotting [49.33891486324731]
We propose a novel cost-efficient Dynamic Low-resolution Distillation (DLD) text spotting framework.
It infers images at different small but still recognizable resolutions to achieve a better balance between accuracy and efficiency.
The proposed method can be optimized end-to-end and adopted in any current text spotting framework to improve its practicality.
arXiv Detail & Related papers (2022-07-14T06:49:59Z)
- Total Variation Optimization Layers for Computer Vision [130.10996341231743]
We propose total variation (TV) minimization as a layer for computer vision.
Motivated by the success of total variation in image processing, we hypothesize that TV as a layer provides useful inductive bias for deep-nets.
We study this hypothesis on five computer vision tasks: image classification, weakly supervised object localization, edge-preserving smoothing, edge detection, and image denoising.
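As a concrete reminder of the quantity such a layer minimizes, here is a minimal anisotropic total-variation function in NumPy (an illustrative sketch of the TV penalty itself, not the paper's optimization layer):

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of absolute vertical and horizontal
    differences between neighboring pixels."""
    dv = np.abs(np.diff(img, axis=0)).sum()  # row-to-row differences
    dh = np.abs(np.diff(img, axis=1)).sum()  # column-to-column differences
    return dv + dh

flat = np.ones((4, 4))
noisy = flat.copy()
noisy[2, 2] = 5.0  # a single outlier pixel

print(total_variation(flat))   # 0.0 -- constant image has zero TV
print(total_variation(noisy))  # 16.0 -- the outlier adds |4| on each of its 4 edges
```

A constant image has zero TV while any local deviation raises it, which is why minimizing TV acts as an edge-preserving smoothness prior.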
arXiv Detail & Related papers (2022-04-07T17:59:27Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Analysis and evaluation of Deep Learning based Super-Resolution algorithms to improve performance in Low-Resolution Face Recognition [0.0]
Super-resolution algorithms may be able to recover the discriminant properties of the subjects involved.
This project aimed at evaluating and adapting different deep neural network architectures for the task of face super-resolution.
Experiments showed that general super-resolution architectures might enhance face verification performance of deep neural networks trained on high-resolution faces.
arXiv Detail & Related papers (2021-01-19T02:41:57Z)
- End-to-end Full Projector Compensation [81.19324259967742]
Full projector compensation aims to modify a projector input image to compensate for both geometric and photometric disturbance of the projection surface.
In this paper, we propose the first end-to-end differentiable solution, named CompenNeSt++, to solve the two problems jointly.
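Geometric compensation in such end-to-end pipelines typically rests on differentiable image sampling. The following NumPy sketch shows what a `grid_sample`-style bilinear warp computes (a simplified, non-differentiable stand-in for illustration; not CompenNeSt++ code, and all shapes are assumed):

```python
import numpy as np

def bilinear_warp(img, grid):
    """Sample img at continuous (y, x) locations given by grid.

    img: (H, W) grayscale image; grid: (H, W, 2) float coordinates.
    Each output pixel is a bilinear blend of its 4 nearest source pixels.
    """
    H, W = img.shape
    y = np.clip(grid[..., 0], 0, H - 1)
    x = np.clip(grid[..., 1], 0, W - 1)
    y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = y - y0; wx = x - x0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
ys, xs = np.meshgrid(np.arange(4.0), np.arange(4.0), indexing="ij")
identity = np.stack([ys, xs], axis=-1)   # identity grid reproduces the image
shifted = identity.copy()
shifted[..., 1] += 0.5                   # sample half a pixel to the right
print(bilinear_warp(img, shifted)[0])    # first row, shifted: [0.5 1.5 2.5 3. ]
```

Because the blend weights are smooth functions of the grid coordinates, a framework implementation of this operation (e.g. bilinear sampling in an autodiff library) lets gradients flow through the geometric warp, which is what makes joint geometric and photometric training possible.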
arXiv Detail & Related papers (2020-07-30T18:23:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.