LAPIG: Language Guided Projector Image Generation with Surface Adaptation and Stylization
- URL: http://arxiv.org/abs/2503.12173v1
- Date: Sat, 15 Mar 2025 15:31:04 GMT
- Title: LAPIG: Language Guided Projector Image Generation with Surface Adaptation and Stylization
- Authors: Yuchen Deng, Haibin Ling, Bingyao Huang,
- Abstract summary: LAPIG takes the user text prompt as input and aims to transform the surface style using the projector. Projection surface adaptation (PSA) can generate compensable surface stylization. The generated image is projected for visually pleasing surface style morphing effects.
- Score: 54.291669057240476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose LAPIG, a language-guided projector image generation method with surface adaptation and stylization. LAPIG consists of a projector-camera system and a target textured projection surface. LAPIG takes a user text prompt as input and aims to transform the surface style using the projector. LAPIG's key challenge is that, due to the projector's physical brightness limitation and the surface texture, the viewer's perceived projection may suffer from color saturation and artifacts in both dark and bright regions, so that even with state-of-the-art projector compensation techniques the viewer may see clear surface texture-related artifacts. How to generate a projector image that follows the user's instruction while displaying minimal surface artifacts is therefore an open problem. To address this issue, we propose projection surface adaptation (PSA), which generates compensable surface stylizations. We first train two networks to simulate the projector compensation and project-and-capture processes; this allows us to find a satisfactory projector image without real project-and-capture and to use gradient descent for fast convergence. We then design content and saturation losses to guide the projector image generation, such that the generated image shows no clearly perceivable artifacts when projected. Finally, the generated image is projected for visually pleasing surface style morphing effects. The source code and video are available on the project page: https://Yu-chen-Deng.github.io/LAPIG/.
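To make the optimization concrete, below is a minimal PyTorch-style sketch of the gradient-descent loop the abstract describes. The network interfaces (`pac_net`, `comp_net`), the L1 content loss, the hinge-style saturation penalty, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of PSA-style projector image optimization, assuming two
# pretrained, frozen, differentiable simulators that map images in [0, 1]:
#   pac_net  - simulates project-and-capture (projector input -> camera view)
#   comp_net - simulates projector compensation (desired view -> projector input)
# All names, losses, and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def optimize_projector_image(stylized_target, pac_net, comp_net,
                             steps=500, lr=1e-2, sat_margin=0.02,
                             w_content=1.0, w_sat=10.0):
    # Initialize from the compensation simulator, then refine by gradient descent.
    with torch.no_grad():
        proj_img = comp_net(stylized_target).clamp(0, 1)
    proj_img = proj_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([proj_img], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        captured = pac_net(proj_img)  # simulated viewer-perceived projection
        # Content loss: the simulated capture should match the stylized target.
        content_loss = F.l1_loss(captured, stylized_target)
        # Saturation loss: penalize projector pixels pushed against the dynamic
        # range, where clipping would leave visible surface-texture artifacts.
        sat_loss = (F.relu(proj_img - (1 - sat_margin)) +
                    F.relu(sat_margin - proj_img)).mean()
        loss = w_content * content_loss + w_sat * sat_loss
        loss.backward()
        opt.step()
        with torch.no_grad():
            proj_img.clamp_(0, 1)  # keep the image a valid projector input
    return proj_img.detach()
```

Because both simulators are differentiable, each iteration evaluates a candidate projector image without a real project-and-capture round trip, which is what makes fast convergence by gradient descent possible.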
Related papers
- Neural Projection Mapping Using Reflectance Fields [11.74757574153076]
We introduce a projector into a neural reflectance field, which allows us to calibrate the projector and perform photorealistic light editing.
Our neural field consists of three neural networks, estimating geometry, material, and transmittance.
We believe that neural projection mapping opens up the door to novel and exciting downstream tasks, through the joint optimization of the scene and projection images.
arXiv Detail & Related papers (2023-06-11T05:33:10Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting [9.145875902703345]
We introduce ENVIDR, a rendering and modeling framework for high-quality rendering and reconstruction of surfaces with challenging specular reflections.
We first propose a novel neural renderer with decomposed rendering to learn the interaction between surface and environment lighting.
We then propose an SDF-based neural surface model that leverages this learned renderer to represent general scenes.
arXiv Detail & Related papers (2023-03-23T04:12:07Z) - PixHt-Lab: Pixel Height Based Light Effect Generation for Image Compositing [34.76980642388534]
Lighting effects such as shadows or reflections are key in making synthetic images realistic and visually appealing.
To generate such effects, traditional computer graphics uses a physically-based renderer together with 3D geometry.
Recent deep learning-based approaches introduced a pixel height representation to generate soft shadows and reflections.
We introduce PixHt-Lab, a system leveraging an explicit mapping from pixel height representation to 3D space.
arXiv Detail & Related papers (2023-02-28T23:52:01Z) - Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z) - Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z) - StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions [1.933681537640272]
We propose a novel method, StyLitGAN, for relighting and resurfacing generated images in the absence of labeled data.
Our approach generates images with realistic lighting effects, including cast shadows, soft shadows, inter-reflections, and glossy effects, without the need for paired or CGI data.
arXiv Detail & Related papers (2022-05-20T17:59:40Z) - Cutting Voxel Projector a New Approach to Construct 3D Cone Beam CT Operator [0.10923877073891444]
We introduce a novel class of projectors for 3D cone beam tomographic reconstruction.
Our method enables local refinement of voxels, allowing for adaptive grid resolution and improved reconstruction quality.
Results demonstrate that the cutting voxel projector achieves higher accuracy than the TT projector, especially for large cone beam angles.
arXiv Detail & Related papers (2021-10-19T10:54:01Z) - Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z) - Scene relighting with illumination estimation in the latent space on an encoder-decoder scheme [68.8204255655161]
In this report we present the methods we tried in order to achieve scene relighting.
Our models are trained on a rendered dataset of artificial locations with varied scene content, light source location and color temperature.
With this dataset, we used a network with an illumination estimation component that aims to infer and replace the light conditions in the latent space representation of the scenes.
arXiv Detail & Related papers (2020-06-03T15:25:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.