Intensity and Texture Correction of Omnidirectional Image Using Camera Images for Indirect Augmented Reality
- URL: http://arxiv.org/abs/2405.16008v1
- Date: Sat, 25 May 2024 02:14:07 GMT
- Title: Intensity and Texture Correction of Omnidirectional Image Using Camera Images for Indirect Augmented Reality
- Authors: Hakim Ikebayashi, Norihiko Kawai
- Abstract summary: Augmented reality (AR) using camera images in mobile devices is becoming popular for tourism promotion.
However, obstructions such as tourists appearing in the camera images may cause camera pose estimation errors.
We propose a method for correcting the intensity and texture of a past omnidirectional image using camera images from mobile devices.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Augmented reality (AR) using camera images on mobile devices is becoming popular for tourism promotion. However, obstructions such as tourists appearing in the camera images may cause camera pose estimation errors, resulting in CG misalignment and reduced visibility of the contents. To avoid this problem, Indirect AR (IAR), which does not use real-time camera images, has been proposed. In this method, an omnidirectional image is captured and virtual objects are synthesized on the image in advance. Users can experience AR by viewing a scene extracted from the synthesized omnidirectional image according to the device's sensor. This enables robustness and high visibility. However, if the weather conditions and season in the pre-captured omnidirectional image differ from those at the time the AR is experienced, the realism of the AR experience is reduced. To overcome this problem, we propose a method for correcting the intensity and texture of a past omnidirectional image using camera images from mobile devices. We first perform semantic segmentation to separate the sky from the other areas. We then reproduce the current sky pattern by panoramic image composition and inpainting. For the other areas, we correct the intensity by histogram matching. In experiments, we show the effectiveness of the proposed method using various scenes.
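The pipeline in the abstract (segment the sky, rebuild it from current camera views, histogram-match everything else) can be illustrated with a short Python sketch. The code below is a minimal approximation under simplifying assumptions, not the authors' implementation: the sky mask is taken as given (the paper obtains it via semantic segmentation), the camera image is assumed to be already warped into panorama coordinates, and OpenCV's generic inpainting stands in for the paper's panoramic sky composition. All function names and parameters are illustrative.

```python
import cv2
import numpy as np


def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Classic per-channel histogram matching: map the intensity
    distribution of `source` onto that of `reference`."""
    matched = np.empty_like(source)
    for c in range(source.shape[2]):
        src = source[..., c].ravel()
        ref = reference[..., c].ravel()
        src_vals, src_idx, src_counts = np.unique(
            src, return_inverse=True, return_counts=True)
        ref_vals, ref_counts = np.unique(ref, return_counts=True)
        src_cdf = np.cumsum(src_counts) / src.size
        ref_cdf = np.cumsum(ref_counts) / ref.size
        # For each source quantile, look up the reference value at the
        # same quantile.
        mapped = np.interp(src_cdf, ref_cdf, ref_vals)
        matched[..., c] = np.round(mapped)[src_idx].reshape(source.shape[:2])
    return matched


def correct_omni_image(past_omni, current_cam, sky_mask, cam_mask):
    """past_omni, current_cam: aligned uint8 BGR images of equal size
    (a real system would first warp each camera view into panorama
    coordinates). sky_mask: 255 where the panorama shows sky.
    cam_mask: 255 where the aligned camera image observes the scene."""
    # Non-sky areas: transfer the current intensity statistics onto the
    # past panorama by histogram matching.
    out = match_histogram(past_omni, current_cam)
    # Sky actually seen by the camera: paste in the current sky pattern.
    seen_sky = cv2.bitwise_and(sky_mask, cam_mask)
    out[seen_sky > 0] = current_cam[seen_sky > 0]
    # Sky not covered by any camera view: fill by inpainting (the paper
    # composites several camera images into a panoramic sky first).
    unseen_sky = cv2.bitwise_and(sky_mask, cv2.bitwise_not(cam_mask))
    out = cv2.inpaint(out, unseen_sky, inpaintRadius=3,
                      flags=cv2.INPAINT_TELEA)
    return out
```

In the paper itself, multiple camera frames are composited into a panoramic sky before inpainting, and the intensity correction is applied to the non-sky regions; both refinements are collapsed here for brevity.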
Related papers
- OmniColor: A Global Camera Pose Optimization Approach of LiDAR-360Camera Fusion for Colorizing Point Clouds [15.11376768491973]
A colored point cloud, as a simple and efficient 3D representation, has many advantages in various fields.
This paper presents OmniColor, a novel and efficient algorithm to colorize point clouds using an independent 360-degree camera.
arXiv Detail & Related papers (2024-04-06T17:41:36Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view in-the-wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Real-Time Under-Display Cameras Image Restoration and HDR on Mobile Devices [81.61356052916855]
The images captured by under-display cameras (UDCs) are degraded by the screen in front of them.
Deep learning methods for image restoration can significantly reduce the degradation of captured images.
We propose a lightweight model for blind UDC image restoration and HDR, and we also provide a benchmark comparing the performance and runtime of different methods on smartphones.
arXiv Detail & Related papers (2022-11-25T11:46:57Z)
- Perceptual Image Enhancement for Smartphone Real-Time Applications [60.45737626529091]
We propose LPIENet, a lightweight network for perceptual image enhancement.
Our model can deal with noise artifacts, diffraction artifacts, blur, and HDR overexposure.
Our model can process 2K resolution images in under 1 second on mid-level commercial smartphones.
arXiv Detail & Related papers (2022-10-24T19:16:33Z)
- SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image Collections [49.3480550339732]
Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics.
We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination.
Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use cases such as AR/VR.
arXiv Detail & Related papers (2022-05-31T13:16:48Z)
- Zero-Reference Image Restoration for Under-Display Camera of UAV [10.498049147922258]
We propose a new method to enhance the visual experience by enhancing the texture and color of images.
Our method trains a lightweight network to estimate a low-rank affine grid on the input image.
Our model can perform high-quality recovery of images of arbitrary resolution in real time.
arXiv Detail & Related papers (2022-02-13T11:12:00Z)
- Robust Glare Detection: Review, Analysis, and Dataset Release [6.281101654856357]
Sun glare widely exists in images captured by unmanned ground and aerial vehicles operating in outdoor environments.
The source of glare is not limited to the sun; glare can also be seen in images captured at nighttime and in indoor environments.
This research aims to introduce the first dataset for glare detection, which includes images captured by different cameras.
arXiv Detail & Related papers (2021-10-12T13:46:33Z)
- Minimal Solutions for Panoramic Stitching Given Gravity Prior [53.047330182598124]
We propose new minimal solutions to panoramic image stitching of images taken by cameras with coinciding optical centers.
We consider four practical camera configurations, assuming unknown fixed or varying focal length with or without radial distortion.
The solvers are tested both on synthetic scenes and on more than 500k real image pairs from the Sun360 dataset and from scenes captured by us using two smartphones equipped with IMUs.
arXiv Detail & Related papers (2020-12-01T13:17:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.