Beyond the Pixel: a Photometrically Calibrated HDR Dataset for Luminance
and Color Prediction
- URL: http://arxiv.org/abs/2304.12372v3
- Date: Fri, 13 Oct 2023 12:58:41 GMT
- Title: Beyond the Pixel: a Photometrically Calibrated HDR Dataset for Luminance
and Color Prediction
- Authors: Christophe Bolduc, Justine Giroux, Marc Hébert, Claude Demers, and
Jean-François Lalonde
- Abstract summary: The Laval Photometric Indoor HDR Dataset is the first large-scale photometrically calibrated dataset of high dynamic range 360° panoramas.
We do so by accurately capturing RAW bracketed exposures simultaneously with a professional photometric measurement device.
The resulting dataset is a rich representation of indoor scenes which displays a wide range of illuminance and color, and varied types of light sources.
- Score: 0.7456526005219319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light plays an important role in human well-being. However, most computer
vision tasks treat pixels without considering their relationship to physical
luminance. To address this shortcoming, we introduce the Laval Photometric
Indoor HDR Dataset, the first large-scale photometrically calibrated dataset of
high dynamic range 360° panoramas. Our key contribution is the calibration
of an existing, uncalibrated HDR dataset. We do so by accurately capturing RAW
bracketed exposures simultaneously with a professional photometric measurement
device (chroma meter) for multiple scenes across a variety of lighting
conditions. Using the resulting measurements, we establish the calibration
coefficients to be applied to the HDR images. The resulting dataset is a rich
representation of indoor scenes which displays a wide range of illuminance and
color, and varied types of light sources. We exploit the dataset to introduce
three novel tasks, in which per-pixel luminance, per-pixel color, and planar
illuminance can be predicted from a single input image. Finally, we also
capture another smaller photometric dataset with a commercial 360° camera,
to experiment on generalization across cameras. We are optimistic that the
release of our datasets and associated code will spark interest in physically
accurate light estimation within the community. Dataset and code are available
at https://lvsn.github.io/beyondthepixel/.
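The calibration described above ties linear HDR pixel values to absolute photometric quantities. Below is a minimal sketch of that relationship, assuming a single scalar calibration coefficient per capture and standard CIE luminance weights for linear sRGB/Rec. 709 primaries; the function and variable names (e.g. `calibration_coeff`, `pixel_luminance`) are illustrative, not the dataset's released API.

```python
import numpy as np

# CIE luminance weights for linear Rec. 709 / sRGB primaries.
RGB_TO_Y = np.array([0.2126, 0.7152, 0.0722])

def pixel_luminance(hdr_rgb, calibration_coeff):
    """Map a linear HDR panorama (H, W, 3) to per-pixel luminance in cd/m^2.

    `calibration_coeff` stands in for the per-capture scale factor derived
    from chroma-meter measurements; the name is illustrative only.
    """
    relative_y = hdr_rgb @ RGB_TO_Y          # (H, W) relative luminance
    return calibration_coeff * relative_y    # absolute luminance, cd/m^2

def planar_illuminance(luminance_map):
    """Approximate illuminance (lux) on an upward-facing plane from a
    calibrated equirectangular luminance map, via E = integral of
    L(w) * cos(theta) over the upper hemisphere (zenith assumed at the
    top row of the panorama).
    """
    h, w = luminance_map.shape
    theta = (np.arange(h) + 0.5) / h * np.pi                  # polar angle per row
    d_omega = np.sin(theta) * (np.pi / h) * (2 * np.pi / w)   # pixel solid angle
    weights = np.clip(np.cos(theta), 0.0, None) * d_omega     # cosine weighting
    return float((luminance_map * weights[:, None]).sum())
```

As a sanity check, a panorama with uniform luminance L everywhere yields E = πL, the standard result for the cosine-weighted integral over a hemisphere. In the actual dataset, the calibration is grounded in the chroma-meter measurements rather than in these assumed constants.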
Related papers
- HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - GTA-HDR: A Large-Scale Synthetic Dataset for HDR Image Reconstruction [11.610543327501995]
High Dynamic Range (HDR) content (i.e., images and videos) has a broad range of applications.
The challenging task of reconstructing visually accurate HDR images from their Low Dynamic Range (LDR) counterparts is gaining attention in the vision research community.
arXiv Detail & Related papers (2024-03-26T16:24:42Z) - Event Fusion Photometric Stereo Network [3.0778023655689144]
We introduce a novel method to estimate the surface normal of an object in an ambient light environment using RGB and event cameras.
This is the first study to use event cameras for photometric stereo in continuous light sources and ambient light environments.
arXiv Detail & Related papers (2023-03-01T08:13:26Z) - Reversed Image Signal Processing and RAW Reconstruction. AIM 2022
Challenge Report [109.2135194765743]
This paper introduces the AIM 2022 Challenge on Reversed Image Signal Processing and RAW Reconstruction.
We aim to recover raw sensor images from the corresponding RGBs without metadata and, by doing this, "reverse" the ISP transformation (a simplified sketch of this inversion appears after this list).
arXiv Detail & Related papers (2022-10-20T10:43:53Z) - Learning Enriched Illuminants for Cross and Single Sensor Color
Constancy [182.4997117953705]
We propose a cross-sensor self-supervised scheme to train the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-03-21T15:45:35Z) - Multi-sensor large-scale dataset for multi-view 3D reconstruction [63.59401680137808]
We present a new multi-sensor dataset for multi-view 3D surface reconstruction.
It includes registered RGB and depth data from sensors of different resolutions and modalities: smartphones, Intel RealSense, Microsoft Kinect, industrial cameras, and a structured-light scanner.
We provide around 1.4 million images of 107 different scenes acquired from 100 viewing directions under 14 lighting conditions.
arXiv Detail & Related papers (2022-03-11T17:32:27Z) - Colored Point Cloud to Image Alignment [15.828285556159026]
We introduce a differential optimization method that aligns a colored point cloud to a given color image via iterative geometric and color matching.
We find the transformation between the camera image and the point cloud colors by iterating between matching the relative location of the point cloud and matching colors.
arXiv Detail & Related papers (2021-10-07T08:12:56Z) - The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as with additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z) - Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset
for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)