Beyond Calibration: Physically Informed Learning for Raw-to-Raw Mapping
- URL: http://arxiv.org/abs/2506.08650v2
- Date: Wed, 11 Jun 2025 03:36:46 GMT
- Title: Beyond Calibration: Physically Informed Learning for Raw-to-Raw Mapping
- Authors: Peter Grönquist, Stepan Tulyakov, Dengxin Dai
- Abstract summary: Existing raw-to-raw conversion methods face limitations such as poor adaptability to changing illumination, high computational costs, or impractical requirements such as simultaneous camera operation and overlapping fields-of-view. We introduce the Neural Physical Model (NPM), a lightweight, physically-informed approach that simulates raw images under specified illumination to estimate transformations between devices.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving consistent color reproduction across multiple cameras is essential for seamless image fusion and Image Signal Processing (ISP) pipeline compatibility in modern devices, but it is a challenging task due to variations in sensors and optics. Existing raw-to-raw conversion methods face limitations such as poor adaptability to changing illumination, high computational costs, or impractical requirements such as simultaneous camera operation and overlapping fields-of-view. We introduce the Neural Physical Model (NPM), a lightweight, physically-informed approach that simulates raw images under specified illumination to estimate transformations between devices. The NPM effectively adapts to varying illumination conditions, can be initialized with physical measurements, and supports training with or without paired data. Experiments on public datasets like NUS and BeyondRGB demonstrate that NPM outperforms recent state-of-the-art methods, providing robust chromatic consistency across different sensors and optical systems.
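To make concrete what a raw-to-raw transformation estimates, the sketch below fits a linear map between paired raw pixels from two cameras under one fixed illuminant. This is a minimal least-squares baseline for illustration only, not the paper's NPM (which additionally adapts the transform to the illumination and can work without paired data); all values here are synthetic.

```python
import numpy as np

def fit_raw_to_raw(src, dst):
    """Fit a 3x3 linear transform mapping source-camera raw RGB to
    target-camera raw RGB by least squares.

    src, dst: (N, 3) arrays of paired raw pixel values captured under
    the same illumination. Returns M such that dst ≈ src @ M.
    """
    M, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return M

# Toy usage: recover a known transform from synthetic paired pixels.
rng = np.random.default_rng(0)
true_M = np.array([[0.90, 0.05, 0.00],
                   [0.10, 0.85, 0.05],
                   [0.00, 0.10, 0.95]])
src = rng.uniform(0.0, 1.0, size=(1000, 3))
dst = src @ true_M
M = fit_raw_to_raw(src, dst)
print(np.allclose(M, true_M, atol=1e-6))  # True
```

A single fixed matrix like this is exactly what breaks down when the illumination changes, which is the limitation the NPM is designed to address.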
Related papers
- Towards Realistic Low-Light Image Enhancement via ISP Driven Data Modeling [61.95831392879045]
Deep neural networks (DNNs) have recently become the leading method for low-light image enhancement (LLIE). Despite significant progress, their outputs may still exhibit issues such as amplified noise, incorrect white balance, or unnatural enhancements when deployed in real-world applications. A key challenge is the lack of diverse, large-scale training data that captures the complexities of low-light conditions and imaging pipelines. We propose a novel image signal processing (ISP) driven data synthesis pipeline that addresses these challenges by generating unlimited paired training data.
arXiv Detail & Related papers (2025-04-16T15:53:53Z) - DPCS: Path Tracing-Based Differentiable Projector-Camera Systems [49.69815958689441]
Projector-camera systems (ProCams) simulation aims to model the physical project-and-capture process and associated scene parameters of a ProCams. Recent advances use an end-to-end neural network to learn the project-and-capture process. We introduce a novel path tracing-based differentiable projector-camera systems (DPCS), offering a differentiable ProCams simulation method.
arXiv Detail & Related papers (2025-03-15T15:31:18Z) - RAW-Adapter: Adapting Pre-trained Visual Model to Camera RAW Images [51.68432586065828]
We introduce RAW-Adapter, a novel approach aimed at adapting sRGB pre-trained models to camera RAW data.
RAW-Adapter comprises input-level adapters that employ learnable ISP stages to adjust RAW inputs, as well as model-level adapters to build connections between ISP stages and subsequent high-level networks.
arXiv Detail & Related papers (2024-08-27T06:14:54Z) - Implicit Multi-Spectral Transformer: A Lightweight and Effective Visible to Infrared Image Translation Model [0.6817102408452475]
In computer vision, visible light images often exhibit low contrast in low-light conditions, presenting a significant challenge.
Recent advancements in deep learning, particularly the deployment of Generative Adversarial Networks (GANs), have facilitated the transformation of visible light images to infrared images.
We propose a novel end-to-end Transformer-based model that efficiently converts visible light images into high-fidelity infrared images.
arXiv Detail & Related papers (2024-04-10T15:02:26Z) - Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming [0.728258471592763]
Enhancing low-light images while maintaining natural colors is a challenging problem due to camera processing variations.
We propose Dimma, a semi-supervised approach that aligns with any camera by utilizing a small set of image pairs.
We achieve that by introducing a convolutional mixture density network that generates distorted colors of the scene based on the illumination differences.
arXiv Detail & Related papers (2023-10-14T17:59:46Z) - Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model [83.9497193551511]
We introduce Lighting Every Darkness (LED), which is effective regardless of the digital gain or the camera sensor.
LED eliminates the need for explicit noise model calibration, instead utilizing an implicit fine-tuning process that allows quick deployment and requires minimal data.
LED also allows researchers to focus more on deep learning advancements while still utilizing sensor engineering benefits.
arXiv Detail & Related papers (2023-08-07T10:09:11Z) - GenISP: Neural ISP for Low-Light Machine Cognition [19.444297600977546]
In low-light conditions, object detectors using raw image data are more robust than detectors using image data processed by an ISP pipeline.
We propose a minimal neural ISP pipeline for machine cognition, named GenISP, that explicitly incorporates Color Space Transformation to a device-independent color space.
arXiv Detail & Related papers (2022-05-07T17:17:24Z) - Lightweight HDR Camera ISP for Robust Perception in Dynamic Illumination Conditions via Fourier Adversarial Networks [35.532434169432776]
We propose a lightweight two-stage image enhancement algorithm sequentially balancing illumination and noise removal.
We also propose a Fourier spectrum-based adversarial framework (AFNet) for consistent image enhancement under varying illumination conditions.
Based on quantitative and qualitative evaluations, we also examine the practicality and effects of image enhancement techniques on the performance of common perception tasks.
arXiv Detail & Related papers (2022-04-04T18:48:51Z) - Learning Enriched Illuminants for Cross and Single Sensor Color Constancy [182.4997117953705]
We propose cross-sensor self-supervised training to train the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-03-21T15:45:35Z) - Model-Based Image Signal Processors via Learnable Dictionaries [6.766416093990318]
Digital cameras transform sensor RAW readings into RGB images by means of their Image Signal Processor (ISP).
Recent approaches have attempted to bridge this gap by estimating the RGB to RAW mapping.
We present a novel hybrid model-based and data-driven ISP that is both learnable and interpretable.
arXiv Detail & Related papers (2022-01-10T08:36:10Z)
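The RGB-to-RAW estimation mentioned in the last entry can be illustrated with a toy hand-built ISP and its inverse: undo the display gamma, the color-correction matrix, and the white-balance gains in reverse order. The gains and matrix below are made-up illustrative values, not the paper's learnable dictionaries; a real inverse ISP must also handle tone curves, clipping, and demosaicing.

```python
import numpy as np

# Hypothetical ISP parameters for illustration; real values are device-specific.
wb_gains = np.array([2.0, 1.0, 1.5])          # white-balance gains (R, G, B)
ccm = np.array([[ 1.6, -0.4, -0.2],           # color-correction matrix
                [-0.3,  1.5, -0.2],
                [-0.1, -0.4,  1.5]])

def raw_to_srgb(raw):
    """Forward toy ISP: white balance, color correction, display gamma."""
    cam = raw * wb_gains
    lin = cam @ ccm.T
    return np.clip(lin, 0.0, 1.0) ** (1 / 2.2)

def srgb_to_raw(rgb):
    """Inverse toy ISP: undo gamma, color correction, white balance."""
    lin = np.clip(rgb, 0.0, 1.0) ** 2.2
    cam = lin @ np.linalg.inv(ccm).T
    return cam / wb_gains

# Round trip: an sRGB image mapped to raw and back should be unchanged.
rng = np.random.default_rng(0)
rgb = rng.uniform(0.05, 0.95, size=(100, 3))
roundtrip = raw_to_srgb(srgb_to_raw(rgb))
print(np.allclose(roundtrip, rgb, atol=1e-9))  # True
```

The round trip only closes here because every stage is invertible; in practice quantization and highlight clipping destroy information, which is why learned RGB-to-RAW models are needed.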
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this list (including all information) and is not responsible for any consequences of its use.