RealRep: Generalized SDR-to-HDR Conversion via Attribute-Disentangled Representation Learning
- URL: http://arxiv.org/abs/2505.07322v2
- Date: Tue, 09 Sep 2025 10:18:32 GMT
- Title: RealRep: Generalized SDR-to-HDR Conversion via Attribute-Disentangled Representation Learning
- Authors: Gang He, Siqi Wang, Kepeng Xu, Lin Zhang, Li Xu, Weiran Wang, Yu-Wing Tai
- Abstract summary: High-Dynamic-Range Wide-Color-Gamut (HDR-WCG) technology is becoming increasingly widespread, driving a growing need for converting Standard Dynamic Range (SDR) content to HDR. Existing methods rely on fixed tone mapping operators, which struggle to handle the diverse appearances and degradations commonly present in real-world SDR content. We propose a generalized SDR-to-HDR framework that enhances robustness by learning attribute-disentangled representations.
- Score: 51.19027658873778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-Dynamic-Range Wide-Color-Gamut (HDR-WCG) technology is becoming increasingly widespread, driving a growing need for converting Standard Dynamic Range (SDR) content to HDR. Existing methods primarily rely on fixed tone mapping operators, which struggle to handle the diverse appearances and degradations commonly present in real-world SDR content. To address this limitation, we propose a generalized SDR-to-HDR framework that enhances robustness by learning attribute-disentangled representations. Central to our approach is Realistic Attribute-Disentangled Representation Learning (RealRep), which explicitly disentangles luminance and chrominance components to capture intrinsic content variations across different SDR distributions. Furthermore, we design a Luma-/Chroma-aware negative exemplar generation strategy that constructs degradation-sensitive contrastive pairs, effectively modeling tone discrepancies across SDR styles. Building on these attribute-level priors, we introduce the Degradation-Domain Aware Controlled Mapping Network (DDACMNet), a lightweight, two-stage framework that performs adaptive hierarchical mapping guided by a control-aware normalization mechanism. DDACMNet dynamically modulates the mapping process via degradation-conditioned features, enabling robust adaptation across diverse degradation domains. Extensive experiments demonstrate that RealRep consistently outperforms state-of-the-art methods in both generalization and perceptually faithful HDR color gamut reconstruction.
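As a loose illustration of the attribute split that RealRep builds on, luminance and chrominance can be separated with standard BT.709 luma weights, and degradation-sensitive negatives can be formed by perturbing one attribute at a time. This is a sketch, not the paper's implementation; the function names and the gamma/saturation knobs are illustrative.

```python
import numpy as np

def split_luma_chroma(rgb):
    """Separate an SDR image into luma and chroma using BT.709 weights.

    rgb: float array in [0, 1] with shape (H, W, 3).
    Returns (luma, chroma), where chroma is the per-channel residual.
    """
    weights = np.array([0.2126, 0.7152, 0.0722])  # BT.709 luma coefficients
    luma = rgb @ weights                 # (H, W)
    chroma = rgb - luma[..., None]       # (H, W, 3); its own luma is ~0
    return luma, chroma

def make_negative_exemplars(rgb, gamma=0.6, sat=1.8):
    """Build degradation-sensitive negatives by perturbing one attribute.

    The luma negative re-curves brightness only; the chroma negative scales
    saturation only. Both knobs are illustrative, not from the paper.
    """
    luma, chroma = split_luma_chroma(rgb)
    luma_neg = np.clip(luma, 0, 1) ** gamma
    luma_neg = np.clip(luma_neg[..., None] + chroma, 0, 1)      # tone shifted, colors kept
    chroma_neg = np.clip(luma[..., None] + sat * chroma, 0, 1)  # colors shifted, tone kept
    return luma_neg, chroma_neg
```

Because the BT.709 weights sum to one, the chroma residual carries no luma, so the two perturbations really do target disjoint attributes.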
Related papers
- Scale Equivariance Regularization and Feature Lifting in High Dynamic Range Modulo Imaging [19.49437461280304]
This work proposes a learning-based HDR restoration framework. It incorporates two key strategies: (i) a scale-equivariant regularization that enforces consistency under exposure variations, and (ii) a feature-lifting input design combining the raw modulo image with wrapped finite differences.
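The modulo-imaging quantities this entry mentions are easy to sketch: a modulo sensor wraps intensity instead of saturating, and wrapped finite differences recover the true local gradient whenever neighboring intensities differ by less than half the wrap value (illustrative sketch, not the paper's code).

```python
import numpy as np

def modulo_capture(irradiance, wrap=1.0):
    """Simulate a modulo sensor: intensity wraps around instead of saturating."""
    return np.mod(irradiance, wrap)

def wrapped_diff(modulo_img, wrap=1.0, axis=0):
    """Wrapped finite difference.

    Maps the raw modulo difference back into (-wrap/2, wrap/2], which equals
    the true difference under a smoothness assumption (|true diff| < wrap/2).
    """
    d = np.diff(modulo_img, axis=axis)
    return (d + wrap / 2) % wrap - wrap / 2
```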
arXiv Detail & Related papers (2026-01-30T14:45:29Z)
- Dual-domain Adaptation Networks for Realistic Image Super-resolution [81.34345637776408]
Realistic image super-resolution (SR) focuses on transforming real-world low-resolution (LR) images into high-resolution (HR) ones. Current methods struggle with limited real-world LR-HR data, which impacts the learning of basic image features. We introduce a novel approach that efficiently adapts pre-trained image SR models from simulated to real-world datasets.
arXiv Detail & Related papers (2025-11-21T12:57:23Z)
- Rotation Equivariant Arbitrary-scale Image Super-Resolution [62.41329042683779]
Arbitrary-scale image super-resolution (ASISR) aims to achieve arbitrary-scale high-resolution recoveries from a low-resolution input image. In this study, we construct a rotation-equivariant ASISR method.
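Rotation equivariance, the property this entry targets, means the operator commutes with rotation: f(rot(x)) = rot(f(x)). A minimal numerical check on a square grid; the isotropic box filter stands in for an equivariant SR layer, and both helpers are illustrative:

```python
import numpy as np

def is_rotation_equivariant(f, img, k=1, atol=1e-8):
    """Check f(rot(x)) == rot(f(x)) for a 90-degree rotation."""
    return np.allclose(f(np.rot90(img, k)), np.rot90(f(img), k), atol=atol)

def local_mean(img):
    """3x3 box filter with wrap padding: isotropic, hence 90-deg equivariant."""
    p = np.pad(img, 1, mode="wrap")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
```

A direction-dependent operator (e.g. a horizontal shift-and-subtract) fails the same check, which is exactly what equivariant architectures are designed to avoid.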
arXiv Detail & Related papers (2025-08-07T08:51:03Z)
- Unsupervised Image Super-Resolution Reconstruction Based on Real-World Degradation Patterns [4.977925450373957]
We propose a novel TripleGAN framework for training super-resolution reconstruction models. The framework learns real-world degradation patterns from LR observations and synthesizes datasets with corresponding degradation characteristics. Our method exhibits clear advantages in quantitative metrics while maintaining sharp reconstructions without over-smoothing artifacts.
arXiv Detail & Related papers (2025-06-20T14:24:48Z)
- Manifold-aware Representation Learning for Degradation-agnostic Image Restoration [135.90908995927194]
Image Restoration (IR) aims to recover high-quality images from degraded inputs affected by various corruptions such as noise, blur, haze, rain, and low-light conditions. We present MIRAGE, a unified framework for all-in-one IR that explicitly decomposes the input feature space into three semantically aligned parallel branches. This modular decomposition significantly improves generalization and efficiency across diverse degradations.
arXiv Detail & Related papers (2025-05-24T12:52:10Z)
- Semantic Aware Diffusion Inverse Tone Mapping [5.65968650127342]
Inverse tone mapping attempts to boost captured Standard Dynamic Range (SDR) images back to High Dynamic Range (HDR).
We present a novel inverse tone mapping approach for mapping SDR images to HDR that generates lost details in clipped regions through a semantic-aware, diffusion-based inpainting approach.
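For contrast with such learned approaches, the kind of fixed inverse tone-mapping operator they aim to replace can be sketched in a few lines. The sRGB decoding follows IEC 61966-2-1; the `expand` exponent and peak luminance are illustrative knobs, not from any cited paper:

```python
import numpy as np

def srgb_to_linear(s):
    """Standard sRGB decoding (IEC 61966-2-1)."""
    s = np.asarray(s, dtype=np.float64)
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

def naive_inverse_tmo(sdr, peak_nits=1000.0, expand=1.5):
    """A fixed inverse tone-mapping operator: linearize the SDR signal, then
    boost highlights with a single global power curve scaled to a peak
    luminance. Being content-blind, it cannot adapt to degradations."""
    lin = srgb_to_linear(sdr)
    return peak_nits * lin ** expand
```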
arXiv Detail & Related papers (2024-05-24T11:44:22Z)
- Efficient Real-world Image Super-Resolution Via Adaptive Directional Gradient Convolution [80.85121353651554]
We introduce kernel-wise differential operations within the convolutional kernel and develop several learnable directional gradient convolutions.
These convolutions are integrated in parallel with a novel linear weighting mechanism to form an Adaptive Directional Gradient Convolution (DGConv).
We further devise an Adaptive Information Interaction Block (AIIBlock) to balance texture and contrast enhancement while modeling their interdependencies; simply stacking these components yields DGPNet for Real-SR.
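The idea of parallel directional gradient branches fused by a linear weighting can be sketched with fixed Sobel kernels. DGConv itself learns its kernels and fusion weights; this fixed-kernel stand-in is illustrative only:

```python
import numpy as np

def conv3x3(img, k):
    """Valid-mode 3x3 sliding-window sum of products (no SciPy dependency)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def directional_gradient_conv(img, weights=(0.5, 0.5)):
    """Parallel horizontal/vertical gradient branches fused by a linear
    weighting: a simplified stand-in for DGConv's adaptive fusion."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel-x
    ky = kx.T                                                    # Sobel-y
    return weights[0] * conv3x3(img, kx) + weights[1] * conv3x3(img, ky)
```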
arXiv Detail & Related papers (2024-05-11T14:21:40Z)
- FastHDRNet: A new efficient method for SDR-to-HDR Translation [5.224011800476952]
We propose a neural network for SDR-to-HDR conversion, termed "FastHDRNet".
The architecture is a lightweight network that utilizes global statistics and local information with very high efficiency.
arXiv Detail & Related papers (2024-04-06T03:25:24Z)
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in the HDR imaging field.
However, DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
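The low/high-frequency decomposition underlying a low-frequency-aware model can be illustrated with an ideal Fourier low-pass split; the circular mask and the `keep` radius are illustrative, not the paper's learned prior:

```python
import numpy as np

def frequency_split(img, keep=0.1):
    """Split an image into low- and high-frequency parts via an ideal
    low-pass mask in the Fourier domain.

    keep: fraction of the spectrum radius retained as "low frequency".
    Returns (low, high) with low + high == img exactly.
    """
    f = np.fft.fftshift(np.fft.fft2(img))        # DC moved to the center
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)          # radius from the DC bin
    mask = r <= keep * min(h, w)
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    return low, img - low
```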
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- DSR-Diff: Depth Map Super-Resolution with Diffusion Model [38.68563026759223]
We present a novel CDSR paradigm that utilizes a diffusion model within the latent space to generate guidance for depth map super-resolution.
Our proposed method has shown superior performance in extensive experiments when compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-11-16T14:18:10Z)
- Style-Hallucinated Dual Consistency Learning: A Unified Framework for Visual Domain Generalization [113.03189252044773]
We propose a unified framework, Style-HAllucinated Dual consistEncy learning (SHADE), to handle domain shift in various visual tasks.
Our versatile SHADE can significantly enhance the generalization in various visual recognition tasks, including image classification, semantic segmentation and object detection.
arXiv Detail & Related papers (2022-12-18T11:42:51Z)
- Modality-Adaptive Mixup and Invariant Decomposition for RGB-Infrared Person Re-Identification [84.32086702849338]
We propose a novel modality-adaptive mixup and invariant decomposition (MID) approach for RGB-infrared person re-identification.
MID designs a modality-adaptive mixup scheme to generate suitable mixed modality images between RGB and infrared images.
Experiments on two challenging benchmarks demonstrate superior performance of MID over state-of-the-art methods.
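The mixup operation at the core of such a scheme reduces to a convex combination of the two modalities. A minimal sketch follows; MID's modality-adaptive selection of the mixing coefficient is learned, which this sketch does not model:

```python
import numpy as np

def modality_mixup(rgb, infrared, lam=None, rng=None):
    """Mix an RGB and an infrared image to synthesize an intermediate modality.

    The Beta-sampled coefficient follows standard mixup; a fixed `lam` can be
    passed for reproducibility. Names and defaults are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    if lam is None:
        lam = rng.beta(1.0, 1.0)          # uniform on [0, 1]
    return lam * rgb + (1.0 - lam) * infrared, lam
```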
arXiv Detail & Related papers (2022-03-03T14:26:49Z)
- Invertible Tone Mapping with Selectable Styles [19.03179521805971]
In this paper, we propose an invertible tone mapping method that converts the multi-exposure HDR to a true LDR.
Our invertible LDR can mimic the appearance of a user-selected tone mapping style.
It can be shared over any existing social network platforms that may re-encode or format-convert the uploaded images.
arXiv Detail & Related papers (2021-10-09T07:32:36Z)
- High-resolution Depth Maps Imaging via Attention-based Hierarchical Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z)
- MetaHDR: Model-Agnostic Meta-Learning for HDR Image Reconstruction [0.0]
Existing approaches for converting low dynamic range (LDR) images to high dynamic range (HDR) images are limited by the assumption that all conversions are governed by the same nonlinear mapping.
We propose "MetaHDR", which applies model-agnostic meta-learning to the LDR-to-HDR conversion problem using existing HDR datasets.
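Meta-learning of this kind can be illustrated with a first-order MAML step on a scalar toy regression. This is the FOMAML approximation on a made-up problem; the learning rates and task setup are illustrative, not MetaHDR's:

```python
import numpy as np

def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.05):
    """One first-order MAML meta-update on a scalar least-squares toy problem.

    Each task is (x, y) with per-task loss mean((theta * x - y)^2). For each
    task we take one inner gradient step, then accumulate the gradient at the
    adapted parameter (the first-order approximation of the meta-gradient).
    """
    meta_grad = 0.0
    for x, y in tasks:
        g = 2 * np.mean((theta * x - y) * x)              # inner gradient
        theta_i = theta - inner_lr * g                    # one adaptation step
        meta_grad += 2 * np.mean((theta_i * x - y) * x)   # gradient after adapting
    return theta - outer_lr * meta_grad / len(tasks)
```

The meta-update moves the initialization toward parameters that adapt well in one step, rather than toward any single task's optimum.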
arXiv Detail & Related papers (2021-03-20T07:56:45Z)
- DDet: Dual-path Dynamic Enhancement Network for Real-World Image Super-Resolution [69.2432352477966]
Real image super-resolution (Real-SR) focuses on the relationship between real-world high-resolution (HR) and low-resolution (LR) images.
In this article, we propose a Dual-path Dynamic Enhancement Network (DDet) for Real-SR.
Unlike conventional methods which stack up massive convolutional blocks for feature representation, we introduce a content-aware framework to study non-inherently aligned image pairs.
arXiv Detail & Related papers (2020-02-25T18:24:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.