Reflectance-Guided, Contrast-Accumulated Histogram Equalization
- URL: http://arxiv.org/abs/2209.06405v1
- Date: Wed, 14 Sep 2022 04:14:30 GMT
- Title: Reflectance-Guided, Contrast-Accumulated Histogram Equalization
- Authors: Xiaomeng Wu, Takahito Kawanishi, Kunio Kashino
- Abstract summary: We propose a histogram equalization-based method that adapts to the data-dependent requirements of brightness enhancement.
This method incorporates the spatial information provided by image context in density estimation for discriminative histogram equalization.
- Score: 31.060143365318623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing image enhancement methods fall short of expectations because
they struggle to improve global and local image contrast simultaneously.
To address this problem, we propose a histogram equalization-based method that
adapts to the data-dependent requirements of brightness enhancement and
improves the visibility of details without losing the global contrast. This
method incorporates the spatial information provided by image context in
density estimation for discriminative histogram equalization. To minimize the
adverse effect of non-uniform illumination, we propose defining spatial
information on the basis of image reflectance estimated with edge preserving
smoothing. Our method works particularly well for determining how the
background brightness should be adaptively adjusted and for revealing useful
image details hidden in the dark.
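To make the overall idea concrete, here is a minimal sketch in Python/NumPy of contrast-accumulated histogram equalization guided by an estimated reflectance: each pixel's histogram contribution is weighted by a local-contrast measure computed on the reflectance, and the weighted cumulative distribution defines the intensity mapping. This is an illustration of the concept only, not the authors' implementation; in particular, the box filter below is a crude stand-in for the paper's edge-preserving smoothing, and all function names and parameters are assumptions.

```python
import numpy as np

def box_smooth(img, k=5):
    # Crude stand-in for edge-preserving smoothing (assumption: the paper
    # uses a proper edge-preserving filter to estimate illumination).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def reflectance_guided_he(gray, k=5):
    """Sketch of reflectance-guided, contrast-accumulated HE.

    gray: 2-D uint8 array. Returns a 2-D uint8 array.
    """
    f = gray.astype(np.float64) + 1.0
    illum = box_smooth(f, k)          # estimated illumination
    refl = f / illum                  # crude Retinex-style reflectance
    # Spatial information: local contrast of the reflectance,
    # here taken as its gradient magnitude.
    gy, gx = np.gradient(refl)
    weight = np.hypot(gx, gy)
    # Contrast-accumulated histogram: each pixel contributes its local
    # contrast instead of a plain count of 1.
    hist = np.bincount(gray.ravel(), weights=weight.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf /= cdf[-1] + 1e-12            # guard against an all-zero histogram
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[gray]
```

Because flat regions contribute little weight, the mapping spends its dynamic range on intensities that carry detail, which is the intuition behind revealing structure hidden in dark regions without flattening the global contrast.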
Related papers
- Data Augmentation via Latent Diffusion for Saliency Prediction [67.88936624546076]
Saliency prediction models are constrained by the limited diversity and quantity of labeled data.
We propose a novel data augmentation method for deep saliency prediction that edits natural images while preserving the complexity and variability of real-world scenes.
arXiv Detail & Related papers (2024-09-11T14:36:24Z)
- Inhomogeneous illumination image enhancement under extremely low visibility condition
Imaging through dense fog presents unique challenges: essential visual information needed for applications such as object detection and recognition is obscured, hindering conventional image processing methods.
This paper introduces a novel method that adaptively filters background illumination based on Structural Differential and Integral Filtering to enhance only vital signal information.
Our findings demonstrate that the proposed method significantly enhances signal clarity under extremely low visibility conditions and outperforms existing techniques, offering substantial improvements for deep fog imaging applications.
arXiv Detail & Related papers (2024-04-26T16:09:42Z)
- Revealing Shadows: Low-Light Image Enhancement Using Self-Calibrated Illumination [4.913568097686369]
Self-Calibrated Illumination (SCI) is a strategy initially developed for RGB images.
We employ the SCI method to intensify and clarify details that are typically lost in low-light conditions.
This method of selective illumination enhancement leaves the color information intact, thus preserving the color integrity of the image.
arXiv Detail & Related papers (2023-12-23T08:49:19Z)
- Layered Rendering Diffusion Model for Zero-Shot Guided Image Synthesis [60.260724486834164]
This paper introduces innovative solutions to enhance spatial controllability in diffusion models reliant on text queries.
We present two key innovations: Vision Guidance and the Layered Rendering Diffusion framework.
We apply our method to three practical applications: bounding box-to-image, semantic mask-to-image and image editing.
arXiv Detail & Related papers (2023-11-30T10:36:19Z)
- Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming [0.728258471592763]
Enhancing low-light images while maintaining natural colors is a challenging problem due to camera processing variations.
We propose Dimma, a semi-supervised approach that aligns with any camera by utilizing a small set of image pairs.
We achieve this by introducing a convolutional mixture density network that generates the distorted colors of the scene based on illumination differences.
arXiv Detail & Related papers (2023-10-14T17:59:46Z)
- LUT-GCE: Lookup Table Global Curve Estimation for Fast Low-light Image Enhancement [62.17015413594777]
We present an effective and efficient approach for low-light image enhancement, named LUT-GCE.
We estimate a global curve for the entire image that allows corrections for both under- and over-exposure.
Our approach outperforms the state of the art in terms of inference speed, especially on high-definition images (e.g., 1080p and 4K).
arXiv Detail & Related papers (2023-06-12T12:53:06Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Reflectance-Oriented Probabilistic Equalization for Image Enhancement [28.180598784444605]
We propose a novel 2D histogram equalization approach.
It assumes that intensity occurrence and co-occurrence are mutually dependent and derives the distribution of intensity occurrence accordingly.
It can sufficiently improve the brightness of low-light images while avoiding over-enhancement in normal-light images.
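The occurrence/co-occurrence idea can be illustrated with a minimal 2-D histogram sketch: pair each pixel's intensity with its neighbourhood mean, build a co-occurrence histogram, and derive a contrast-weighted occurrence distribution from it. This is a generic illustration of 2-D histogram equalization, not the authors' formulation; the neighbourhood size and the |i - j| contrast weighting are assumptions.

```python
import numpy as np

def cooccurrence_equalize(gray, k=3):
    # Pair each pixel with the mean intensity of its k x k neighbourhood.
    pad = k // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    neigh = np.zeros(gray.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            neigh += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    neigh = np.clip(np.round(neigh / (k * k)), 0, 255).astype(np.int64)
    # 2-D co-occurrence histogram of (pixel intensity, neighbourhood mean).
    h2d = np.zeros((256, 256), dtype=np.float64)
    np.add.at(h2d, (gray.astype(np.int64).ravel(), neigh.ravel()), 1.0)
    # Derive an occurrence distribution that depends on co-occurrence:
    # weight each pair by its intensity difference, so pixels that differ
    # from their context count more (assumed weighting for illustration).
    dist = np.abs(np.arange(256)[:, None] - np.arange(256)[None, :])
    occ = (h2d * dist).sum(axis=1)
    cdf = np.cumsum(occ)
    cdf /= cdf[-1] + 1e-12
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[gray]
```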
arXiv Detail & Related papers (2022-09-14T04:20:06Z)
- Image Harmonization with Region-wise Contrastive Learning [51.309905690367835]
We propose a novel image harmonization framework with external style fusion and region-wise contrastive learning scheme.
Our method attempts to bring together corresponding positive and negative samples by maximizing the mutual information between the foreground and background styles.
arXiv Detail & Related papers (2022-05-27T15:46:55Z)
- StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions [1.933681537640272]
We propose a novel method, StyLitGAN, for relighting and resurfacing generated images in the absence of labeled data.
Our approach generates images with realistic lighting effects, including cast shadows, soft shadows, inter-reflections, and glossy effects, without the need for paired or CGI data.
arXiv Detail & Related papers (2022-05-20T17:59:40Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is highly competitive with state-of-the-art methods and has a significant advantage over them when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.