Image Enhancement using Fuzzy Intensity Measure and Adaptive Clipping Histogram Equalization
- URL: http://arxiv.org/abs/2101.05922v1
- Date: Fri, 15 Jan 2021 00:59:55 GMT
- Title: Image Enhancement using Fuzzy Intensity Measure and Adaptive Clipping Histogram Equalization
- Authors: Xiangyuan Zhu, Xiaoming Xiao, Tardi Tjahjadi, Zhihu Wu, Jin Tang
- Abstract summary: Proposes fuzzy intensity measure and adaptive clipping histogram equalization (FIMHE).
Experiments on Berkeley database and CVF-UGR-Image database show that FIMHE outperforms state-of-the-art histogram equalization based methods.
- Score: 21.963436654053226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image enhancement aims at processing an input image so that the visual
content of the output image is more pleasing or more useful for certain
applications. Although histogram equalization is widely used in image
enhancement due to its simplicity and effectiveness, it changes the mean
brightness of the enhanced image and introduces a high level of noise and
distortion. To address these problems, this paper proposes image enhancement
using fuzzy intensity measure and adaptive clipping histogram equalization
(FIMHE). FIMHE uses fuzzy intensity measure to first segment the histogram of
the original image, and then clip the histogram adaptively in order to prevent
excessive image enhancement. Experiments on the Berkeley database and
CVF-UGR-Image database show that FIMHE outperforms state-of-the-art histogram
equalization-based methods.
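For intuition only, below is a minimal sketch of the segment-then-clip idea described in the abstract, for an 8-bit grayscale image. It is not the authors' FIMHE: the fuzzy intensity measure is replaced by a simple split of the grey levels at the image mean, the clip limit is an assumed fixed multiple of the mean bin count, and the names (clipped_equalize, clip_fraction) are illustrative.

```python
import numpy as np

def clipped_equalize(image: np.ndarray, clip_fraction: float = 1.5) -> np.ndarray:
    """Equalize each intensity segment separately after clipping its histogram."""
    assert image.dtype == np.uint8 and image.ndim == 2
    out = np.zeros_like(image)
    # Illustrative segmentation: split the grey levels at the image mean.
    # (FIMHE instead segments the histogram with a fuzzy intensity measure.)
    threshold = int(image.mean())
    for lo, hi in [(0, threshold), (threshold + 1, 255)]:
        mask = (image >= lo) & (image <= hi)
        if not mask.any():
            continue
        levels = hi - lo + 1
        hist, _ = np.histogram(image[mask], bins=levels, range=(lo, hi + 1))
        hist = hist.astype(float)
        # Clipping: redistribute the excess above the clip limit uniformly,
        # which bounds the slope of the mapping and limits over-enhancement.
        clip_limit = clip_fraction * hist.mean()
        excess = np.maximum(hist - clip_limit, 0.0).sum()
        hist = np.minimum(hist, clip_limit) + excess / levels
        cdf = np.cumsum(hist) / hist.sum()
        # Map each segment onto its own grey-level range [lo, hi] so that
        # the overall ordering of dark and bright regions is preserved.
        mapping = (lo + np.round(cdf * (hi - lo))).astype(np.uint8)
        out[mask] = mapping[image[mask].astype(int) - lo]
    return out
```

Larger values of clip_fraction make the clipping inactive, so the result approaches ordinary per-segment histogram equalization; smaller values keep the output closer to the input's brightness distribution. FIMHE's stated contribution is to choose the segmentation (via the fuzzy intensity measure) and the clipping adaptively rather than with fixed heuristics like the ones above.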
Related papers
- Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation [54.96563068182733]
We propose Modality Adaptation with text-to-image Diffusion Models (MADM) for the semantic segmentation task.
MADM utilizes text-to-image diffusion models pre-trained on extensive image-text pairs to enhance the model's cross-modality capabilities.
We show that MADM achieves state-of-the-art adaptation performance across various modality tasks, including images to depth, infrared, and event modalities.
arXiv Detail & Related papers (2024-10-29T03:49:40Z) - Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning [49.275450836604726]
We present a novel frequency-based Self-Supervised Learning (SSL) approach that significantly enhances the efficacy of pre-training.
We employ a two-branch framework empowered by knowledge distillation, enabling the model to take both the filtered and original images as input.
arXiv Detail & Related papers (2024-09-16T15:10:07Z) - Image-GS: Content-Adaptive Image Representation via 2D Gaussians [55.15950594752051]
We propose Image-GS, a content-adaptive image representation.
Using anisotropic 2D Gaussians as the basis, Image-GS shows high memory efficiency, supports fast random access, and offers a natural level-of-detail stack.
General efficiency and fidelity of Image-GS are validated against several recent neural image representations and industry-standard texture compressors.
We hope this research offers insights for developing new applications that require adaptive quality and resource control, such as machine perception, asset streaming, and content generation.
arXiv Detail & Related papers (2024-07-02T00:45:21Z) - Inhomogeneous illumination image enhancement under ex-tremely low visibility condition [3.534798835599242]
Imaging through dense fog presents unique challenges, with essential visual information crucial for applications like object detection and recognition obscured, thereby hindering conventional image processing methods.
We introduce a novel method that adaptively filters background illumination based on Structural Differential and Integral Filtering to enhance only vital signal information.
Our findings demonstrate that our proposed method significantly enhances signal clarity under extremely low visibility conditions and outperforms existing techniques, offering substantial improvements for deep fog imaging applications.
arXiv Detail & Related papers (2024-04-26T16:09:42Z) - CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Reflectance-Oriented Probabilistic Equalization for Image Enhancement [28.180598784444605]
We propose a novel 2D histogram equalization approach.
It assumes intensity occurrence and co-occurrence to be dependent on each other and derives the distribution of intensity occurrence.
It can sufficiently improve the brightness of low-light images while avoiding over-enhancement in normal-light images.
arXiv Detail & Related papers (2022-09-14T04:20:06Z) - Reflectance-Guided, Contrast-Accumulated Histogram Equalization [31.060143365318623]
We propose a histogram equalization-based method that adapts to the data-dependent requirements of brightness enhancement.
This method incorporates the spatial information provided by image context in density estimation for discriminative histogram equalization.
arXiv Detail & Related papers (2022-09-14T04:14:30Z) - Underwater Image Enhancement Using Convolutional Neural Network [1.1602089225841632]
Histogram equalization is a technique for adjusting image intensities to enhance contrast.
The colours of the image are retained using a convolutional neural network model trained on datasets of underwater images.
arXiv Detail & Related papers (2021-09-18T12:01:14Z) - DeepSim: Semantic similarity metrics for learned image registration [6.789370732159177]
We propose a semantic similarity metric for image registration.
Our approach learns dataset-specific features that drive the optimization of a learning-based registration model.
arXiv Detail & Related papers (2020-11-11T12:35:07Z) - Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid
Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in back-lit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.