DCT based Fusion of Variable Exposure Images for HDRI
- URL: http://arxiv.org/abs/2110.00312v1
- Date: Fri, 1 Oct 2021 10:55:09 GMT
- Title: DCT based Fusion of Variable Exposure Images for HDRI
- Authors: Vivek Ramakarishnan, Dnyaneshwar Jageshwar Pete
- Abstract summary: We propose a Discrete Cosine Transform (DCT)-based approach for fusing multiple exposure images.
The input image stack is processed in the transform domain by an averaging operation, and the inverse transform of the averaged coefficients yields the fused image.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Combining images with different exposure settings is of prime importance in
the field of computational photography. Both transform-domain and filtering-based
approaches can be used to fuse multiple exposure images into a single well-exposed
image. We propose a Discrete Cosine Transform (DCT)-based approach for fusing
multiple exposure images. The input image stack is processed in the transform domain
by an averaging operation, and the inverse transform is applied to the averaged
coefficients to generate the fused multiple-exposure image. Our experimental
observations lead us to conjecture that the DCT coefficients act as indicators of the
well-exposedness, contrast and saturation measures used in the traditional exposure
fusion approach, and that the averaging amounts to assigning equal weights to the DCT
coefficients in this non-parametric, non-pyramidal approach to fusing the multiple
exposure stack.
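As a concrete illustration of the pipeline described in the abstract, below is a minimal Python sketch of transform-domain fusion by equal-weight coefficient averaging. It assumes a stack of grayscale float images of identical size; the function name, the use of scipy's dctn/idctn, and the synthetic usage example are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: fuse an exposure stack by averaging 2-D DCT coefficients
# and inverting the transform (equal weights per coefficient), as described
# in the abstract. Grayscale-only; color handling and any block-wise
# processing are intentionally omitted.
import numpy as np
from scipy.fft import dctn, idctn


def fuse_exposures_dct(stack):
    """Fuse a list of differently exposed grayscale images (float arrays in
    [0, 1], all the same shape) via DCT-coefficient averaging."""
    # Forward 2-D DCT (orthonormal) of every exposure in the stack.
    coeffs = [dctn(img, norm="ortho") for img in stack]
    # Equal-weight average of the coefficients across the stack.
    mean_coeffs = np.mean(coeffs, axis=0)
    # Inverse 2-D DCT of the averaged coefficients gives the fused image.
    fused = idctn(mean_coeffs, norm="ortho")
    return np.clip(fused, 0.0, 1.0)


if __name__ == "__main__":
    # Hypothetical usage with a synthetic 3-image exposure stack.
    rng = np.random.default_rng(0)
    base = rng.random((64, 64))
    stack = [np.clip(base * g, 0, 1) for g in (0.5, 1.0, 2.0)]  # under/normal/over
    fused = fuse_exposures_dct(stack)
    print(fused.shape, fused.min(), fused.max())
```

Because the DCT is linear, this global equal-weight variant is mathematically equivalent to pixel-wise averaging of the inputs; the sketch is only meant to make the transform-domain flow of the abstract concrete.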
Related papers
- Scene-Segmentation-Based Exposure Compensation for Tone Mapping of High Dynamic Range Scenes [8.179779837795754]
We propose a novel scene-segmentation-based exposure compensation method for multi-exposure image fusion (MEF)-based tone mapping.
Our approach generates a stack of differently exposed images from an input HDR image and fuses them into a single image.
arXiv Detail & Related papers (2024-10-21T04:50:02Z) - A Dual Domain Multi-exposure Image Fusion Network based on the
Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via a Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
arXiv Detail & Related papers (2023-12-17T04:45:15Z) - SDDM: Score-Decomposed Diffusion Models on Manifolds for Unpaired
Image-to-Image Translation [96.11061713135385]
This work presents a new score-decomposed diffusion model to explicitly optimize the tangled distributions during image generation.
We equalize the refinement parts of the score function and energy guidance, which permits multi-objective optimization on the manifold.
SDDM outperforms existing SBDM-based methods with much fewer diffusion steps on several I2I benchmarks.
arXiv Detail & Related papers (2023-08-04T06:21:57Z) - DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z) - Dif-Fusion: Towards High Color Fidelity in Infrared and Visible Image
Fusion with Diffusion Models [54.952979335638204]
We propose a novel method with diffusion models, termed Dif-Fusion, to generate the distribution of the multi-channel input data.
Our method is more effective than other state-of-the-art image fusion methods, especially in color fidelity.
arXiv Detail & Related papers (2023-01-19T13:37:19Z) - A Hierarchical Transformation-Discriminating Generative Model for Few
Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z) - LADMM-Net: An Unrolled Deep Network For Spectral Image Fusion From
Compressive Data [6.230751621285322]
Hyperspectral (HS) and multispectral (MS) image fusion aims at estimating a high-resolution spectral image from a low-spatial-resolution HS image and a low-spectral-resolution MS image.
In this work, a deep learning architecture under the algorithm unrolling approach is proposed for solving the fusion problem from HS and MS compressive measurements.
arXiv Detail & Related papers (2021-03-01T12:04:42Z) - Multi-focus Image Fusion for Visual Sensor Networks [2.7808182112731528]
Image fusion in visual sensor networks (VSNs) aims to combine information from multiple images of the same scene into a single image with more information.
Image fusion methods based on the discrete cosine transform (DCT) are less complex and time-saving when used within DCT-based image and video coding standards.
An efficient algorithm for the fusion of multi-focus images in the DCT domain is proposed.
arXiv Detail & Related papers (2020-09-28T20:39:35Z) - A Novel adaptive optimization of Dual-Tree Complex Wavelet Transform for
Medical Image Fusion [0.0]
A multimodal image fusion algorithm based on the dual-tree complex wavelet transform (DT-CWT) and adaptive particle swarm optimization (APSO) is proposed.
Experiment results show that the proposed method is remarkably better than the method based on particle swarm optimization.
arXiv Detail & Related papers (2020-07-22T15:34:01Z) - Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid
Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z)