Multi-Focus Image Fusion Based on Spatial Frequency (SF) and Consistency Verification (CV) in DCT Domain
- URL: http://arxiv.org/abs/2305.11265v1
- Date: Thu, 18 May 2023 19:09:32 GMT
- Title: Multi-Focus Image Fusion Based on Spatial Frequency (SF) and Consistency Verification (CV) in DCT Domain
- Authors: Krishnendu K. S.
- Abstract summary: Wireless Visual Sensor Networks (WVSN) use multi-focus image fusion to create a more accurate output image.
This paper introduces an algorithm that utilizes discrete cosine transform (DCT) standards to fuse multi-focus images in WVSNs.
The results indicate that it improves the visual quality of the output image and outperforms other DCT-based techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-focus imaging is a technique of capturing different parts of a
particular object or scene in focus across separate images. Wireless Visual
Sensor Networks (WVSN) use multi-focus image fusion, which combines two or more
images to create a more accurate output image that describes the scene better
than any individual input image. WVSNs have various applications, including
video surveillance, monitoring, and tracking; high-level analysis of these
networks can therefore benefit biometrics. This
paper introduces an algorithm that utilizes discrete cosine transform (DCT)
standards to fuse multi-focus images in WVSNs. The spatial frequency (SF) of
the corresponding blocks from the source images determines the fusion
criterion. The blocks with higher spatial frequencies make up the DCT
presentation of the fused image, and the Consistency Verification (CV)
procedure is used to enhance the output image quality. The proposed fusion
method was tested on multiple pairs of multi-focus images coded in the JPEG
standard to evaluate the fusion performance, and the results indicate that it
improves the visual quality of the output image and outperforms other DCT-based
techniques.
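
The core of the method can be made concrete with a short sketch. The following Python code is a minimal illustration, not the paper's implementation: the 8x8 block size, the standard spatial-frequency definition, and the 3x3 majority (median) filter used for consistency verification are common choices in DCT-domain fusion work and are assumptions here. For readability, SF is computed in the pixel domain, whereas the paper derives an equivalent measure directly from the DCT coefficients of JPEG-coded blocks, which is what makes the approach cheap for WVSN nodes.

```python
# Minimal sketch of SF-based block fusion with consistency verification (CV).
# Assumes two registered, same-size grayscale images (e.g., float arrays).
import numpy as np
from scipy.ndimage import median_filter

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2), with row/column frequencies taken as the
    RMS of horizontal/vertical first differences."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return np.hypot(rf, cf)

def fuse_sf_cv(img_a, img_b, bs=8):
    """Fuse two multi-focus images block by block."""
    nh, nw = img_a.shape[0] // bs, img_a.shape[1] // bs
    # 1) Decision map: for each block position, pick the source image
    #    whose block has the higher spatial frequency (i.e., is sharper).
    decision = np.zeros((nh, nw), dtype=np.uint8)
    for i in range(nh):
        for j in range(nw):
            ba = img_a[i*bs:(i+1)*bs, j*bs:(j+1)*bs]
            bb = img_b[i*bs:(i+1)*bs, j*bs:(j+1)*bs]
            decision[i, j] = 0 if spatial_frequency(ba) >= spatial_frequency(bb) else 1
    # 2) Consistency verification: a 3x3 median filter acts as a majority
    #    vote, flipping isolated decisions that disagree with their
    #    neighbours and thus reducing blocking artifacts in the output.
    decision = median_filter(decision, size=3)
    # 3) Assemble the fused image from the selected blocks. In the DCT
    #    domain the same selection is applied to coefficient blocks, so
    #    no inverse transform is needed on the sensor node.
    fused = np.empty_like(img_a)
    for i in range(nh):
        for j in range(nw):
            src = img_a if decision[i, j] == 0 else img_b
            fused[i*bs:(i+1)*bs, j*bs:(j+1)*bs] = src[i*bs:(i+1)*bs, j*bs:(j+1)*bs]
    return fused
```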
Related papers
- A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding [76.44979557843367]
We propose a novel multi-view stereo (MVS) framework that gets rid of the depth range prior.
We introduce a Multi-view Disparity Attention (MDA) module to aggregate long-range context information.
We explicitly estimate the quality of the current pixel corresponding to sampled points on the epipolar line of the source image.
arXiv Detail & Related papers (2024-11-04T08:50:16Z)
- Bridging the Gap between Multi-focus and Multi-modal: A Focused Integration Framework for Multi-modal Image Fusion [5.417493475406649]
Multi-modal image fusion (MMIF) integrates valuable information from images of different modalities into a single fused image.
This paper proposes an MMIF framework for joint focused integration and modality information extraction.
The proposed algorithm can surpass the state-of-the-art methods in visual perception and quantitative evaluation.
arXiv Detail & Related papers (2023-11-03T12:58:39Z)
- Multi-Spectral Image Stitching via Spatial Graph Reasoning [52.27796682972484]
We propose a spatial graph reasoning based multi-spectral image stitching method.
We embed multi-scale complementary features from the same view position into a set of nodes.
By introducing long-range coherence along spatial and channel dimensions, the complementarity of pixel relations and channel interdependencies aids in the reconstruction of aligned multi-view features.
arXiv Detail & Related papers (2023-07-31T15:04:52Z)
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z)
- Multi-scale frequency separation network for image deblurring [10.511076996096117]
We present a new method called multi-scale frequency separation network (MSFS-Net) for image deblurring.
MSFS-Net captures the low- and high-frequency information of an image at multiple scales.
Experiments on benchmark datasets show that the proposed network achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-06-01T23:48:35Z)
- Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net, guaranteeing material consistency to enhance the detailed appearance of the restored HS product.
arXiv Detail & Related papers (2022-05-07T23:40:36Z)
- LocalTrans: A Multiscale Local Transformer Network for Cross-Resolution Homography Estimation [52.63874513999119]
Cross-resolution image alignment is a key problem in multiscale gigapixel photography.
Existing deep homography methods neglect the explicit formulation of correspondences between the inputs, which leads to degraded accuracy under cross-resolution challenges.
We propose a local transformer network embedded within a multiscale structure to explicitly learn correspondences between the multimodal inputs.
arXiv Detail & Related papers (2021-06-08T02:51:45Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Multi-focus Image Fusion for Visual Sensor Networks [2.7808182112731528]
Image fusion in visual sensor networks (VSNs) aims to combine information from multiple images of the same scene into a single image with more information than any individual input.
Image fusion methods based on the discrete cosine transform (DCT) are less complex and more time-efficient when images and video are coded in DCT-based standards.
An efficient algorithm for the fusion of multi-focus images in the DCT domain is proposed.
arXiv Detail & Related papers (2020-09-28T20:39:35Z)
- MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion [29.405149234582623]
Multi-Focus Image Fusion (MFIF) is a promising technique to obtain all-in-focus images.
One of the research trends in MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB).
We propose a network termed MFIF-GAN to generate focus maps in which the foreground regions are correctly larger than the corresponding objects.
arXiv Detail & Related papers (2020-09-21T09:36:34Z)
- FFusionCGAN: An end-to-end fusion method for few-focus images using conditional GAN in cytopathological digital slides [0.0]
Multi-focus image fusion technologies combine images captured at different focus depths into a single image in which most objects are in focus.
This paper proposes a novel method for generating fused images from single-focus or few-focus images based on a conditional generative adversarial network (GAN).
By integrating the network into the generative model, the quality of the generated fused images is effectively improved.
arXiv Detail & Related papers (2020-01-03T02:13:47Z)