Multi-focus Image Fusion for Visual Sensor Networks
- URL: http://arxiv.org/abs/2009.13615v3
- Date: Fri, 2 Oct 2020 18:04:32 GMT
- Title: Multi-focus Image Fusion for Visual Sensor Networks
- Authors: Milad Abdollahzadeh, Touba Malekzadeh, Hadi Seyedarabi
- Abstract summary: Image fusion in visual sensor networks (VSNs) aims to combine information from multiple images of the same scene in order to form a single image with more information.
Image fusion methods based on the discrete cosine transform (DCT) are less complex and less time-consuming within DCT-based image and video coding standards.
An efficient algorithm for the fusion of multi-focus images in the DCT domain is proposed.
- Score: 2.7808182112731528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image fusion in visual sensor networks (VSNs) aims to combine information
from multiple images of the same scene in order to form a single image with more
information. Image fusion methods based on the discrete cosine transform (DCT) are
less complex and less time-consuming within DCT-based image and video coding
standards, which makes them well suited to VSN applications. In this paper, an
efficient algorithm for the fusion of multi-focus images in the DCT domain is
proposed. The sum of modified Laplacian (SML) of corresponding blocks of the source
images is used as a contrast criterion, and the blocks with the larger SML value are
selected for the output image. Experimental results on several images show that the
proposed algorithm improves both the subjective and objective quality of the fused
image relative to other DCT-based techniques.
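As a rough sketch of the selection rule described in the abstract, the NumPy fragment below computes the modified Laplacian of two grayscale source images, sums it over each 8x8 block (the SML), and keeps the block with the larger value. It illustrates the criterion in the spatial domain only; the paper's actual algorithm operates on DCT blocks, and the block size and step are illustrative assumptions.

```python
import numpy as np

def modified_laplacian(img, step=1):
    """ML(x, y) = |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|."""
    p = np.pad(img.astype(np.float64), step, mode="edge")
    c = p[step:-step, step:-step]                                  # the original image
    vert = np.abs(2 * c - p[:-2 * step, step:-step] - p[2 * step:, step:-step])
    horz = np.abs(2 * c - p[step:-step, :-2 * step] - p[step:-step, 2 * step:])
    return vert + horz

def fuse_sml(img_a, img_b, block=8):
    """Per-block selection: keep the block whose SML (sum of ML over the block) is larger."""
    ml_a = modified_laplacian(img_a)
    ml_b = modified_laplacian(img_b)
    fused = img_a.copy()                                           # default to image A
    h, w = img_a.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if ml_b[y:y+block, x:x+block].sum() > ml_a[y:y+block, x:x+block].sum():
                fused[y:y+block, x:x+block] = img_b[y:y+block, x:x+block]
    return fused
```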
Related papers
- A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding [76.44979557843367]
We propose a novel multi-view stereo (MVS) framework that gets rid of the depth range prior.
We introduce a Multi-view Disparity Attention (MDA) module to aggregate long-range context information.
We explicitly estimate the quality of the current pixel corresponding to sampled points on the epipolar line of the source image.
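The summary names a Multi-view Disparity Attention (MDA) module, but its exact design is not given here. As a generic reminder of the attention primitive that such aggregation modules build on, here is plain scaled dot-product attention in NumPy; this is not the paper's architecture.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Each query aggregates the values, weighted by query-key similarity.
    q: (n_q, d); k, v: (n_kv, d). Generic primitive, not the MDA module."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (n_q, n_kv) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ v                                 # (n_q, d) aggregated context
```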
arXiv Detail & Related papers (2024-11-04T08:50:16Z) - Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z) - Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z) - Bridging the Gap between Multi-focus and Multi-modal: A Focused
Integration Framework for Multi-modal Image Fusion [5.417493475406649]
Multi-modal image fusion (MMIF) integrates valuable information from different modality images into a fused one.
This paper proposes an MMIF framework for joint focused integration and modality information extraction.
The proposed algorithm can surpass the state-of-the-art methods in visual perception and quantitative evaluation.
arXiv Detail & Related papers (2023-11-03T12:58:39Z) - Multi-Focus Image Fusion Based on Spatial Frequency(SF) and Consistency
Verification(CV) in DCT Domain [0.0]
Wireless Visual Sensor Networks (WVSN) use multi-focus image fusion to create a more accurate output image.
This paper introduces an algorithm that utilizes discrete cosine transform (DCT) standards to fuse multi-focus images in WVSNs.
The results indicate that it improves the visual quality of the output image and outperforms other DCT-based techniques.
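For reference, the classical spatial-frequency sharpness measure that this paper adapts to the DCT domain can be sketched as below. The paper's DCT-domain formulation and its consistency-verification step are not reproduced here, and the normalization shown is one common convention.

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): row/column gradient energy as a sharpness score."""
    b = block.astype(np.float64)
    rf2 = np.mean(np.diff(b, axis=1) ** 2)   # row frequency: horizontal differences
    cf2 = np.mean(np.diff(b, axis=0) ** 2)   # column frequency: vertical differences
    return np.sqrt(rf2 + cf2)
```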
arXiv Detail & Related papers (2023-05-18T19:09:32Z) - LocalTrans: A Multiscale Local Transformer Network for Cross-Resolution
Homography Estimation [52.63874513999119]
Cross-resolution image alignment is a key problem in multiscale gigapixel photography.
Existing deep homography methods neglect the explicit formulation of correspondences between the inputs, which leads to degraded accuracy in cross-resolution challenges.
We propose a local transformer network embedded within a multiscale structure to explicitly learn correspondences between the multimodal inputs.
arXiv Detail & Related papers (2021-06-08T02:51:45Z) - LADMM-Net: An Unrolled Deep Network For Spectral Image Fusion From
Compressive Data [6.230751621285322]
Hyperspectral (HS) and multispectral (MS) image fusion aims at estimating a high-resolution spectral image from a low-spatial-resolution HS image and a low-spectral-resolution MS image.
In this work, a deep learning architecture under the algorithm unrolling approach is proposed for solving the fusion problem from HS and MS compressive measurements.
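The summary mentions the algorithm-unrolling approach. As a generic illustration of that idea (not LADMM-Net itself, which unrolls a linearized ADMM for the compressive fusion problem), the sketch below runs a fixed number of ISTA iterations for a sparse recovery objective, with per-iteration step sizes and thresholds standing in for the parameters a network would learn.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, A, steps=10, step_sizes=None, thresholds=None):
    """Fixed-depth ISTA for min_x 0.5*||Ax - y||^2 + lambda*||x||_1.
    Each iteration plays the role of one network layer; step_sizes and
    thresholds are the quantities unrolling would make learnable."""
    n = A.shape[1]
    lipschitz = np.linalg.norm(A, 2) ** 2            # squared spectral norm of A
    step_sizes = step_sizes or [1.0 / lipschitz] * steps
    thresholds = thresholds or [0.1] * steps
    x = np.zeros(n)
    for t in range(steps):
        grad = A.T @ (A @ x - y)                     # gradient of the data term
        x = soft_threshold(x - step_sizes[t] * grad, thresholds[t])
    return x
```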
arXiv Detail & Related papers (2021-03-01T12:04:42Z) - Deep Convolutional Sparse Coding Networks for Image Fusion [29.405149234582623]
Deep learning has emerged as an important tool for image fusion.
This paper presents three deep convolutional sparse coding (CSC) networks for three kinds of image fusion tasks.
arXiv Detail & Related papers (2020-05-18T04:12:01Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the goal of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z) - FFusionCGAN: An end-to-end fusion method for few-focus images using
conditional GAN in cytopathological digital slides [0.0]
Multi-focus image fusion technologies combine images taken at different focus depths into a single image in which most objects are in focus.
This paper proposes a novel method for generating fused images from single-focus or few-focus images based on a conditional generative adversarial network (GAN).
By integrating the network into the generative model, the quality of the generated fused images is effectively improved.
arXiv Detail & Related papers (2020-01-03T02:13:47Z)