SAMF: Small-Area-Aware Multi-focus Image Fusion for Object Detection
- URL: http://arxiv.org/abs/2401.08357v2
- Date: Wed, 31 Jan 2024 12:18:10 GMT
- Title: SAMF: Small-Area-Aware Multi-focus Image Fusion for Object Detection
- Authors: Xilai Li, Xiaosong Li, Haishu Tan, Jinyang Li
- Abstract summary: Existing multi-focus image fusion (MFIF) methods often fail to preserve the uncertain transition region.
This study proposes a new small-area-aware MFIF algorithm for enhancing object detection capability.
- Score: 6.776991635789825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing multi-focus image fusion (MFIF) methods often fail to preserve the
uncertain transition region and detect small focus areas within large defocused
regions accurately. To address this issue, this study proposes a new
small-area-aware MFIF algorithm for enhancing object detection capability.
First, we enhance the pixel attributes within the small focus and boundary
regions, which are subsequently combined with visual saliency detection to
obtain the pre-fusion results used to discriminate the distribution of focused
pixels. To accurately ensure pixel focus, we consider the source image as a
combination of focused, defocused, and uncertain regions and propose a
three-region segmentation strategy. Finally, we design an effective pixel
selection rule to generate segmentation decision maps and obtain the final
fusion results. Experiments demonstrated that the proposed method can
accurately detect small and smooth focus areas while improving object detection
performance, outperforming existing methods in both subjective and objective
evaluations. The source code is available at https://github.com/ixilai/SAMF.
Related papers
- A Novel Defocus-Blur Region Detection Approach Based on DCT Feature and PCNN Structure [4.086098684345016]
This research proposes a novel hybrid defocus-blur detection approach based on Discrete Cosine Transform (DCT) coefficients and a Pulse-Coupled Neural Network (PCNN) structure.
Visual and quantitative evaluations illustrate that the proposed approach outperforms the referenced algorithms in both accuracy and efficiency.
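The DCT side of this approach can be illustrated with a toy block-level sharpness score (the PCNN segmentation stage is omitted here). The 8x8 block size and the 2x2 low-frequency corner are illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import uniform_filter

def dct_sharpness(block):
    """Ratio of high-frequency DCT energy to total energy: blurred
    blocks concentrate their energy in the low-frequency corner."""
    c = dctn(block, norm="ortho")
    total = np.sum(c * c)
    low = np.sum(c[:2, :2] ** 2)   # DC + lowest frequencies
    return (total - low) / (total + 1e-12)
```

Comparing a block before and after smoothing shows the intended behavior: smoothing removes high-frequency energy, so the score drops.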
arXiv Detail & Related papers (2023-10-12T10:58:10Z)
- Centralized Feature Pyramid for Object Detection [53.501796194901964]
Visual feature pyramid has shown its superiority in both effectiveness and efficiency in a wide range of applications.
In this paper, we propose a Centralized Feature Pyramid (CFP) for object detection, which is based on a globally explicit centralized feature regulation.
arXiv Detail & Related papers (2022-10-05T08:32:54Z)
- Center Feature Fusion: Selective Multi-Sensor Fusion of Center-based Objects [26.59231069298659]
We propose a novel approach for building robust 3D object detection systems for autonomous vehicles.
We leverage center-based detection networks in both the camera and LiDAR streams to identify relevant object locations.
On the nuScenes dataset, we outperform the LiDAR-only baseline by 4.9% mAP while fusing up to 100x fewer features than other fusion methods.
arXiv Detail & Related papers (2022-09-26T17:51:18Z)
- AF$_2$: Adaptive Focus Framework for Aerial Imagery Segmentation [86.44683367028914]
Aerial imagery segmentation has some unique challenges, the most critical one among which lies in foreground-background imbalance.
We propose the Adaptive Focus Framework (AF$_2$), which adopts a hierarchical segmentation procedure and focuses on adaptively utilizing multi-scale representations.
AF$_2$ significantly improves accuracy on three widely used aerial benchmarks while matching the speed of mainstream methods.
arXiv Detail & Related papers (2022-02-18T10:14:45Z)
- Point-Level Region Contrast for Object Detection Pre-Training [147.47349344401806]
We present point-level region contrast, a self-supervised pre-training approach for the task of object detection.
Our approach performs contrastive learning by directly sampling individual point pairs from different regions.
Compared to an aggregated representation per region, our approach is more robust to the change in input region quality.
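Contrasting individual point pairs rather than aggregated region features can be sketched with a toy InfoNCE-style loss in NumPy. The feature shapes, the notion of "region" as a per-point label, and the temperature value here are simplifications of the actual pre-training setup.

```python
import numpy as np

def point_pair_contrastive_loss(feats, region_ids, temperature=0.1):
    """Toy point-level contrast: points from the same region are
    positives, points from different regions are negatives."""
    # feats: (N, D) per-point features; region_ids: (N,) region labels
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)        # exclude self-pairs
    pos = region_ids[:, None] == region_ids[None, :]
    np.fill_diagonal(pos, False)
    # log-partition over all other points, averaged over positive pairs
    log_z = np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -(sim - log_z)[pos].mean()
```

When same-region points have near-identical features the loss approaches zero, which is the behavior a pre-training objective of this kind rewards.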
arXiv Detail & Related papers (2022-02-09T18:56:41Z)
- Towards Reducing Severe Defocus Spread Effects for Multi-Focus Image Fusion via an Optimization Based Strategy [22.29205225281694]
Multi-focus image fusion (MFF) is a popular technique to generate an all-in-focus image.
This paper presents an optimization-based approach to reduce defocus spread effects.
Experiments conducted on the real-world dataset verify superiority of the proposed model.
arXiv Detail & Related papers (2020-12-29T09:26:41Z)
- Addressing Visual Search in Open and Closed Set Settings [8.928169373673777]
We first present a method for predicting pixel-level objectness from a low-resolution gist image, which we then use to select regions for performing object detection locally at high resolution.
Second, we propose a novel strategy for open-set visual search that seeks to find all instances of a target class, which may be previously unseen.
arXiv Detail & Related papers (2020-12-11T17:21:28Z)
- MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion [29.405149234582623]
Multi-Focus Image Fusion (MFIF) is a promising technique to obtain all-in-focus images.
One of the research trends in MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB).
We propose a network termed MFIF-GAN to generate focus maps in which the foreground regions are correctly larger than the corresponding objects.
arXiv Detail & Related papers (2020-09-21T09:36:34Z)
- Every Pixel Matters: Center-aware Feature Alignment for Domain Adaptive Object Detector [95.51517606475376]
A domain adaptive object detector aims to adapt itself to unseen domains that may contain variations of object appearance, viewpoints or backgrounds.
We propose a domain adaptation framework that accounts for each pixel via predicting pixel-wise objectness and centerness.
arXiv Detail & Related papers (2020-08-19T17:57:03Z)
- Rethinking of the Image Salient Object Detection: Object-level Semantic Saliency Re-ranking First, Pixel-wise Saliency Refinement Latter [62.26677215668959]
We propose a lightweight, weakly supervised deep network to coarsely locate semantically salient regions.
We then fuse multiple off-the-shelf deep models on these semantically salient regions as the pixel-wise saliency refinement.
Our method is simple yet effective, and is the first attempt to treat salient object detection mainly as an object-level semantic re-ranking problem.
arXiv Detail & Related papers (2020-08-10T07:12:43Z)
- Saliency Enhancement using Gradient Domain Edges Merging [65.90255950853674]
We develop a method to merge edges with saliency maps to improve saliency detection performance.
This leads to our proposed saliency enhancement using edges (SEE), with an average improvement of at least 3.4 times on the DUT-OMRON dataset.
The SEE algorithm is split into two parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
arXiv Detail & Related papers (2020-02-11T14:04:56Z)
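One way to picture the edge-and-saliency merge is a simple blend of a normalised edge magnitude into a saliency map. The Sobel operator and the `alpha` weight here are illustrative stand-ins for the paper's gradient-domain merging, not its actual procedure.

```python
import numpy as np
from scipy.ndimage import sobel

def enhance_saliency_with_edges(saliency, image, alpha=0.5):
    """Blend normalised edge magnitude into a saliency map so that
    object boundaries are sharpened in the enhanced result."""
    gx = sobel(image, axis=1)            # horizontal gradient
    gy = sobel(image, axis=0)            # vertical gradient
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-12         # normalise to [0, 1]
    out = (1.0 - alpha) * saliency + alpha * edges
    return np.clip(out, 0.0, 1.0)
```

On a step-edge image, the enhanced map rises along the edge and stays near the original saliency value elsewhere.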
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.