An Adaptive Spatial-Temporal Local Feature Difference Method for
Infrared Small-moving Target Detection
- URL: http://arxiv.org/abs/2309.02054v1
- Date: Tue, 5 Sep 2023 08:56:20 GMT
- Title: An Adaptive Spatial-Temporal Local Feature Difference Method for
Infrared Small-moving Target Detection
- Authors: Yongkang Zhao, Chuang Zhu, Yuan Li, Shuaishuai Wang, Zihan Lan,
Yuanyuan Qiao
- Abstract summary: We propose a novel method called spatial-temporal local feature difference (STLFD) with adaptive background suppression (ABS).
Our approach utilizes filters in the spatial and temporal domains and performs pixel-level ABS on the output to enhance the contrast between the target and the background.
Our experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods for infrared small-moving target detection.
- Score: 8.466660143185493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting small moving targets accurately in infrared (IR) image sequences is
a significant challenge. To address this problem, we propose a novel method
called spatial-temporal local feature difference (STLFD) with adaptive
background suppression (ABS). Our approach utilizes filters in the spatial and
temporal domains and performs pixel-level ABS on the output to enhance the
contrast between the target and the background. The proposed method comprises
three steps. First, we obtain three temporal frame images based on the current
frame image and extract two feature maps using the designed spatial domain and
temporal domain filters. Next, we fuse the information of the spatial domain
and temporal domain to produce the spatial-temporal feature maps and suppress
noise using our pixel-level ABS module. Finally, we obtain the segmented binary
map by applying a threshold. Our experimental results demonstrate that the
proposed method outperforms existing state-of-the-art methods for infrared
small-moving target detection.
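The three steps of the abstract can be sketched in code. This is a minimal illustration only: the paper's actual spatial and temporal filter designs and its ABS formulation are not given in this summary, so simple local-mean differences, frame differencing, and a deviation-based weighting stand in for them.

```python
import numpy as np

def stlfd_abs(frames, t=1, thresh_ratio=0.5):
    """Hedged sketch of the STLFD + ABS pipeline.

    frames: sequence of 2-D grayscale images; the current frame is frames[-1].
    The filters below are illustrative stand-ins, not the paper's designs.
    """
    cur = frames[-1].astype(float)

    # Step 1a: spatial feature map -- difference between each pixel and the
    # mean of its 3x3 neighbourhood (stand-in for the spatial-domain filter).
    pad = np.pad(cur, 1, mode="edge")
    local_mean = sum(
        pad[i:i + cur.shape[0], j:j + cur.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    spatial = np.clip(cur - local_mean, 0, None)

    # Step 1b: temporal feature map -- difference against an earlier frame
    # (stand-in for the temporal-domain filter over three frame images).
    prev = frames[-1 - t].astype(float)
    temporal = np.abs(cur - prev)

    # Step 2: fuse the spatial and temporal maps, then apply pixel-level
    # adaptive background suppression (here: weight each pixel by its
    # normalized deviation from the global mean response).
    fused = spatial * temporal
    abs_weight = np.abs(fused - fused.mean()) / (fused.std() + 1e-8)
    suppressed = fused * abs_weight

    # Step 3: threshold to obtain the segmented binary map.
    th = suppressed.max() * thresh_ratio
    return (suppressed > th).astype(np.uint8)
```

On a synthetic pair of frames where a single bright pixel appears, only that pixel survives both the spatial contrast test and the temporal difference, so it alone passes the threshold.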
Related papers
- DSLO: Deep Sequence LiDAR Odometry Based on Inconsistent Spatio-temporal Propagation [66.8732965660931]
This paper introduces a 3D point cloud sequence learning model based on inconsistent spatio-temporal propagation for LiDAR odometry, termed DSLO.
It consists of a pyramid structure with a sequential pose module, a hierarchical pose refinement module, and a temporal feature propagation module.
arXiv Detail & Related papers (2024-09-01T15:12:48Z) - Triple-domain Feature Learning with Frequency-aware Memory Enhancement for Moving Infrared Small Target Detection [12.641645684148136]
Infrared small target detection presents significant challenges due to small target sizes and low contrast against backgrounds.
We propose a new Triple-domain Strategy (Tridos) with frequency-aware memory enhancement in the spatio-temporal domain for infrared small target detection.
Inspired by the human visual system, our memory enhancement is designed to capture the spatial relations of infrared targets among video frames.
arXiv Detail & Related papers (2024-06-11T05:21:30Z) - Dim Small Target Detection and Tracking: A Novel Method Based on Temporal Energy Selective Scaling and Trajectory Association [8.269449428849867]
In this article, we analyze the difficulty based on spatial features and the feasibility based on temporal features of realizing effective detection.
According to this analysis, we use multiple frames as a detection unit and propose a detection method based on temporal energy selective scaling (TESS).
For a target-present pixel, the target passing through the pixel produces a weak transient disturbance on the intensity temporal profile (ITP) formed by that pixel.
We use a well-designed function to amplify the transient disturbance, suppress the background and noise components, and output the trajectory of the target over the multi-frame detection unit.
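The amplify-and-suppress idea on an ITP can be sketched as follows. The paper's actual scaling function is not given in this summary; an exponential weight on the deviation from the temporal median is used here as a stand-in.

```python
import numpy as np

def tess_profile(itp, k=3.0):
    """Hedged sketch of temporal energy selective scaling on a single
    intensity temporal profile (ITP) -- one pixel's intensity across the
    frames of a multi-frame detection unit. The exponential scaling is an
    illustrative stand-in for the paper's function.
    """
    itp = np.asarray(itp, dtype=float)
    baseline = np.median(itp)        # slowly varying background level
    dev = itp - baseline             # transient disturbance plus noise
    sigma = np.std(dev) + 1e-8
    # Energy-selective scaling: deviations well above the noise level are
    # amplified; small, noise-like deviations are suppressed toward zero.
    scale = np.exp(k * (np.abs(dev) / sigma - 1.0))
    return dev * scale
```

On a constant profile with one transient spike, the spike frame dominates the scaled output while background frames stay at zero, which is what makes per-pixel trajectory extraction over the detection unit possible.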
arXiv Detail & Related papers (2024-05-15T03:02:21Z) - Frequency Perception Network for Camouflaged Object Detection [51.26386921922031]
We propose a novel learnable and separable frequency perception mechanism driven by the semantic hierarchy in the frequency domain.
Our entire network adopts a two-stage model, including a frequency-guided coarse localization stage and a detail-preserving fine localization stage.
Compared with the currently existing models, our proposed method achieves competitive performance in three popular benchmark datasets.
arXiv Detail & Related papers (2023-08-17T11:30:46Z) - DETR4D: Direct Multi-View 3D Object Detection with Sparse Attention [50.11672196146829]
3D object detection with surround-view images is an essential task for autonomous driving.
We propose DETR4D, a Transformer-based framework that explores sparse attention and direct feature query for 3D object detection in multi-view images.
arXiv Detail & Related papers (2022-12-15T14:18:47Z) - Cross-Modality Domain Adaptation for Freespace Detection: A Simple yet
Effective Baseline [21.197212665408262]
Freespace detection aims at classifying each pixel of the image captured by the camera as drivable or non-drivable.
We develop a cross-modality domain adaptation framework which exploits both RGB images and surface normal maps generated from depth images.
To better bridge the domain gap between source domain (synthetic data) and target domain (real-world data), we also propose a Selective Feature Alignment (SFA) module.
arXiv Detail & Related papers (2022-10-06T15:31:49Z) - Position-Aware Relation Learning for RGB-Thermal Salient Object
Detection [3.115635707192086]
We propose a position-aware relation learning network (PRLNet) for RGB-T SOD based on swin transformer.
PRLNet explores the distance and direction relationships between pixels to strengthen intra-class compactness and inter-class separation.
In addition, we constitute a pure transformer encoder-decoder network to enhance multispectral feature representation for RGB-T SOD.
arXiv Detail & Related papers (2022-09-21T07:34:30Z) - Fast Fourier Convolution Based Remote Sensor Image Object Detection for
Earth Observation [0.0]
We propose a Frequency-aware Feature Pyramid Framework (FFPF) for remote sensing object detection.
F-ResNet is proposed to perceive the spectral context information by plugging the frequency domain convolution into each stage of the backbone.
The BSFPN is designed to use a bilateral sampling strategy and skipping connection to better model the association of object features at different scales.
arXiv Detail & Related papers (2022-09-01T15:50:58Z) - Shape Prior Non-Uniform Sampling Guided Real-time Stereo 3D Object
Detection [59.765645791588454]
The recently introduced RTS3D builds an efficient 4D Feature-Consistency Embedding space for the intermediate representation of objects without depth supervision.
We propose a shape prior non-uniform sampling strategy that performs dense sampling in outer region and sparse sampling in inner region.
Our proposed method achieves a 2.57% improvement in AP3D with almost no extra network parameters.
arXiv Detail & Related papers (2021-06-18T09:14:55Z) - Bi-Dimensional Feature Alignment for Cross-Domain Object Detection [71.85594342357815]
We propose a novel unsupervised cross-domain detection model.
It exploits the annotated data in a source domain to train an object detector for a different target domain.
The proposed model mitigates the cross-domain representation divergence for object detection.
arXiv Detail & Related papers (2020-11-14T03:03:11Z) - Spatial-Spectral Residual Network for Hyperspectral Image
Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
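The separable 3D convolution mentioned for SSRNet can be demonstrated numerically: when a 3D kernel factors into a 1-D spectral kernel and a 2-D spatial kernel, applying the two in sequence reproduces the full 3D convolution while storing far fewer weights. The naive valid-mode correlation below is purely illustrative, not the network's implementation.

```python
import numpy as np

def conv3d_valid(vol, ker):
    """Naive valid-mode 3-D correlation, sufficient for the demonstration."""
    D, H, W = vol.shape
    d, h, w = ker.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i + d, j:j + h, k:k + w] * ker)
    return out

# A separable 3-D kernel is the outer product of a 1-D spectral kernel and
# a 2-D spatial kernel: d + h*w weights instead of d*h*w for the full kernel.
spectral = np.array([1.0, 2.0, 1.0])           # along the band axis
spatial = np.array([[1.0, 2.0], [2.0, 4.0]])   # in the image plane
full = spectral[:, None, None] * spatial[None, :, :]

rng = np.random.default_rng(0)
cube = rng.random((5, 6, 6))                   # bands x height x width

out_full = conv3d_valid(cube, full)
out_sep = conv3d_valid(conv3d_valid(cube, spectral[:, None, None]),
                       spatial[None, :, :])
assert np.allclose(out_full, out_sep)          # sequential == full 3-D conv
```

The equivalence holds because correlation is linear and the factored kernel's outer product recombines exactly into the full kernel, which is why the separable form trades no accuracy for its memory and compute savings (for kernels that do factor this way).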
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.