TRACER: Extreme Attention Guided Salient Object Tracing Network
- URL: http://arxiv.org/abs/2112.07380v1
- Date: Tue, 14 Dec 2021 13:20:07 GMT
- Title: TRACER: Extreme Attention Guided Salient Object Tracing Network
- Authors: Min Seok Lee, WooSeok Shin, and Sung Won Han
- Abstract summary: We propose TRACER, which detects salient objects with explicit edges by incorporating attention guided tracing modules.
A comparison with 13 existing methods reveals that TRACER achieves state-of-the-art performance on five benchmark datasets.
- Score: 3.2434811678562676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing studies on salient object detection (SOD) focus on extracting
distinct objects with edge information and aggregating multi-level features to
improve SOD performance. To achieve satisfactory performance, these methods
employ refined edge information and reduce the discrepancy between multi-level
features. However, performance gain and computational efficiency cannot be
attained simultaneously, which has motivated us to study the inefficiencies in
existing encoder-decoder structures and avoid this trade-off. We propose
TRACER, which detects salient objects with
explicit edges by incorporating attention guided tracing modules. We employ a
masked edge attention module at the end of the first encoder using a fast
Fourier transform to propagate the refined edge information to the downstream
feature extraction. In the multi-level aggregation phase, the union attention
module identifies the complementary channel and important spatial information.
To improve decoder performance and computational efficiency, we minimize
decoder block usage with an object attention module. This module extracts
undetected objects and edge information from refined channels and spatial
representations. Subsequently, we propose an adaptive pixel intensity loss
function to handle relatively important pixels, unlike conventional loss
functions, which treat all pixels equally. A comparison with 13 existing methods
reveals that TRACER achieves state-of-the-art performance on five benchmark
datasets. In particular, TRACER-Efficient3 (TE3) outperforms LDF, an existing
method, while requiring 1.8x fewer learning parameters and running 5x faster.
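The abstract describes the masked edge attention module only at a high level. As a minimal sketch of the FFT-based high-frequency edge extraction it alludes to (the function name, NumPy backend, and `radius` hyperparameter are illustrative assumptions, not details from the paper):

```python
import numpy as np

def fft_edge_map(feature: np.ndarray, radius: int = 8) -> np.ndarray:
    """High-pass filter a 2D feature map in the frequency domain.

    Zeroes out a low-frequency disk of the given radius around the
    spectrum centre, so only high-frequency (edge-like) content
    survives the inverse transform. `radius` is a toy hyperparameter,
    not a value from the paper.
    """
    h, w = feature.shape
    spectrum = np.fft.fftshift(np.fft.fft2(feature))

    # Boolean mask that suppresses frequencies near the spectrum centre.
    ys, xs = np.ogrid[:h, :w]
    dist = np.sqrt((ys - h / 2) ** 2 + (xs - w / 2) ** 2)
    high_pass = dist > radius

    edges = np.fft.ifft2(np.fft.ifftshift(spectrum * high_pass))
    return np.abs(edges)

# Toy usage: a filled square responds strongly along its boundary.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edge_map = fft_edge_map(img)
print(edge_map.max() > edge_map[24:40, 24:40].mean())  # True
```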
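Similarly, the union attention module is said to identify complementary channel and important spatial information. A generic channel-then-spatial attention sketch in PyTorch conveys the idea; the module structure, reduction ratio, and kernel size below are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel-then-spatial attention, in the spirit of the
    union attention module; not the paper's exact design."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: a single-channel gate over H x W.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)     # reweight complementary channels
        return x * self.spatial_gate(x)  # highlight important locations

x = torch.randn(2, 32, 64, 64)
print(ChannelSpatialAttention(32)(x).shape)  # torch.Size([2, 32, 64, 64])
```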
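Finally, the adaptive pixel intensity loss weights pixels by their relative importance rather than uniformly. One common way to realise such a weighting, shown here as a hedged sketch rather than the paper's exact formulation, is to boost the binary cross-entropy of pixels that disagree with their local ground-truth neighbourhood, i.e. boundary regions; `kernel` and `gamma` are illustrative hyperparameters:

```python
import torch
import torch.nn.functional as F

def adaptive_pixel_bce(pred: torch.Tensor, gt: torch.Tensor,
                       kernel: int = 15, gamma: float = 5.0) -> torch.Tensor:
    """Weighted BCE in which boundary-adjacent pixels count more.

    The weight grows where a pixel differs from the local mean of the
    ground truth, which happens near object boundaries. `kernel` and
    `gamma` are toy hyperparameters, not values from the paper.
    """
    local_mean = F.avg_pool2d(gt, kernel, stride=1, padding=kernel // 2)
    weight = 1.0 + gamma * (local_mean - gt).abs()

    bce = F.binary_cross_entropy_with_logits(pred, gt, reduction="none")
    return (weight * bce).sum() / weight.sum()

pred = torch.randn(2, 1, 64, 64)             # logits from a decoder
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(adaptive_pixel_bce(pred, gt).item())
```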
Related papers
- EffiPerception: an Efficient Framework for Various Perception Tasks [6.1522068855729755]
EffiPerception is a framework that explores common learning patterns shared across perception tasks.
It achieves strong accuracy and robustness with relatively low memory cost on several perception tasks.
EffiPerception shows overall accuracy-speed-memory improvements across four detection and segmentation tasks.
arXiv Detail & Related papers (2024-03-18T23:22:37Z)
- Cross-Cluster Shifting for Efficient and Effective 3D Object Detection in Autonomous Driving [69.20604395205248]
We present a new 3D point-based detector model, named Shift-SSD, for precise 3D object detection in autonomous driving.
We introduce an intriguing Cross-Cluster Shifting operation to unleash the representation capacity of the point-based detector.
We conduct extensive experiments on the KITTI, Waymo, and nuScenes datasets, and the results demonstrate the state-of-the-art performance of Shift-SSD in both detection accuracy and runtime efficiency.
arXiv Detail & Related papers (2024-03-10T10:36:32Z)
- UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation [93.88170217725805]
We propose a 3D medical image segmentation approach, named UNETR++, that offers both high-quality segmentation masks as well as efficiency in terms of parameters, compute cost, and inference speed.
The core of our design is the introduction of a novel efficient paired attention (EPA) block that efficiently learns spatial and channel-wise discriminative features.
Our evaluations on five benchmarks, Synapse, BTCV, ACDC, BraTS, and Decathlon-Lung, reveal the effectiveness of our contributions in terms of both efficiency and accuracy.
arXiv Detail & Related papers (2022-12-08T18:59:57Z)
- Ret3D: Rethinking Object Relations for Efficient 3D Object Detection in Driving Scenes [82.4186966781934]
We introduce a simple, efficient, and effective two-stage detector, termed as Ret3D.
At the core of Ret3D is the utilization of novel intra-frame and inter-frame relation modules.
With negligible extra overhead, Ret3D achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-08-18T03:48:58Z)
- SALISA: Saliency-based Input Sampling for Efficient Video Object Detection [58.22508131162269]
We propose SALISA, a novel non-uniform SALiency-based Input SAmpling technique for video object detection.
We show that SALISA significantly improves the detection of small objects.
arXiv Detail & Related papers (2022-04-05T17:59:51Z)
- EPMF: Efficient Perception-aware Multi-sensor Fusion for 3D Semantic Segmentation [62.210091681352914]
We study multi-sensor fusion for 3D semantic segmentation, which is important for many applications such as autonomous driving and robotics.
In this work, we investigate a collaborative fusion scheme called perception-aware multi-sensor fusion (PMF).
We propose a two-stream network to extract features from the two modalities separately. The extracted features are fused by effective residual-based fusion modules.
arXiv Detail & Related papers (2021-06-21T10:47:26Z)
- CE-FPN: Enhancing Channel Information for Object Detection [12.954675966833372]
Feature pyramid network (FPN) has been an effective framework to extract multi-scale features in object detection.
We present a novel channel enhancement feature pyramid network (CE-FPN) with three simple yet effective modules to alleviate the information loss caused by channel reduction in FPN.
Our experiments show that CE-FPN achieves competitive performance compared to state-of-the-art FPN-based detectors on MS COCO benchmark.
arXiv Detail & Related papers (2021-03-19T05:51:53Z)
- CFC-Net: A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote Sensing Images [0.9462808515258465]
In this paper, we discuss the role of discriminative features in object detection.
We then propose a Critical Feature Capturing Network (CFC-Net) to improve detection accuracy.
We show that our method achieves superior detection performance compared with many state-of-the-art approaches.
arXiv Detail & Related papers (2021-01-18T02:31:09Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and temporal separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.