LF Tracy: A Unified Single-Pipeline Approach for Salient Object
Detection in Light Field Cameras
- URL: http://arxiv.org/abs/2401.16712v1
- Date: Tue, 30 Jan 2024 03:17:02 GMT
- Title: LF Tracy: A Unified Single-Pipeline Approach for Salient Object
Detection in Light Field Cameras
- Authors: Fei Teng, Jiaming Zhang, Jiawei Liu, Kunyu Peng, Xina Cheng, Zhiyong
Li, Kailun Yang
- Abstract summary: We propose an efficient paradigm to adapt light field data to enhance Salient Object Detection (SOD).
Using only 28.9M parameters, the model achieves a 10% increase in accuracy with just 3M additional parameters compared to its backbone using RGB images, and an 86% improvement over its backbone using LF images.
- Score: 22.288764512594433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Leveraging the rich information extracted from light field (LF) cameras is
instrumental for dense prediction tasks. However, adapting light field data to
enhance Salient Object Detection (SOD) still follows the traditional RGB
methods and remains under-explored in the community. Previous approaches
predominantly employ a custom two-stream design to discover the implicit
angular feature within light field cameras, leading to significant information
isolation between different LF representations. In this study, we propose an
efficient paradigm (LF Tracy) to address this limitation. We eschew the
conventional specialized fusion and decoder architecture for a dual-stream
backbone in favor of a unified, single-pipeline approach. This comprises, first,
a simple yet effective data augmentation strategy called MixLD that bridges
spatial, depth, and implicit angular information across different LF
representations. A highly efficient information aggregation (IA) module is then
introduced to boost asymmetric feature-wise information fusion.
Owing to this innovative approach, our model surpasses the existing
state-of-the-art methods, particularly demonstrating a 23% improvement over
previous results on the latest large-scale PKU dataset. Using only 28.9M
parameters, the model achieves a 10% increase in accuracy with just 3M additional
parameters compared to its backbone using RGB images, and an 86% improvement
over its backbone using LF images. The source code will be made publicly available at
https://github.com/FeiBryantkit/LF-Tracy.
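
As a rough illustration of the single-pipeline idea, the sketch below mixes light field representations at the data level and then aggregates their features in one stream. The mixing rule, the names mixld_augment and InformationAggregation, and all shapes are assumptions made for exposition; they do not reproduce the authors' MixLD or IA implementations.

```python
# Minimal, illustrative single-pipeline LF-SOD sketch (assumptions, not the paper's code).
import torch
import torch.nn as nn


def mixld_augment(rgb, focal_stack, alpha=0.5):
    """Blend the all-in-focus RGB image with a randomly chosen focal-stack slice,
    so one shared backbone sees spatial, depth, and implicit angular cues in a
    single stream (hypothetical mixing rule)."""
    b, n, c, h, w = focal_stack.shape              # n focal slices per sample
    idx = torch.randint(0, n, (b,))
    slice_ = focal_stack[torch.arange(b), idx]     # one slice per sample
    lam = torch.empty(b, 1, 1, 1).uniform_(alpha, 1.0)
    return lam * rgb + (1.0 - lam) * slice_


class InformationAggregation(nn.Module):
    """Toy stand-in for an IA module: channel attention re-weights features from
    different LF representations before summing them."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feats):                      # list of (B, C, H, W) tensors
        return sum(self.gate(f) * f for f in feats)


# Usage with dummy data (all shapes are placeholders).
rgb = torch.randn(2, 3, 224, 224)
focal_stack = torch.randn(2, 12, 3, 224, 224)
mixed = mixld_augment(rgb, focal_stack)
backbone = nn.Conv2d(3, 64, 3, padding=1)          # stands in for the real backbone
fused = InformationAggregation(64)([backbone(mixed), backbone(rgb)])
print(fused.shape)                                 # torch.Size([2, 64, 224, 224])
```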
Related papers
- PGNeXt: High-Resolution Salient Object Detection via Pyramid Grafting Network [24.54269823691119]
We present an advanced study on more challenging high-resolution salient object detection (HRSOD) from both dataset and network framework perspectives.
To compensate for the lack of an HRSOD dataset, we carefully collect a large-scale high-resolution salient object detection dataset, called UHRSD.
All the images are finely annotated at the pixel level, far exceeding previous low-resolution SOD datasets.
arXiv Detail & Related papers (2024-08-02T09:31:21Z)
- Multi-Correlation Siamese Transformer Network with Dense Connection for 3D Single Object Tracking [14.47355191520578]
Point cloud-based 3D object tracking is an important task in autonomous driving.
It remains challenging to learn the correlation between the template and search branches effectively with the sparse LIDAR point cloud data.
We present a multi-correlation Siamese Transformer network that has multiple stages and carries out feature correlation at the end of each stage.
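
To make the per-stage correlation concrete, here is a generic sketch in which search-region features attend to template features at the end of every stage. The use of standard multi-head attention and all layer sizes are assumptions, not the paper's actual network.

```python
# Generic per-stage template/search correlation sketch (illustrative assumptions).
import torch
import torch.nn as nn


class CorrelationStage(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.encode = nn.Linear(dim, dim)          # shared per-stage feature transform
        self.corr = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, template, search):
        template, search = self.encode(template), self.encode(search)
        # search features attend to the template at the end of the stage
        fused, _ = self.corr(query=search, key=template, value=template)
        return template, search + fused


stages = nn.ModuleList([CorrelationStage(128) for _ in range(3)])
template = torch.randn(2, 64, 128)                 # (batch, template points, dim)
search = torch.randn(2, 256, 128)                  # (batch, search points, dim)
for stage in stages:
    template, search = stage(template, search)
print(search.shape)                                # torch.Size([2, 256, 128])
```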
arXiv Detail & Related papers (2023-12-18T09:33:49Z)
- Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search [60.626459715780605]
Given a descriptive text query, text-based person search aims to retrieve the best-matched target person from an image gallery.
Such a cross-modal retrieval task is quite challenging due to the significant modality gap, fine-grained differences, and the insufficiency of annotated data.
In this paper, we propose a simple yet effective dual Transformer model for text-based person search.
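
For context, a minimal contrastive image-text matching objective of the kind used in cross-modal retrieval is sketched below. The symmetric InfoNCE form, temperature, and placeholder encoders are assumptions; they are not the paper's specific loss or its proximity data generation.

```python
# Generic contrastive image-text matching loss (illustrative, not the paper's method).
import torch
import torch.nn.functional as F


def contrastive_matching_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over matched image/description pairs in a batch."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))        # i-th image matches i-th text
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


img_emb = torch.randn(8, 256)   # outputs of an image Transformer (placeholder)
txt_emb = torch.randn(8, 256)   # outputs of a text Transformer (placeholder)
print(contrastive_matching_loss(img_emb, txt_emb).item())
```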
arXiv Detail & Related papers (2023-11-15T16:26:49Z)
- OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation [48.828453331724965]
We propose an Omni-Aperture Fusion model (OAFuser) that extracts angular information from sub-aperture images to generate semantically consistent results.
The proposed OAFuser achieves state-of-the-art performance on four UrbanLF datasets in terms of all evaluation metrics.
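
A rough sketch of aggregating sub-aperture views with a shared encoder and learned per-view weights follows. The layout, names, and shapes are illustrative assumptions rather than the actual OAFuser architecture.

```python
# Toy sub-aperture view fusion (assumed design, not OAFuser itself).
import torch
import torch.nn as nn


class SubApertureFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Conv2d(3, channels, 3, padding=1)   # shared across views
        self.view_score = nn.Conv2d(channels, 1, 1)           # per-view importance

    def forward(self, views):                      # (B, V, 3, H, W) sub-aperture images
        b, v, c, h, w = views.shape
        feats = self.encoder(views.flatten(0, 1)).view(b, v, -1, h, w)
        scores = self.view_score(feats.flatten(0, 1)).view(b, v, 1, h, w)
        weights = scores.softmax(dim=1)            # normalise over the V views
        return (weights * feats).sum(dim=1)        # (B, C, H, W) fused feature


views = torch.randn(2, 9, 3, 128, 128)             # 3x3 sub-aperture grid, flattened
print(SubApertureFusion()(views).shape)            # torch.Size([2, 64, 128, 128])
```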
arXiv Detail & Related papers (2023-07-28T14:43:27Z)
- SufrinNet: Toward Sufficient Cross-View Interaction for Stereo Image Enhancement in The Dark [119.01585302856103]
Low-light stereo image enhancement (LLSIE) is a relatively new task to enhance the quality of visually unpleasant stereo images captured in dark conditions.
Current methods clearly suffer from two shortcomings: 1) insufficient cross-view interaction; 2) a lack of long-range dependency for intra-view learning.
We propose a novel LLSIE model, termed Sufficient Cross-View Interaction Network (SufrinNet).
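
The sketch below shows one generic way to realize cross-view interaction, letting left and right stereo features attend to each other. The module layout and sizes are assumptions, not SufrinNet's design.

```python
# Generic cross-view attention between stereo feature maps (illustrative only).
import torch
import torch.nn as nn


class CrossViewInteraction(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.l2r = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.r2l = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, left, right):                # (B, C, H, W) stereo feature maps
        b, c, h, w = left.shape
        l = left.flatten(2).transpose(1, 2)        # (B, H*W, C) token sequences
        r = right.flatten(2).transpose(1, 2)
        l_out, _ = self.r2l(l, r, r)               # left queries attend to the right view
        r_out, _ = self.l2r(r, l, l)               # right queries attend to the left view
        back = lambda x: x.transpose(1, 2).reshape(b, c, h, w)
        return left + back(l_out), right + back(r_out)


left, right = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
l_enh, r_enh = CrossViewInteraction()(left, right)
print(l_enh.shape, r_enh.shape)
```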
arXiv Detail & Related papers (2022-11-02T04:01:30Z)
- MBDF-Net: Multi-Branch Deep Fusion Network for 3D Object Detection [17.295359521427073]
We propose a Multi-Branch Deep Fusion Network (MBDF-Net) for 3D object detection.
In the first stage, our multi-branch feature extraction network utilizes Adaptive Attention Fusion modules to produce cross-modal fusion features from single-modal semantic features.
In the second stage, we use a region of interest (RoI) -pooled fusion module to generate enhanced local features for refinement.
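
As an illustration of adaptive cross-modal fusion, the toy module below predicts per-pixel weights for camera and LiDAR features and blends them. It is a placeholder, not MBDF-Net's actual Adaptive Attention Fusion module.

```python
# Toy attention-weighted fusion of camera and LiDAR features (assumed layout).
import torch
import torch.nn as nn


class AdaptiveAttentionFusion(nn.Module):
    def __init__(self, channels=128):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, 1),             # one weight map per modality
        )

    def forward(self, img_feat, pts_feat):         # both (B, C, H, W), points projected to image plane
        weights = self.gate(torch.cat([img_feat, pts_feat], dim=1)).softmax(dim=1)
        return weights[:, 0:1] * img_feat + weights[:, 1:2] * pts_feat


img_feat = torch.randn(2, 128, 44, 80)
pts_feat = torch.randn(2, 128, 44, 80)
print(AdaptiveAttentionFusion()(img_feat, pts_feat).shape)   # torch.Size([2, 128, 44, 80])
```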
arXiv Detail & Related papers (2021-08-29T15:40:15Z)
- EPMF: Efficient Perception-aware Multi-sensor Fusion for 3D Semantic Segmentation [62.210091681352914]
We study multi-sensor fusion for 3D semantic segmentation for many applications, such as autonomous driving and robotics.
In this work, we investigate a collaborative fusion scheme called perception-aware multi-sensor fusion (PMF).
We propose a two-stream network to extract features from the two modalities separately. The extracted features are fused by effective residual-based fusion modules.
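
A minimal reading of residual-based fusion is sketched below, where a correction computed from both streams is added back onto the LiDAR branch. Sizes and layout are placeholders, not the exact PMF module.

```python
# Toy residual-based fusion of two feature streams (assumed design).
import torch
import torch.nn as nn


class ResidualFusion(nn.Module):
    def __init__(self, channels=96):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, lidar_feat, cam_feat):
        # the fused correction is added back onto the LiDAR stream as a residual
        return lidar_feat + self.reduce(torch.cat([lidar_feat, cam_feat], dim=1))


lidar_feat = torch.randn(1, 96, 64, 512)           # range-view LiDAR features
cam_feat = torch.randn(1, 96, 64, 512)             # camera features in an aligned view
print(ResidualFusion()(lidar_feat, cam_feat).shape)
```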
arXiv Detail & Related papers (2021-06-21T10:47:26Z)
- Dense Multiscale Feature Fusion Pyramid Networks for Object Detection in UAV-Captured Images [0.09065034043031667]
We propose a novel method called Dense Multiscale Feature Fusion Pyramid Networks (DMFFPN), which aims to obtain features that are as rich as possible.
Specifically, the dense connection is designed to fully utilize the representation from the different convolutional layers.
Experiments on the drone-based dataset VisDrone-DET suggest competitive performance of our method.
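
The toy block below shows the dense-connection idea, with every layer consuming the concatenation of all earlier outputs. It illustrates the principle only and is not the DMFFPN architecture.

```python
# Toy dense-connection block reusing all earlier layer outputs (illustrative).
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    def __init__(self, in_ch=32, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(num_layers)
        )

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connection
        return torch.cat(feats, dim=1)                     # keep all levels of detail


x = torch.randn(1, 32, 56, 56)
print(DenseBlock()(x).shape)                               # torch.Size([1, 160, 56, 56])
```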
arXiv Detail & Related papers (2020-12-19T10:05:31Z)
- Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose to adopt graph propagation to capture the observed spatial contexts.
We then apply the attention mechanism on the propagation, which encourages the network to model the contextual information adaptively.
Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two benchmarks.
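
For illustration, a symmetric gated fusion of two feature streams can look like the sketch below, where each branch is modulated by a gate predicted from the other. The gate design and shapes are assumptions, not ACMNet's exact module.

```python
# Toy symmetric gated fusion of two modality branches (assumed design).
import torch
import torch.nn as nn


class SymmetricGatedFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.gate_a = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate_b = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, feat_a, feat_b):
        # each branch is modulated by a gate predicted from the other branch
        fused_a = feat_a * self.gate_b(feat_b)
        fused_b = feat_b * self.gate_a(feat_a)
        return fused_a + fused_b


rgb_feat = torch.randn(2, 64, 60, 80)
depth_feat = torch.randn(2, 64, 60, 80)
print(SymmetricGatedFusion()(rgb_feat, depth_feat).shape)
```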
arXiv Detail & Related papers (2020-08-25T06:00:06Z)
- Cross-layer Feature Pyramid Network for Salient Object Detection [102.20031050972429]
We propose a novel Cross-layer Feature Pyramid Network to improve the progressive fusion in salient object detection.
The features distributed to each layer contain both semantics and salient details from all other layers simultaneously, while reducing the loss of important information.
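
A compact sketch of cross-layer feature distribution follows, in which every pyramid level aggregates resized copies of all levels. It is a generic illustration under the assumption of equal channel counts, not the paper's network.

```python
# Generic cross-layer feature distribution across pyramid levels (illustrative).
import torch
import torch.nn.functional as F


def cross_layer_distribute(features):
    """features: list of (B, C, H_i, W_i) maps from shallow to deep layers,
    all with the same channel count C (a simplifying assumption)."""
    out = []
    for target in features:
        size = target.shape[-2:]
        resized = [F.interpolate(f, size=size, mode='bilinear', align_corners=False)
                   for f in features]
        out.append(torch.stack(resized).mean(dim=0))    # each level sees all levels
    return out


feats = [torch.randn(1, 64, s, s) for s in (64, 32, 16, 8)]
for f in cross_layer_distribute(feats):
    print(f.shape)
```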
arXiv Detail & Related papers (2020-02-25T14:06:27Z)