Video Salient Object Detection via Contrastive Features and Attention Modules
- URL: http://arxiv.org/abs/2111.02368v1
- Date: Wed, 3 Nov 2021 17:40:32 GMT
- Title: Video Salient Object Detection via Contrastive Features and Attention Modules
- Authors: Yi-Wen Chen, Xiaojie Jin, Xiaohui Shen, Ming-Hsuan Yang
- Abstract summary: We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
- Score: 106.33219760012048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video salient object detection aims to find the most visually distinctive
objects in a video. To explore the temporal dependencies, existing methods
usually resort to recurrent neural networks or optical flow. However, these
approaches incur high computational costs and tend to accumulate inaccuracies
over time. In this paper, we propose a network with attention modules to learn
contrastive features for video salient object detection without such
computationally expensive temporal modeling. We develop a non-local
self-attention scheme to capture the global information in the video frame. A
co-attention formulation is utilized to combine the low-level and high-level
features. We further apply the contrastive learning to improve the feature
representations, where foreground region pairs from the same video are pulled
together, and foreground-background region pairs are pushed away in the latent
space. The intra-frame contrastive loss helps separate the foreground and
background features, and the inter-frame contrastive loss improves the temporal
consistency. We conduct extensive experiments on several benchmark datasets for
video salient object detection and unsupervised video object segmentation, and
show that the proposed method requires less computation, and performs favorably
against the state-of-the-art approaches.
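The intra- and inter-frame contrastive objective described in the abstract can be sketched as an InfoNCE-style loss that pulls two foreground embeddings together and pushes foreground away from background. The following PyTorch snippet is a minimal illustration under assumed tensor shapes; the function name, temperature value, and the idea of pre-pooled region embeddings are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def region_contrastive_loss(fg_a, fg_b, bg, temperature=0.1):
    """InfoNCE-style region contrastive loss (illustrative sketch).

    fg_a, fg_b: (D,) pooled foreground embeddings forming the positive
                pair -- from the same frame (intra-frame loss) or from
                two frames of the same video (inter-frame loss).
    bg:         (N, D) pooled background embeddings used as negatives.
    """
    # Work with unit-length embeddings so dot products are cosine similarities.
    fg_a = F.normalize(fg_a, dim=0)
    fg_b = F.normalize(fg_b, dim=0)
    bg = F.normalize(bg, dim=1)

    pos = torch.exp(torch.dot(fg_a, fg_b) / temperature)   # positive pair
    neg = torch.exp(bg @ fg_a / temperature).sum()         # background negatives
    return -torch.log(pos / (pos + neg))

# Toy example: a foreground pair with similar features and one
# dissimilar background region yields a small loss.
fg_a = torch.tensor([1.0, 0.0])
fg_b = torch.tensor([0.9, 0.1])
bg = torch.tensor([[0.0, 1.0]])
loss = region_contrastive_loss(fg_a, fg_b, bg)
```

Used with foreground pairs from different frames of the same video, minimizing this loss encourages the temporal consistency that the abstract attributes to the inter-frame term.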
Related papers
- Weakly Supervised Video Anomaly Detection and Localization with Spatio-Temporal Prompts [57.01985221057047]
This paper introduces a novel method that learns temporal prompt embeddings for weakly supervised video anomaly detection and localization (WSVADL) based on pre-trained vision-language models (VLMs).
Our method achieves state-of-the-art performance on three public benchmarks for the WSVADL task.
arXiv Detail & Related papers (2024-08-12T03:31:29Z) - Patch Spatio-Temporal Relation Prediction for Video Anomaly Detection [19.643936110623653]
Video Anomaly Detection (VAD) aims to identify abnormalities within a specific context and timeframe.
Recent deep learning-based VAD models have shown promising results by generating high-resolution frames.
We propose a self-supervised learning approach for VAD through an inter-patch relationship prediction task.
arXiv Detail & Related papers (2024-03-28T03:07:16Z) - Self-supervised Video Object Segmentation with Distillation Learning of Deformable Attention [29.62044843067169]
Video object segmentation is a fundamental research problem in computer vision.
We propose a new method for self-supervised video object segmentation based on distillation learning of deformable attention.
arXiv Detail & Related papers (2024-01-25T04:39:48Z) - Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z) - Spatial-Temporal Correlation and Topology Learning for Person Re-Identification in Videos [78.45050529204701]
We propose a novel framework to pursue discriminative and robust representation by modeling cross-scale spatial-temporal correlation.
CTL utilizes a CNN backbone and a key-point estimator to extract semantic local features from the human body.
It explores a context-reinforced topology to construct multi-scale graphs by considering both global contextual information and the physical connections of the human body.
arXiv Detail & Related papers (2021-04-15T14:32:12Z) - DS-Net: Dynamic Spatiotemporal Network for Video Salient Object Detection [78.04869214450963]
We propose a novel dynamic spatiotemporal network (DS-Net) for more effective fusion of temporal and spatial information.
We show that the proposed method achieves superior performance compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2020-12-09T06:42:30Z) - Fast Video Salient Object Detection via Spatiotemporal Knowledge Distillation [20.196945571479002]
We present a lightweight network tailored for video salient object detection.
Specifically, we combine a saliency guidance embedding structure and spatial knowledge distillation to refine the spatial features.
In the temporal aspect, we propose a temporal knowledge distillation strategy, which allows the network to learn robust temporal features.
arXiv Detail & Related papers (2020-10-20T04:48:36Z) - Video Anomaly Detection Using Pre-Trained Deep Convolutional Neural Nets and Context Mining [2.0646127669654835]
We show how to use pre-trained convolutional neural net models to perform feature extraction and context mining.
We derive contextual properties from the high-level features to further improve the performance of our video anomaly detection method.
arXiv Detail & Related papers (2020-10-06T00:26:14Z) - Spatio-Temporal Graph for Video Captioning with Knowledge Distillation [50.034189314258356]
We propose a graph model for video captioning that exploits object interactions in space and time.
Our model builds interpretable links and is able to provide explicit visual grounding.
To avoid correlations caused by the variable number of objects, we propose an object-aware knowledge distillation mechanism.
arXiv Detail & Related papers (2020-03-31T03:58:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.