VL-NMS: Breaking Proposal Bottlenecks in Two-Stage Visual-Language
Matching
- URL: http://arxiv.org/abs/2105.05636v1
- Date: Wed, 12 May 2021 13:05:25 GMT
- Title: VL-NMS: Breaking Proposal Bottlenecks in Two-Stage Visual-Language
Matching
- Authors: Wenbo Ma, Long Chen, Hanwang Zhang, Jian Shao, Yueting Zhuang, Jun
Xiao
- Abstract summary: The prevailing framework for matching multimodal inputs is based on a two-stage process.
We argue that these methods overlook an obvious mismatch between the roles of proposals in the two stages.
We propose VL-NMS, which is the first method to yield query-aware proposals at the first stage.
- Score: 75.71523183166799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prevailing framework for matching multimodal inputs is based on a
two-stage process: 1) detecting proposals with an object detector and 2)
matching text queries with proposals. Existing two-stage solutions mostly focus
on the matching step. In this paper, we argue that these methods overlook an
obvious mismatch between the roles of proposals in the two stages: they
generate proposals solely based on the detection confidence (i.e.,
query-agnostic), hoping that the proposals contain all instances mentioned in
the text query (i.e., query-aware). Due to this mismatch, chances are that
proposals relevant to the text query are suppressed during the filtering
process, which in turn bounds the matching performance. To this end, we propose
VL-NMS, which is the first method to yield query-aware proposals at the first
stage. VL-NMS regards all mentioned instances as critical objects, and
introduces a lightweight module to predict a score for aligning each proposal
with a critical object. These scores can guide the NMS operation to filter out
proposals irrelevant to the text query, increasing the recall of critical
objects, resulting in a significantly improved matching performance. Since
VL-NMS is agnostic to the matching step, it can be easily integrated into any
state-of-the-art two-stage matching methods. We validate the effectiveness of
VL-NMS on two multimodal matching tasks, namely referring expression grounding
and image-text matching. Extensive ablation studies on several baselines and
benchmarks consistently demonstrate the superiority of VL-NMS.
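To make the mechanism in the abstract concrete, the sketch below shows how per-proposal query-relatedness scores could steer NMS: a relatedness score (standing in for the output of the lightweight alignment module) is fused with the detector confidence, and standard greedy NMS then ranks and suppresses proposals by the fused score. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the fusion rule, the weight `alpha`, and the function names `iou` and `query_aware_nms` are all hypothetical.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-8)

def query_aware_nms(boxes, det_scores, rel_scores, iou_thr=0.5, alpha=0.5):
    """Greedy NMS ranked by a fusion of detection confidence and query
    relatedness. `rel_scores` stands in for the output of a lightweight
    module scoring how well each proposal aligns with the instances
    mentioned in the text query (hypothetical interface)."""
    fused = (1.0 - alpha) * det_scores + alpha * rel_scores  # assumed fusion rule
    order = np.argsort(-fused)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps < iou_thr]
    return keep

# Toy usage: box 1 overlaps box 0 and has a low detector score, but it is
# highly related to the query, so the fused ranking keeps it and suppresses
# its query-agnostic duplicate.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
det_scores = np.array([0.9, 0.3, 0.8])
rel_scores = np.array([0.1, 0.95, 0.2])
print(query_aware_nms(boxes, det_scores, rel_scores))  # -> [1, 2]
```

In this toy example, plain confidence-ranked NMS would keep boxes 0 and 2 and suppress the query-relevant box 1, which is exactly the recall bottleneck the abstract describes; fusing in the relatedness score reverses that outcome.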
Related papers
- Dual DETRs for Multi-Label Temporal Action Detection [46.05173000284639]
Temporal Action Detection (TAD) aims to identify the action boundaries and the corresponding category within untrimmed videos.
We propose a new dual-level query-based TAD framework, namely DualDETR, to detect actions at both the instance level and the boundary level.
We evaluate DualDETR on three challenging multi-label TAD benchmarks.
arXiv Detail & Related papers (2024-03-31T11:43:39Z)
- Temporal-aware Hierarchical Mask Classification for Video Semantic Segmentation [62.275143240798236]
Video semantic segmentation datasets have limited categories per video.
Fewer than 10% of queries could be matched to receive meaningful gradient updates during VSS training.
Our method achieves state-of-the-art performance on the latest challenging VSS benchmark VSPW without bells and whistles.
arXiv Detail & Related papers (2023-09-14T20:31:06Z)
- Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization [98.66318678030491]
Weakly-supervised temporal action localization aims to localize and recognize actions in untrimmed videos with only video-level category labels during training.
We propose a novel Proposal-based Multiple Instance Learning (P-MIL) framework that directly classifies the candidate proposals in both the training and testing stages.
arXiv Detail & Related papers (2023-05-29T02:48:04Z)
- Arguments to Key Points Mapping with Prompt-based Learning [0.0]
We propose two approaches to the argument-to-keypoint mapping task.
The first approach is to incorporate prompt engineering for fine-tuning the pre-trained language models.
The second approach utilizes prompt-based learning in PLMs to generate intermediary texts.
arXiv Detail & Related papers (2022-11-28T01:48:29Z)
- Context-aware Proposal Network for Temporal Action Detection [47.72048484299649]
This report presents our first-place solution for the temporal action detection task in the CVPR-2022 ActivityNet Challenge.
The task aims to localize temporal boundaries of action instances with specific classes in long untrimmed videos.
We argue that the generated proposals contain rich contextual information, which may benefit detection confidence prediction.
arXiv Detail & Related papers (2022-06-18T01:43:43Z)
- Contrastive Proposal Extension with LSTM Network for Weakly Supervised Object Detection [52.86681130880647]
Weakly supervised object detection (WSOD) has attracted more and more attention since it only uses image-level labels and can save huge annotation costs.
We propose a new method that compares the initial proposals with their extended counterparts to optimize the initial proposals.
Experiments on PASCAL VOC 2007, VOC 2012 and MS-COCO datasets show that our method achieves state-of-the-art results.
arXiv Detail & Related papers (2021-10-14T16:31:57Z)
- Natural Language Video Localization with Learnable Moment Proposals [40.91060659795612]
We propose a novel model termed LPNet (Learnable Proposal Network for NLVL) with a fixed set of learnable moment proposals.
In this paper, we demonstrate the effectiveness of LPNet over existing state-of-the-art methods.
arXiv Detail & Related papers (2021-09-22T12:18:58Z)
- Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding [80.46288064284084]
Ref-NMS is the first method to yield expression-aware proposals at the first stage.
Ref-NMS regards all nouns in the expression as critical objects, and introduces a lightweight module to predict a score for aligning each box with a critical object.
Since Ref-NMS is agnostic to the grounding step, it can be easily integrated into any state-of-the-art two-stage method.
arXiv Detail & Related papers (2020-09-03T05:04:12Z)