Re-Attention Transformer for Weakly Supervised Object Localization
- URL: http://arxiv.org/abs/2208.01838v1
- Date: Wed, 3 Aug 2022 04:34:28 GMT
- Title: Re-Attention Transformer for Weakly Supervised Object Localization
- Authors: Hui Su, Yue Ye, Zhiwei Chen, Mingli Song, Lechao Cheng
- Abstract summary: We present a re-attention mechanism termed token refinement transformer (TRT) that captures the object-level semantics to guide the localization well.
Specifically, TRT introduces a novel module named token priority scoring module (TPSM) to suppress the effects of background noise while focusing on the target object.
- Score: 45.417606565085116
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Weakly supervised object localization is a challenging task which aims to
localize objects with coarse annotations such as image categories. Existing
deep network approaches are mainly based on the class activation map, which focuses
on highlighting discriminative local regions while ignoring the full object. In
addition, the emerging transformer-based techniques tend to place excessive
emphasis on the background, which impedes the ability to identify complete objects.
To address these issues, we present a re-attention mechanism termed token
refinement transformer (TRT) that captures the object-level semantics to guide
the localization well. Specifically, TRT introduces a novel module named token
priority scoring module (TPSM) to suppress the effects of background noise
while focusing on the target object. Then, we incorporate the class activation
map as the semantically aware input to restrain the attention map to the target
object. Extensive experiments on two benchmarks showcase the superiority of our
proposed method against existing methods with image category annotations.
Source code is available in
\url{https://github.com/su-hui-zz/ReAttentionTransformer}.
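As a hedged illustration of the localization step the abstract describes (coupling a class activation map with a transformer attention map and thresholding the fused map into a box), here is a minimal NumPy sketch; the names, shapes, and threshold are assumptions for illustration, not the paper's implementation:
```python
import numpy as np

def cam_restrained_attention(attn_map, cam, threshold=0.5):
    """Couple a class-agnostic attention map with a class activation map (CAM)
    and threshold the fused result into a bounding box.

    attn_map : (H, W) attention map from the transformer, arbitrary scale.
    cam      : (H, W) class activation map for the predicted class.
    Returns (x_min, y_min, x_max, y_max) in map coordinates, or None.
    """
    # Normalize both maps to [0, 1] so they can be combined element-wise.
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-8)
    fused = norm(attn_map) * norm(cam)       # CAM suppresses background attention

    mask = fused >= threshold * fused.max()  # relative threshold, common in WSOL
    ys, xs = np.where(mask)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```
In practice the fused map would be upsampled to the input resolution before the box is extracted and scored.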
Related papers
- Improving Object Detection via Local-global Contrastive Learning [27.660633883387753]
We present a novel image-to-image translation method that specifically targets cross-domain object detection.
We learn to represent objects by contrasting local-global information.
This affords investigation of an under-explored challenge: obtaining performant detection under domain shifts.
arXiv Detail & Related papers (2024-10-07T14:18:32Z)
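A minimal sketch of what "contrasting local-global information" could look like as an InfoNCE-style objective; the encoder outputs, names, and temperature here are illustrative assumptions, not the paper's actual loss:
```python
import torch
import torch.nn.functional as F

def local_global_info_nce(local_emb, global_emb, temperature=0.07):
    """InfoNCE-style loss that pulls each local (object-level) embedding toward
    the global embedding of its own image and pushes it away from other images.

    local_emb  : (B, D) embeddings of object crops, one per image.
    global_emb : (B, D) embeddings of the corresponding whole images.
    """
    local_emb = F.normalize(local_emb, dim=1)
    global_emb = F.normalize(global_emb, dim=1)
    logits = local_emb @ global_emb.t() / temperature       # (B, B) similarities
    targets = torch.arange(local_emb.size(0), device=local_emb.device)
    return F.cross_entropy(logits, targets)                 # positives on the diagonal
```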
- Background Activation Suppression for Weakly Supervised Object Localization and Semantic Segmentation [84.62067728093358]
Weakly supervised object localization and semantic segmentation aim to localize objects using only image-level labels.
A new paradigm has emerged that generates a foreground prediction map to achieve pixel-level localization.
This paper presents two astonishing experimental observations on the object localization learning process.
arXiv Detail & Related papers (2023-09-22T15:44:10Z)
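To make the background-suppression idea concrete, here is a hedged sketch of a loss term that penalizes classification activation leaking into regions the foreground prediction map marks as background; this is an illustrative interpretation, not the paper's exact formulation:
```python
import torch

def background_suppression_loss(cls_activation, fg_map, eps=1e-8):
    """Illustrative loss that suppresses classification activation in regions
    the foreground prediction map treats as background.

    cls_activation : (B, H, W) activation map for the ground-truth class.
    fg_map         : (B, H, W) predicted foreground probabilities in [0, 1].
    """
    bg_weight = 1.0 - fg_map                 # high where the map says "background"
    # Mean activation that leaks into background regions; driving it down
    # encourages the foreground map to cover the whole activated object.
    leak = (cls_activation * bg_weight).sum(dim=(1, 2)) / (bg_weight.sum(dim=(1, 2)) + eps)
    return leak.mean()
```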
- Semantic-Constraint Matching Transformer for Weakly Supervised Object Localization [31.039698757869974]
Weakly supervised object localization (WSOL) strives to learn to localize objects with only image-level supervision.
Previous CNN-based methods suffer from partial activation issues, concentrating on the object's discriminative part instead of the entire entity scope.
We propose a novel Semantic-Constraint Matching Network (SCMN) via a transformer to converge on the divergent activation.
arXiv Detail & Related papers (2023-09-04T03:20:31Z)
- Rethinking the Localization in Weakly Supervised Object Localization [51.29084037301646]
Weakly supervised object localization (WSOL) is one of the most popular and challenging tasks in computer vision.
Recently, dividing WSOL into two parts (class-agnostic object localization and object classification) has become the state-of-the-art pipeline for this task.
We propose to replace SCR with a binary-class detector (BCD) for localizing multiple objects, where the detector is trained by discriminating the foreground and background.
arXiv Detail & Related papers (2023-08-11T14:38:51Z)
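A minimal sketch of a class-agnostic binary foreground/background head of the kind the summary describes; the architecture and names are assumptions for illustration only:
```python
import torch
import torch.nn as nn

class BinaryForegroundHead(nn.Module):
    """Class-agnostic head that scores every spatial location as foreground
    vs. background on top of a backbone feature map."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)  # 1-channel foreground logit

    def forward(self, feats):                                   # feats: (B, C, H, W)
        return torch.sigmoid(self.score(feats)).squeeze(1)      # (B, H, W) foreground map
```
Such a head would typically be trained with a binary cross-entropy loss against pseudo foreground/background labels, with a separate classifier providing the image-level category.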
- MOST: Multiple Object localization with Self-supervised Transformers for object discovery [97.47075050779085]
We present Multiple Object localization with Self-supervised Transformers (MOST).
MOST uses features of transformers trained using self-supervised learning to localize multiple objects in real world images.
We show MOST can be used for self-supervised pre-training of object detectors, and yields consistent improvements on fully- and semi-supervised object detection and unsupervised region proposal generation.
arXiv Detail & Related papers (2023-04-11T17:57:27Z)
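As a rough, simplified stand-in for multi-object discovery from self-supervised ViT patch tokens (not MOST's actual algorithm), one can threshold a patch-level saliency proxy and split it into connected components:
```python
import numpy as np
from scipy import ndimage

def patch_features_to_boxes(patch_feats, grid_hw, rel_thresh=0.6):
    """Very simplified multi-object proposal from ViT patch tokens.

    patch_feats : (N, D) self-supervised patch features, N = H * W.
    grid_hw     : (H, W) patch grid shape.
    Returns boxes in patch-grid coordinates, one per connected salient region.
    """
    h, w = grid_hw
    # Saliency proxy: distance of each patch from the mean feature; object
    # patches tend to stand out from the (usually dominant) background.
    dist = np.linalg.norm(patch_feats - patch_feats.mean(axis=0), axis=1).reshape(h, w)
    mask = dist >= rel_thresh * dist.max()

    labels, n = ndimage.label(mask)          # split salient patches into components
    boxes = []
    for k in range(1, n + 1):
        ys, xs = np.where(labels == k)
        boxes.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return boxes
```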
- Constrained Sampling for Class-Agnostic Weakly Supervised Object Localization [10.542859578763068]
Self-supervised vision transformers can generate accurate localization maps of the objects in an image.
We propose leveraging the multiple maps generated by the different transformer heads to acquire pseudo-labels for training a weakly-supervised object localization model.
arXiv Detail & Related papers (2022-09-09T19:58:38Z)
- ViTOL: Vision Transformer for Weakly Supervised Object Localization [0.735996217853436]
Weakly supervised object localization (WSOL) aims at predicting object locations in an image using only image-level category labels.
Common challenges that image classification models encounter when localizing objects are: (a) they tend to look at the most discriminative features in an image, which confines the localization map to a very small region; (b) the localization maps are class-agnostic, and the models highlight objects of multiple classes in the same image.
arXiv Detail & Related papers (2022-04-14T06:16:34Z)
- Background-aware Classification Activation Map for Weakly Supervised Object Localization [14.646874544729426]
We propose a background-aware classification activation map (B-CAM) to simultaneously learn localization scores of both object and background.
Our B-CAM can be trained in an end-to-end manner based on a proposed stagger classification loss.
Experiments show that our B-CAM outperforms one-stage WSOL methods on the CUB-200, OpenImages and VOC2012 datasets.
arXiv Detail & Related papers (2021-12-29T03:12:09Z)
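A hedged sketch of the background-aware idea: if both an object map and a background map are available, background-looking locations can be suppressed before a box is extracted. This is an illustration only, not the paper's B-CAM or its stagger classification loss:
```python
import numpy as np

def background_aware_map(object_cam, background_cam, eps=1e-8):
    """Combine an object activation map with an explicitly modelled background
    activation map so that background regions are pushed down.

    object_cam, background_cam : (H, W) non-negative activation maps.
    """
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + eps)
    obj, bg = norm(object_cam), norm(background_cam)
    # Locations that look more like background than object are suppressed.
    return np.clip(obj - bg, 0.0, None)
```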
- TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization [112.46381729542658]
Weakly supervised object localization (WSOL) is a challenging problem when only image category labels are given.
We introduce the token semantic coupled attention map (TS-CAM) to take full advantage of the self-attention mechanism in the visual transformer for long-range dependency extraction.
arXiv Detail & Related papers (2021-03-27T09:43:16Z)
- Weakly-Supervised Semantic Segmentation via Sub-category Exploration [73.03956876752868]
We propose a simple yet effective approach to enforce the network to pay attention to other parts of an object.
Specifically, we perform clustering on image features to generate pseudo sub-categories labels within each annotated parent class.
We conduct extensive analysis to validate the proposed method and show that our approach performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2020-08-03T20:48:31Z)
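The clustering step described above is straightforward to sketch: within each annotated parent class, cluster image features and treat the cluster indices as pseudo sub-category labels. scikit-learn's KMeans and the label encoding are assumptions for illustration:
```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_subcategory_labels(features, parent_labels, k=3, seed=0):
    """Assign pseudo sub-category labels by clustering features within each parent class.

    features      : (N, D) image features from the current network.
    parent_labels : (N,) annotated image-level class ids.
    Returns (N,) sub-category ids encoded as parent_id * k + cluster_id.
    """
    sub_labels = np.zeros(len(parent_labels), dtype=np.int64)
    for c in np.unique(parent_labels):
        idx = np.where(parent_labels == c)[0]
        n_clusters = min(k, len(idx))        # guard against tiny classes
        clusters = KMeans(n_clusters=n_clusters, random_state=seed,
                          n_init=10).fit_predict(features[idx])
        sub_labels[idx] = c * k + clusters
    return sub_labels
```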
This list is automatically generated from the titles and abstracts of the papers on this site.