DropPos: Pre-Training Vision Transformers by Reconstructing Dropped
Positions
- URL: http://arxiv.org/abs/2309.03576v2
- Date: Fri, 22 Sep 2023 00:54:47 GMT
- Title: DropPos: Pre-Training Vision Transformers by Reconstructing Dropped
Positions
- Authors: Haochen Wang, Junsong Fan, Yuxi Wang, Kaiyou Song, Tong Wang,
Zhaoxiang Zhang
- Abstract summary: We present DropPos, a novel pretext task designed to reconstruct Dropped Positions.
The code is publicly available at https://github.com/Haochen-Wang409/DropPos.
- Score: 63.61970125369834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As it is empirically observed that Vision Transformers (ViTs) are quite
insensitive to the order of input tokens, the need for an appropriate
self-supervised pretext task that enhances the location awareness of ViTs is
becoming evident. To address this, we present DropPos, a novel pretext task
designed to reconstruct Dropped Positions. The formulation of DropPos is
simple: we first drop a large random subset of positional embeddings and then
the model classifies the actual position of each non-overlapping patch among
all possible positions solely based on its visual appearance. To avoid
trivial solutions, we increase the difficulty of this task by keeping only a
subset of patches visible. Additionally, considering there may be different
patches with similar visual appearances, we propose position smoothing and
attentive reconstruction strategies to relax this classification problem, since
it is not necessary to reconstruct their exact positions in these cases.
Empirical evaluations of DropPos show strong capabilities. DropPos outperforms
supervised pre-training and achieves competitive results compared with
state-of-the-art self-supervised alternatives on a wide range of downstream
benchmarks. This suggests that explicitly encouraging spatial reasoning
abilities, as DropPos does, indeed contributes to the improved location
awareness of ViTs. The code is publicly available at
https://github.com/Haochen-Wang409/DropPos.
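The sketch below illustrates the pretext task described in the abstract: keep only a subset of patches visible, drop the positional embeddings of most of them, and train the model to classify each patch's true grid position, optionally with smoothed (distance-aware) targets in place of one-hot labels. It is a minimal reading of the abstract, not the authors' implementation; the encoder and head interfaces and the hyper-parameter names (keep_ratio, pos_keep_ratio, sigma) are illustrative assumptions.

```python
# Minimal DropPos-style objective (sketch; see the official repo for the real code).
import torch
import torch.nn.functional as F


def smoothed_position_targets(num_patches_per_side: int, sigma: float = 1.0) -> torch.Tensor:
    """Soft targets: for each true position, a Gaussian over 2-D grid distance."""
    n = num_patches_per_side
    coords = torch.stack(torch.meshgrid(
        torch.arange(n), torch.arange(n), indexing="ij"), dim=-1).float().view(-1, 2)
    dist = torch.cdist(coords, coords)                        # (N, N) grid distances
    targets = torch.exp(-(dist ** 2) / (2 * sigma ** 2))
    return targets / targets.sum(dim=-1, keepdim=True)        # rows sum to 1


def droppos_loss(patch_tokens, pos_embed, encoder, head,
                 keep_ratio=0.25, pos_keep_ratio=0.25, smooth_targets=None):
    """patch_tokens: (B, N, D) patch embeddings; pos_embed: (1, N, D).

    `encoder` is assumed to return per-token features (B, V, D) and `head`
    to map them to N position logits; both are placeholders here.
    """
    B, N, D = patch_tokens.shape
    device = patch_tokens.device

    # 1) Keep only a subset of patches visible to avoid trivial solutions.
    num_vis = max(1, int(N * keep_ratio))
    vis_idx = torch.rand(B, N, device=device).argsort(dim=1)[:, :num_vis]          # (B, V)
    vis_tokens = torch.gather(patch_tokens, 1, vis_idx.unsqueeze(-1).expand(-1, -1, D))

    # 2) Drop positional embeddings for most visible patches (a few kept as anchors).
    num_pos_kept = int(num_vis * pos_keep_ratio)
    pos_mask = torch.zeros(B, num_vis, device=device)
    pos_mask[:, :num_pos_kept] = 1.0
    pos_mask = pos_mask[:, torch.randperm(num_vis, device=device)].unsqueeze(-1)
    vis_pos = torch.gather(pos_embed.expand(B, -1, -1), 1,
                           vis_idx.unsqueeze(-1).expand(-1, -1, D))
    x = vis_tokens + pos_mask * vis_pos           # dropped patches get no pos embed

    # 3) Classify the true position of every patch whose position was dropped.
    logits = head(encoder(x))                     # (B, V, N) scores over all N positions
    dropped = (pos_mask.squeeze(-1) == 0)         # (B, V) loss only on dropped positions
    if smooth_targets is not None:
        target = smooth_targets.to(device)[vis_idx]               # (B, V, N) soft labels
        loss = -(target * F.log_softmax(logits, dim=-1)).sum(-1)
    else:
        loss = F.cross_entropy(logits.transpose(1, 2), vis_idx, reduction="none")
    return (loss * dropped).sum() / dropped.sum().clamp(min=1)
```

The attentive reconstruction strategy mentioned in the abstract would further weight this loss by how visually distinctive each patch is; that weighting is omitted here for brevity.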