Abstract: Contrastive learning has been successfully applied to learn numerical vector
representations of various forms of data, such as text and images. The learned
encoders exhibit versatile transfer capabilities to many downstream tasks, and
representation-based search is highly efficient while achieving state-of-the-art
performance.
performance. Previous researches demonstrated that learning high-quality
representations requires a large number of negatives in contrastive loss. In
practice, the technique of in-batch negative is used, where for each example in
a batch, other batch examples' positives will be taken as its negatives,
avoiding encoding extra negatives. This, however, still conditions each
example's loss on all batch examples and requires fitting the entire large
batch into GPU memory.
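
To make the in-batch negative trick concrete, the sketch below shows one common implementation, assuming paired query/passage embeddings and an InfoNCE-style cross-entropy over a similarity matrix; the function name and temperature value are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(q_reps: torch.Tensor,
                           p_reps: torch.Tensor,
                           temperature: float = 0.05) -> torch.Tensor:
    """Contrastive loss with in-batch negatives.

    q_reps: [B, d] query embeddings; p_reps: [B, d] positive embeddings.
    For query i, p_reps[i] is its positive and p_reps[j != i] act as negatives,
    so no extra negative examples need to be encoded.
    """
    scores = q_reps @ p_reps.T / temperature                      # [B, B] similarity matrix
    labels = torch.arange(q_reps.size(0), device=q_reps.device)   # positives lie on the diagonal
    return F.cross_entropy(scores, labels)
```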
This paper introduces a re-computation technique that decouples backpropagation
between the contrastive loss and the encoder, removing the encoder's backward-pass
data dependency along the batch dimension. As a result, gradients can be
computed for one subset of the batch at a time, leading to almost constant
peak GPU memory usage across batches of different sizes.
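
A minimal sketch of this idea is given below, under the assumption that the contrastive loss operates on the full matrix of batch representations (e.g., a loss like the one above). Representations for the whole batch are first computed without building autograd graphs, the gradient of the loss with respect to each representation is cached from a single loss backward pass, and each sub-batch is then re-encoded with gradients enabled so the cached gradients can be propagated into the encoder one chunk at a time. Function and variable names here are illustrative, not the paper's API.

```python
import torch

def cached_gradient_step(encoder, batch, loss_fn, chunk_size):
    """One training step whose peak memory scales with chunk_size, not the full batch size.

    The optimizer's zero_grad()/step() are assumed to happen outside this function;
    stochastic layers (e.g., dropout) would need consistent random state between
    the two encoder passes for exact re-computation.
    """
    chunks = batch.split(chunk_size)

    # Step 1: forward the whole batch without autograd graphs to obtain representations.
    with torch.no_grad():
        reps = torch.cat([encoder(c) for c in chunks])

    # Step 2: compute the contrastive loss on detached representations and cache
    # the gradient of the loss with respect to each representation.
    reps = reps.detach().requires_grad_()
    loss = loss_fn(reps)
    loss.backward()
    rep_grads = reps.grad.split(chunk_size)

    # Step 3: re-encode one chunk at a time with autograd enabled and back-propagate
    # the cached representation gradients into the encoder parameters (gradients accumulate).
    for c, g in zip(chunks, rep_grads):
        sub_reps = encoder(c)
        sub_reps.backward(gradient=g)

    return loss.detach()
```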