Self-supervised Training Sample Difficulty Balancing for Local
Descriptor Learning
- URL: http://arxiv.org/abs/2303.06124v1
- Date: Fri, 10 Mar 2023 18:37:43 GMT
- Authors: Jiahan Zhang and Dayong Tian
- Abstract summary: In the case of an imbalance between positive and negative samples, hard negative mining strategies have been shown to help models learn more subtle differences.
However, if overly strict mining strategies are applied to the dataset, there is a risk of introducing false negative samples.
In this paper, we investigate how to trade off the difficulty of the mined samples in order to obtain and exploit high-quality negative samples.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the case of an imbalance between positive and negative samples, hard
negative mining strategies have been shown to help models learn more subtle
differences between positive and negative samples, thus improving recognition
performance. However, if overly strict mining strategies are applied to the
dataset, there is a risk of introducing false negative samples. Meanwhile,
the implementation of the mining strategy disrupts the difficulty distribution
of samples in the real dataset, which may cause the model to over-fit these
difficult samples. Therefore, in this paper, we investigate how to trade off
the difficulty of the mined samples in order to obtain and exploit high-quality
negative samples, and try to solve the problem in terms of both the loss
function and the training strategy. The proposed balance loss provides an
effective criterion for judging the quality of negative samples by
incorporating a self-supervised component into the loss function, and uses a
dynamic gradient modulation strategy to apply finer gradient adjustments to
samples of different difficulty. The proposed annealing training strategy then
constrains the difficulty of the samples drawn from negative sample mining to
provide data sources with different difficulty distributions for the loss
function, and uses samples of decreasing difficulty to train the model.
Extensive experiments show that our new descriptors outperform previous
state-of-the-art descriptors on patch verification, matching, and retrieval
tasks.
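The abstract describes two mechanisms: an annealing schedule that constrains mined negatives to decreasing difficulty over training, and a loss that modulates per-sample gradients by difficulty. A minimal PyTorch sketch of these ideas is given below, assuming a batch-wise descriptor distance matrix whose diagonal holds the positive pairs; the function names, the rank-based annealing schedule, and the sigmoid weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def mine_negatives_annealed(dist, epoch, num_epochs):
    """Pick one negative distance per anchor from a (B, B) distance matrix.

    Early epochs select the hardest negative (smallest distance, rank 0);
    later epochs select progressively easier ones (larger rank). The linear
    rank schedule is an illustrative assumption.
    """
    B = dist.size(0)
    # Mask the diagonal (positive pairs) so they cannot be chosen as negatives.
    neg = dist + torch.eye(B, device=dist.device) * 1e6
    max_rank = max(1, B // 4)  # hypothetical cap on how "easy" mining gets
    rank = int(max_rank * epoch / max(1, num_epochs - 1))
    sorted_neg, _ = torch.sort(neg, dim=1)  # ascending: hardest first
    return sorted_neg[:, rank]

def balanced_triplet_loss(pos_dist, neg_dist, margin=1.0, tau=0.5):
    """Triplet margin loss with a difficulty-based weight per sample.

    The sigmoid weight down-weights extremely hard negatives (neg_dist far
    below pos_dist), which are the likeliest false negatives -- a simple
    stand-in for the paper's dynamic gradient modulation.
    """
    weight = torch.sigmoid((neg_dist - pos_dist) / tau)
    per_sample = torch.clamp(margin + pos_dist - neg_dist, min=0.0)
    return (weight * per_sample).mean()
```

In use, `dist` would be the pairwise distances between anchor and candidate patch descriptors in a batch; calling `mine_negatives_annealed` with increasing `epoch` yields negatives of non-increasing hardness, feeding easier samples to the loss as training proceeds.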