An end-to-end approach for the verification problem: learning the right
distance
- URL: http://arxiv.org/abs/2002.09469v4
- Date: Fri, 14 Aug 2020 16:20:28 GMT
- Title: An end-to-end approach for the verification problem: learning the right
distance
- Authors: Joao Monteiro, Isabela Albuquerque, Jahangir Alam, R Devon Hjelm,
Tiago Falk
- Abstract summary: We augment the metric learning setting by introducing a parametric pseudo-distance, trained jointly with the encoder.
We first show it approximates a likelihood ratio which can be used for hypothesis tests.
We observe training is much simplified under the proposed approach compared to metric learning with actual distances.
- Score: 15.553424028461885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this contribution, we augment the metric learning setting by introducing a
parametric pseudo-distance, trained jointly with the encoder. Several
interpretations are thus drawn for the learned distance-like model's output. We
first show it approximates a likelihood ratio which can be used for hypothesis
tests, and that it further induces a large divergence across the joint
distributions of pairs of examples from the same and from different classes.
Evaluation is performed under the verification setting consisting of
determining whether sets of examples belong to the same class, even if such
classes are novel and were never presented to the model during training.
Empirical evaluation shows that the proposed method defines an end-to-end
approach for the verification problem, able to attain better performance than simple scorers
such as those based on cosine similarity and further outperforming widely used
downstream classifiers. We further observe training is much simplified under
the proposed approach compared to metric learning with actual distances,
requiring no complex scheme to harvest pairs of examples.
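The abstract's core idea can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation; all architecture sizes and names are assumptions): an encoder produces embeddings, and a small symmetric MLP head scores pairs. Training both jointly with binary cross-entropy on same-class vs. different-class pairs makes the head's logit approximate a log-likelihood ratio, which is then usable directly as a verification score.

```python
# Hypothetical sketch of a jointly trained parametric pseudo-distance.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

class PseudoDistance(nn.Module):
    # Parametric scorer over embedding pairs; symmetric by construction,
    # but not required to satisfy the axioms of a true metric.
    def __init__(self, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, a, b):
        # Average both orderings to enforce symmetry in the pair score.
        return 0.5 * (self.net(torch.cat([a, b], -1))
                      + self.net(torch.cat([b, a], -1))).squeeze(-1)

encoder, scorer = Encoder(), PseudoDistance()
opt = torch.optim.Adam(list(encoder.parameters())
                       + list(scorer.parameters()), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy training step on random pairs with same/different-class labels;
# with BCE, the trained logit approximates a log-likelihood ratio.
x1, x2 = torch.randn(8, 32), torch.randn(8, 32)
same = torch.randint(0, 2, (8,)).float()
logit = scorer(encoder(x1), encoder(x2))
loss = loss_fn(logit, same)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the scorer's logit is itself the verification score, no separate scoring stage (e.g. cosine similarity over frozen embeddings) is needed at test time.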
Related papers
- Rethinking Distance Metrics for Counterfactual Explainability [53.436414009687]
We investigate a framing for counterfactual generation methods that considers counterfactuals not as independent draws from a region around the reference, but as jointly sampled with the reference from the underlying data distribution.
We derive a distance metric, tailored for counterfactual similarity that can be applied to a broad range of settings.
arXiv Detail & Related papers (2024-10-18T15:06:50Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Exploring new ways: Enforcing representational dissimilarity to learn new features and reduce error consistency [1.7497479054352052]
We show that highly dissimilar intermediate representations result in less correlated output predictions and slightly lower error consistency.
With this, we shed first light on the connection between intermediate representations and their impact on the output predictions.
arXiv Detail & Related papers (2023-07-05T14:28:46Z)
- Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects [92.80955339180119]
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by this, we propose a rational and anthropoid crowd counting framework.
arXiv Detail & Related papers (2022-12-02T07:00:53Z)
- A Maximum Log-Likelihood Method for Imbalanced Few-Shot Learning Tasks [3.2895195535353308]
We propose a new maximum log-likelihood metric for few-shot architectures.
We demonstrate that the proposed metric achieves superior accuracy relative to conventional similarity metrics.
We also show that our algorithm achieves state-of-the-art transductive few-shot performance when the evaluation data is imbalanced.
arXiv Detail & Related papers (2022-11-26T21:31:00Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- An Empirical Comparison of Instance Attribution Methods for NLP [62.63504976810927]
We evaluate the degree to which different instance attribution methods agree with respect to the importance of training samples.
We find that simple retrieval methods yield training instances that differ from those identified via gradient-based methods.
arXiv Detail & Related papers (2021-04-09T01:03:17Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- On Contrastive Learning for Likelihood-free Inference [20.49671736540948]
Likelihood-free methods perform parameter inference in simulator models where evaluating the likelihood is intractable.
One class of methods for this likelihood-free problem uses a classifier to distinguish between pairs of parameter-observation samples.
Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a particular recent variant allows for the use of flexible neural density estimators.
arXiv Detail & Related papers (2020-02-10T13:14:01Z)
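The classifier-based approach mentioned in the last entry above rests on a well-known density-ratio trick, which can be sketched as follows. This is an illustrative toy (the simulator, feature set, and all names are assumptions, not drawn from the cited paper): a classifier is trained to distinguish joint parameter-observation pairs from pairs with shuffled parameters, and its logit then estimates $\log p(x\mid\theta)/p(x)$, i.e. the likelihood up to a $\theta$-independent term.

```python
# Toy density-ratio estimation via a binary classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(theta):
    # Toy simulator: x ~ N(theta, 1). Stand-in for an intractable model.
    return theta + rng.standard_normal(theta.shape)

theta = rng.uniform(-3, 3, size=(2000, 1))   # draws from the prior
x = simulate(theta)                          # joint samples (theta, x)
theta_shuf = rng.permutation(theta)          # marginal pairs (theta', x)

def feats(t, x):
    # Quadratic features: for this Gaussian toy, the true log-ratio is a
    # linear combination of t, x, t*x, t**2, x**2.
    return np.hstack([t, x, t * x, t ** 2, x ** 2])

X = np.vstack([feats(theta, x), feats(theta_shuf, x)])
y = np.concatenate([np.ones(len(theta)), np.zeros(len(theta))])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# The classifier logit estimates log p(x|theta) / p(x).
log_ratio = clf.decision_function(feats(theta, x))
```

Matched pairs should receive higher estimated log-ratios on average than shuffled pairs, which is what makes the logit usable for parameter inference.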
This list is automatically generated from the titles and abstracts of the papers in this site.