Adaptive neighborhood Metric learning
- URL: http://arxiv.org/abs/2201.08314v1
- Date: Thu, 20 Jan 2022 17:26:37 GMT
- Title: Adaptive neighborhood Metric learning
- Authors: Kun Song, Junwei Han, Gong Cheng, Jiwen Lu, Feiping Nie
- Abstract summary: We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML).
ANML can be used to learn both the linear and deep embeddings.
The \emph{log-exp mean function} proposed in our method gives a new perspective from which to review deep metric learning methods.
- Score: 184.95321334661898
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we reveal that metric learning suffers from a serious
inseparability problem in the absence of informative sample mining. Since inseparable
samples are often mixed with hard samples, the informative sample mining strategies
currently used to handle the inseparability problem may introduce side effects, such
as instability of the objective function. To alleviate this problem, we propose a
novel distance metric learning algorithm, named adaptive neighborhood metric learning
(ANML). In ANML, we design two thresholds to adaptively identify the inseparable
similar and dissimilar samples during training, so that inseparable-sample removal
and metric parameter learning are carried out in the same procedure. Because the
resulting ANML objective is non-continuous, we develop a function, named the
\emph{log-exp mean function}, to construct a continuous surrogate formulation that
can be solved efficiently by gradient descent. Like the Triplet loss, ANML can be
used to learn both linear and deep embeddings. By analyzing the proposed method, we
find that it has some interesting properties. For example, when ANML is used to learn
a linear embedding, well-known metric learning algorithms such as large margin
nearest neighbor (LMNN) and neighbourhood components analysis (NCA) become special
cases of ANML under particular parameter settings. When it is used to learn deep
features, state-of-the-art deep metric learning losses such as the Triplet loss, the
Lifted Structure loss, and the Multi-Similarity loss likewise become special cases of
ANML. Furthermore, the \emph{log-exp mean function} proposed in our method offers a
new perspective from which to review deep metric learning methods such as Proxy-NCA
and the N-pairs loss. Finally, promising experimental results demonstrate the
effectiveness of the proposed method.
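To make the surrogate idea concrete, the following PyTorch snippet is a hypothetical illustration, not the authors' implementation: `log_exp_mean`, the hinge form, and the threshold placement are all assumptions made for readability.

```python
import torch

def log_exp_mean(x, lam):
    # Smooth "log-exp mean" of a 1-D tensor x:
    #   (1 / lam) * log(mean(exp(lam * x)))
    # lam -> 0 recovers the arithmetic mean; lam -> +inf approaches max(x),
    # so a hard, non-continuous selection of samples can be relaxed into a
    # differentiable quantity solvable by gradient descent.
    log_n = torch.log(torch.tensor(float(x.numel()), dtype=x.dtype))
    return (torch.logsumexp(lam * x, dim=0) - log_n) / lam

def anml_style_loss(d_pos, d_neg, t_sim, t_dis, lam=5.0):
    # Hypothetical reading of the two-threshold idea: similar pairs whose
    # distance exceeds t_sim and dissimilar pairs whose distance falls
    # below t_dis incur a hinge penalty, aggregated by the log-exp mean
    # instead of a hard removal of inseparable pairs.
    pos = log_exp_mean(torch.relu(d_pos - t_sim), lam)
    neg = log_exp_mean(torch.relu(t_dis - d_neg), lam)
    return pos + neg
```

Under this reading, a small lam averages over all pairs while a large lam focuses on the hardest ones, which is loosely consistent with the abstract's claim that several classical and deep metric learning losses arise as special cases under different parameter settings.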
Related papers
- Robust Analysis of Multi-Task Learning Efficiency: New Benchmarks on Light-Weighed Backbones and Effective Measurement of Multi-Task Learning Challenges by Feature Disentanglement [69.51496713076253]
In this paper, we focus on the efficiency aspects of existing MTL methods.
We first carry out large-scale experiments on these methods with smaller backbones, using the MetaGraspNet dataset as a new test ground.
We also propose the Feature Disentanglement measure as a novel and efficient identifier of the challenges in MTL.
arXiv Detail & Related papers (2024-02-05T22:15:55Z)
- Querying Easily Flip-flopped Samples for Deep Active Learning
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective strategy is to select samples based on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the \emph{least disagree metric} (LDM), defined as the smallest probability of disagreement of the predicted label.
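The blurb defines LDM but not its estimator; as a hedged illustration only, one generic way to approximate a disagreement probability is Monte-Carlo weight perturbation. Everything below (names, the Gaussian perturbation scheme, `n_draws`) is an assumption for illustration, not the paper's procedure.

```python
import copy
import torch

@torch.no_grad()
def disagreement_rate(model, x, sigma, n_draws=32):
    # Hypothetical sketch: perturb the weights with Gaussian noise of scale
    # sigma and count how often the predicted label flips. Samples whose
    # predictions flip under small perturbations are "easily flip-flopped",
    # i.e. the model is least committed to their labels.
    base = model(x).argmax(dim=-1)
    flips = 0
    for _ in range(n_draws):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
        flips += int((noisy(x).argmax(dim=-1) != base).any())
    return flips / n_draws
```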
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
- On Training Implicit Meta-Learning With Applications to Inductive Weighing in Consistency Regularization [0.0]
Implicit meta-learning (IML) requires computing second-order gradients, particularly the Hessian.
Various approximations of the Hessian have been proposed, but a systematic comparison of their compute cost, stability, generalization of the solutions found, and estimation accuracy has largely been overlooked.
We show how a "Confidence Network" trained to extract domain-specific features can learn to up-weight useful images and down-weight out-of-distribution samples.
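For context on why the Hessian matters here: it is rarely materialized in full, and the approximations being compared are typically built on Hessian-vector products. The snippet below is a generic double-backprop sketch, not code from the paper.

```python
import torch

def hessian_vector_product(loss, params, vec):
    # Exact Hessian-vector product via double backprop: the building block
    # behind common IML approximations of the inverse Hessian (conjugate
    # gradient, Neumann series, identity approximation).
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)
```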
arXiv Detail & Related papers (2023-10-28T15:50:03Z)
- An Adaptive Plug-and-Play Network for Few-Shot Learning [12.023266104119289]
Few-shot learning requires a model to classify new samples after learning from only a few samples.
Deep networks and complex metrics tend to induce overfitting, making it difficult to further improve the performance.
We propose a plug-and-play model-adaptive resizer (MAR) and an adaptive similarity metric (ASM) that require no additional losses.
arXiv Detail & Related papers (2023-02-18T13:25:04Z)
- Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z)
- Exploring Adversarial Robustness of Deep Metric Learning [25.12224002984514]
DML uses deep neural architectures to learn semantic embeddings of the input.
We tackle the primary challenge of the metric losses being dependent on the samples in a mini-batch.
Using experiments on three commonly-used DML datasets, we demonstrate 5- to 76-fold increases in adversarial accuracy.
arXiv Detail & Related papers (2021-02-14T23:18:12Z)
- A Rigorous Machine Learning Analysis Pipeline for Biomedical Binary Classification: Application in Pancreatic Cancer Nested Case-control Studies with Implications for Bias Assessments [2.9726886415710276]
We have laid out and assembled a complete, rigorous ML analysis pipeline focused on binary classification.
This 'automated' but customizable pipeline includes a) exploratory analysis, b) data cleaning and transformation, c) feature selection, and d) model training with 9 established ML algorithms.
We apply this pipeline to an epidemiological investigation of established and newly identified risk factors for cancer to evaluate how different sources of bias might be handled by ML algorithms.
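The staged design maps naturally onto a composable pipeline. The scikit-learn snippet below is a generic sketch of stages (b)-(d) only; the transformer, selector, and estimator choices are placeholders, not the authors' configuration.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression

# Placeholder stages: the paper trains 9 established algorithms, so the
# final "model" step would be swapped per candidate and compared.
pipeline = Pipeline([
    ("transform", StandardScaler()),                     # (b) transformation
    ("select", SelectKBest(mutual_info_classif, k=20)),  # (c) feature selection
    ("model", LogisticRegression(max_iter=1000)),        # (d) model training
])
```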
arXiv Detail & Related papers (2020-08-28T19:58:05Z)
- ECML: An Ensemble Cascade Metric Learning Mechanism towards Face Verification [50.137924223702264]
In particular, hierarchical metric learning is executed in a cascaded manner to alleviate underfitting.
Considering the feature distribution characteristics of faces, a robust Mahalanobis metric learning method (RMML) with closed-form solution is additionally proposed.
EC-RMML is superior to state-of-the-art metric learning methods for face verification.
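For context, a Mahalanobis metric measures distance through a learned positive semi-definite matrix M. The generic form is sketched below; RMML's closed-form solution for M is in the paper and not reproduced here.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    # Squared Mahalanobis distance d_M(x, y)^2 = (x - y)^T M (x - y),
    # with M positive semi-definite; M = I recovers squared Euclidean
    # distance, and metric learning methods such as RMML fit M from data.
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(d @ M @ d)
```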
arXiv Detail & Related papers (2020-07-11T08:47:07Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome nuisance-estimation step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)