Multimodal-Aware Weakly Supervised Metric Learning with Self-weighting
Triplet Loss
- URL: http://arxiv.org/abs/2102.02670v1
- Date: Wed, 3 Feb 2021 07:27:05 GMT
- Title: Multimodal-Aware Weakly Supervised Metric Learning with Self-weighting
Triplet Loss
- Authors: Huiyuan Deng, Xiangzhu Meng, Lin Feng
- Abstract summary: We propose a novel weakly supervised metric learning algorithm, named MultimoDal Aware weakly supervised Metric Learning (MDaML).
MDaML partitions the data space into several clusters and allocates local cluster centers and a weight to each sample.
Experiments conducted on 13 datasets validate the superiority of the proposed MDaML.
- Score: 2.010312620798609
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, we have witnessed a surge of interest in learning a
suitable distance metric from weakly supervised data. Most existing methods aim
to pull all similar samples closer while pushing dissimilar ones as far away as
possible. However, when some classes of the dataset exhibit a multimodal
distribution, these goals conflict and can hardly be satisfied simultaneously.
Additionally, to ensure a valid metric, many methods require a
repeated eigenvalue decomposition process, which is expensive and numerically
unstable. Therefore, how to learn an appropriate distance metric from weakly
supervised data remains an open but challenging problem. To address this issue,
in this paper, we propose a novel weakly supervised metric learning algorithm,
named MultimoDal Aware weakly supervised Metric Learning (MDaML). MDaML
partitions the data space into several clusters and allocates local cluster
centers and a weight to each sample. Combining these with a weighted triplet
loss further enhances local separability, encouraging locally dissimilar
samples to stay far from locally similar ones. Meanwhile, MDaML casts the
metric learning problem into an
unconstrained optimization on the SPD manifold, which can be efficiently solved
by Riemannian Conjugate Gradient Descent (RCGD). Extensive experiments
conducted on 13 datasets validate the superiority of the proposed MDaML.
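As a rough illustration of the two ingredients described in the abstract, the sketch below pairs a per-sample weight derived from soft assignments to local cluster centers with a weighted triplet loss under a Mahalanobis metric. It is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the metric is kept positive semi-definite via the factorization M = L Lᵀ rather than by Riemannian optimization on the SPD manifold, the weighting rule is a plausible stand-in, and all names (sample_weights, weighted_triplet_loss, etc.) are hypothetical.
```python
import numpy as np

def sample_weights(X, centers, temperature=1.0):
    """Per-sample weight from soft assignment to local cluster centers.

    Assumption: a sample's weight is its confidence of belonging to the
    best-matching local cluster (softmax over negative squared distances).
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return p.max(axis=1)                             # shape (n_samples,)

def mahalanobis_sq(x, y, L):
    """Squared Mahalanobis distance with M = L @ L.T, which is PSD by
    construction, so no eigenvalue decomposition is needed."""
    diff = (x - y) @ L
    return (diff ** 2).sum(axis=-1)

def weighted_triplet_loss(X, triplets, weights, L, margin=1.0):
    """Self-weighting triplet loss: each (anchor, positive, negative) term is
    scaled by the anchor's weight, so samples from different local modes are
    not all pulled toward a single global center."""
    a, p, n = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    d_ap = mahalanobis_sq(X[a], X[p], L)
    d_an = mahalanobis_sq(X[a], X[n], L)
    hinge = np.maximum(0.0, margin + d_ap - d_an)
    return (weights[a] * hinge).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    centers = X[rng.choice(100, size=4, replace=False)]  # stand-in for k-means centers
    w = sample_weights(X, centers)
    triplets = rng.integers(0, 100, size=(50, 3))        # toy (anchor, pos, neg) index triplets
    L = np.eye(5)                                        # factor of the learned metric
    print(weighted_triplet_loss(X, triplets, w, L, margin=1.0))
```
The paper itself optimizes the metric with Riemannian Conjugate Gradient Descent on the SPD manifold; the factorization above is only a simple way to keep the sketch unconstrained.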
Related papers
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective strategy is to select samples based on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement of the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
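The mixing step described in this entry can be pictured with a short, generic mixup-style sketch; the interpolation rule and the choice to keep the minority side dominant are assumptions for illustration, not the cited paper's exact procedure.
```python
import numpy as np

def mix_minority_majority(X_min, X_maj, n_new, alpha=0.5, rng=None):
    """Create synthetic minority-class samples as convex combinations of a
    minority sample and a majority sample. The mixing coefficient is drawn
    from Beta(alpha, alpha) and biased toward the minority sample so the
    synthetic point can plausibly keep the minority label (an assumption)."""
    rng = rng or np.random.default_rng()
    i = rng.integers(0, len(X_min), size=n_new)
    j = rng.integers(0, len(X_maj), size=n_new)
    lam = rng.beta(alpha, alpha, size=(n_new, 1))
    lam = np.maximum(lam, 1.0 - lam)   # keep the minority side dominant
    return lam * X_min[i] + (1.0 - lam) * X_maj[j]
```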
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- Instance-Variant Loss with Gaussian RBF Kernel for 3D Cross-modal Retriveal [52.41252219453429]
Existing methods treat all instances equally, applying the same penalty strength to instances with varying degrees of difficulty.
This can result in ambiguous convergence or local optima, severely compromising the separability of the feature space.
We propose an Instance-Variant loss to assign different penalty strengths to different instances, improving the space separability.
arXiv Detail & Related papers (2023-05-07T10:12:14Z)
- DAS: Densely-Anchored Sampling for Deep Metric Learning [43.81322638018864]
We propose a Densely-Anchored Sampling (DAS) scheme that exploits the anchor's nearby embedding space to densely produce embeddings without data points.
Our method is effortlessly integrated into existing DML frameworks and improves them without bells and whistles.
arXiv Detail & Related papers (2022-07-30T02:07:46Z)
- Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML).
ANML can be used to learn both linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective on deep metric learning methods.
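The "log-exp mean" can be read as a smooth, temperature-controlled average of distances; the generic version below (which may differ from ANML's exact definition) interpolates between the arithmetic mean and the maximum.
```python
import numpy as np

def log_exp_mean(values, lam=1.0):
    """Smooth mean of `values` (e.g., distances to the samples in a neighborhood):
        (1 / lam) * log( mean( exp(lam * values) ) )
    As lam -> 0 this tends to the arithmetic mean; for large positive lam it
    approaches max(values), so it acts as a soft maximum over the neighborhood.
    Computed via log-sum-exp for numerical stability."""
    v = lam * np.asarray(values, dtype=float)
    m = v.max()
    return (m + np.log(np.exp(v - m).mean())) / lam
```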
arXiv Detail & Related papers (2022-01-20T17:26:37Z)
- Exploring Adversarial Robustness of Deep Metric Learning [25.12224002984514]
DML uses deep neural architectures to learn semantic embeddings of the input.
We tackle the primary challenge of the metric losses being dependent on the samples in a mini-batch.
Using experiments on three commonly used DML datasets, we demonstrate 5- to 76-fold increases in adversarial accuracy.
arXiv Detail & Related papers (2021-02-14T23:18:12Z)
- Multi-level Distance Regularization for Deep Metric Learning [20.178765779788492]
We propose a novel distance-based regularization method for deep metric learning called Multi-level Distance Regularization (MDR).
MDR explicitly disturbs a learning procedure by regularizing pairwise distances between embedding vectors into multiple levels.
By easily adopting our MDR, the previous approaches can be improved in performance and generalization ability.
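One way to read "regularizing pairwise distances into multiple levels" is to pull every scale-normalized pairwise distance toward the closest of a few reference levels; the sketch below implements that reading. The level values and the normalization are assumptions, not MDR's published formulation.
```python
import numpy as np

def multi_level_distance_penalty(embeddings, levels=(0.6, 1.0, 1.4)):
    """Average gap between each pairwise distance and its nearest level.

    Distances are divided by their batch mean so the levels are scale-free;
    adding this penalty to a metric-learning loss discourages all distances
    from collapsing to a single value (illustrative reading only)."""
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(embeddings), k=1)     # unique pairs only
    d = d[iu] / (d[iu].mean() + 1e-12)
    gaps = np.abs(d[:, None] - np.asarray(levels)[None, :])
    return gaps.min(axis=1).mean()
```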
arXiv Detail & Related papers (2021-02-08T14:16:07Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
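The per-sample weighting can be sketched generically as a softmax over the per-sample losses within the mini-batch, so that harder examples shift the effective objective away from the uniform average; this is a simplified illustration in the spirit of the summary, not the exact ABSGD update.
```python
import numpy as np

def self_weighted_batch_loss(per_sample_losses, lam=1.0):
    """Softmax-weighted average of per-sample losses instead of the uniform
    mini-batch mean. Positive lam emphasizes large-loss (hard / minority)
    samples; a negative lam would instead downweight them, the usual choice
    under label noise. Simplified illustration of loss-based self-weighting."""
    losses = np.asarray(per_sample_losses, dtype=float)
    z = losses / lam
    z -= z.max()                 # numerical stability
    w = np.exp(z)
    w /= w.sum()
    return (w * losses).sum()
```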
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
- Deep Metric Learning Meets Deep Clustering: An Novel Unsupervised Approach for Feature Embedding [32.8693763689033]
Unsupervised Deep Distance Metric Learning (UDML) aims to learn sample similarities in the embedding space from an unlabeled dataset.
Traditional UDML methods usually use a triplet or pairwise loss, which requires mining positive and negative samples.
This is, however, challenging in an unsupervised setting as the label information is not available.
We propose a new UDML method that overcomes that challenge.
arXiv Detail & Related papers (2020-09-09T04:02:04Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.