Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer
Proxies
- URL: http://arxiv.org/abs/2010.13636v1
- Date: Mon, 26 Oct 2020 14:52:42 GMT
- Title: Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer
Proxies
- Authors: Yuehua Zhu, Muli Yang, Cheng Deng, and Wei Liu
- Abstract summary: We propose a Proxy-based deep Graph Metric Learning approach from the perspective of graph classification.
Multiple global proxies are leveraged to collectively approximate the original data points for each class.
We design a novel reverse label propagation algorithm, by which the neighbor relationships are adjusted according to ground-truth labels.
- Score: 65.92826041406802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep metric learning plays a key role in various machine learning tasks. Most
of the previous works have been confined to sampling from a mini-batch, which
cannot precisely characterize the global geometry of the embedding space.
Although researchers have developed proxy- and classification-based methods to
tackle the sampling issue, those methods inevitably incur a redundant
computational cost. In this paper, we propose a novel Proxy-based deep Graph
Metric Learning (ProxyGML) approach from the perspective of graph
classification, which uses fewer proxies yet achieves better comprehensive
performance. Specifically, multiple global proxies are leveraged to
collectively approximate the original data points for each class. To
efficiently capture local neighbor relationships, a small number of such
proxies are adaptively selected to construct similarity subgraphs between these
proxies and each data point. Further, we design a novel reverse label
propagation algorithm, by which the neighbor relationships are adjusted
according to ground-truth labels, so that a discriminative metric space can be
learned during the process of subgraph classification. Extensive experiments
carried out on the widely used CUB-200-2011, Cars196, and Stanford Online Products
datasets demonstrate the superiority of the proposed ProxyGML over the
state-of-the-art methods in terms of both effectiveness and efficiency. The
source code is publicly available at https://github.com/YuehuaZhu/ProxyGML.
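To make the pipeline concrete, here is a minimal PyTorch sketch of the idea, assuming trainable proxies (several per class), cosine similarity, top-k proxy selection to form a sparse similarity subgraph, and a plain cross-entropy over propagated class scores standing in for reverse label propagation. This is a paraphrase with invented names, not the authors' code; the official implementation lives in the repository above.

```python
import torch
import torch.nn.functional as F

class ProxyGMLSketch(torch.nn.Module):
    """Toy proxy-based graph metric learning loss (not the official code)."""

    def __init__(self, dim, num_classes, proxies_per_class=4, k=8):
        super().__init__()
        # Multiple trainable global proxies per class.
        self.proxies = torch.nn.Parameter(
            torch.randn(num_classes * proxies_per_class, dim))
        # proxy_labels[j] = class that proxy j belongs to.
        self.register_buffer(
            "proxy_labels",
            torch.arange(num_classes).repeat_interleave(proxies_per_class))
        self.num_classes = num_classes
        self.k = k

    def forward(self, embeddings, labels):
        x = F.normalize(embeddings, dim=1)
        p = F.normalize(self.proxies, dim=1)
        sim = x @ p.t()                                  # (B, P) cosine similarities
        # Adaptively keep only the k most similar proxies per sample,
        # yielding a sparse sample-to-proxy similarity subgraph.
        _, topk_idx = sim.topk(self.k, dim=1)
        mask = torch.zeros_like(sim).scatter_(1, topk_idx, 1.0)
        sparse_sim = sim * mask
        # Aggregate proxy similarities into per-class scores
        # ("subgraph classification" in spirit).
        one_hot = F.one_hot(self.proxy_labels, self.num_classes).float()
        class_scores = sparse_sim @ one_hot              # (B, C)
        # Stand-in for reverse label propagation: supervise the
        # propagated class scores with the ground-truth labels.
        return F.cross_entropy(class_scores, labels)

criterion = ProxyGMLSketch(dim=128, num_classes=100)
loss = criterion(torch.randn(32, 128), torch.randint(0, 100, (32,)))
loss.backward()
```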
Related papers
- Federated Graph Learning with Structure Proxy Alignment [43.13100155569234]
Federated Graph Learning (FGL) aims to learn graph learning models over graph data distributed across multiple data owners.
We propose FedSpray, a novel FGL framework that learns local class-wise structure proxies in the latent space.
Our goal is to obtain aligned structure proxies that can serve as reliable, unbiased neighboring information for node classification (a toy sketch follows the citation below).
arXiv Detail & Related papers (2024-08-18T07:32:54Z)
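As a rough illustration only: one simple way to realize class-wise structure proxies is to take per-client class-mean embeddings and average them on the server. The paper's actual alignment procedure is more involved; the function names and the plain averaging below are assumptions made for the sketch.

```python
import torch

def local_class_proxies(node_emb, labels, num_classes):
    """Per-client class-wise structure proxies, simplified to class means."""
    proxies = torch.zeros(num_classes, node_emb.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():                     # a client may lack some classes
            proxies[c] = node_emb[mask].mean(dim=0)
    return proxies

def align_proxies(client_proxies):
    """Server-side alignment, reduced here to plain averaging."""
    return torch.stack(client_proxies).mean(dim=0)

# Two toy clients, 64-dim node embeddings, 5 classes.
clients = [(torch.randn(30, 64), torch.randint(0, 5, (30,))) for _ in range(2)]
global_proxies = align_proxies(
    [local_class_proxies(emb, y, num_classes=5) for emb, y in clients])
print(global_proxies.shape)  # torch.Size([5, 64])
```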
- FastGAS: Fast Graph-based Annotation Selection for In-Context Learning [53.17606395275021]
In-context learning (ICL) empowers large language models (LLMs) to tackle new tasks by using a series of training instances as prompts.
Existing methods select a subset of unlabeled examples for annotation.
We propose FastGAS, a graph-based selection method designed to efficiently identify high-quality instances (a toy sketch follows the citation below).
arXiv Detail & Related papers (2024-06-06T04:05:54Z)
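The following toy sketch shows one graph-based way to pick instances for annotation: build a kNN similarity graph over candidate embeddings and greedily select nodes that cover the most uncovered neighbors. This is a simplification; FastGAS's actual partition-based algorithm differs, and all names here are invented for illustration.

```python
import torch
import torch.nn.functional as F

def select_for_annotation(embeddings, budget, k=10):
    """Greedy selection on a kNN similarity graph: repeatedly pick the
    instance that covers the most not-yet-covered neighbors."""
    x = F.normalize(embeddings, dim=1)
    sim = x @ x.t()
    sim.fill_diagonal_(-1.0)                 # no self-edges
    knn = sim.topk(k, dim=1).indices         # (N, k) neighbor indices
    covered = torch.zeros(len(x), dtype=torch.bool)
    chosen = []
    for _ in range(budget):
        gain = (~covered[knn]).sum(dim=1)    # uncovered neighbors per node
        gain[covered] = -1                   # never re-pick covered nodes
        best = int(gain.argmax())
        chosen.append(best)
        covered[best] = True
        covered[knn[best]] = True
    return chosen

print(select_for_annotation(torch.randn(100, 32), budget=5))
```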
- $\mathcal{G}^2Pxy$: Generative Open-Set Node Classification on Graphs with Proxy Unknowns [35.976426549671075]
We propose a novel generative open-set node classification method, i.e., $\mathcal{G}^2Pxy$.
It follows a stricter inductive learning setting where no information about unknown classes is available during training and validation.
$\mathcal{G}^2Pxy$ achieves superior effectiveness for unknown-class detection and known-class classification (a toy sketch of proxy unknowns follows below).
arXiv Detail & Related papers (2023-08-10T09:42:20Z)
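A hedged sketch of the proxy-unknown idea: synthesize "unknown" embeddings by mixing pairs of samples from different known classes and reserve an extra logit for them. The generation mechanism in $\mathcal{G}^2Pxy$ is more sophisticated; treat everything below as an assumption-laden toy.

```python
import torch
import torch.nn.functional as F

def proxy_unknowns(emb, labels, num=16):
    """Synthesize proxy 'unknown' embeddings by mixing cross-class pairs."""
    i = torch.randint(0, len(emb), (num,))
    j = torch.randint(0, len(emb), (num,))
    keep = labels[i] != labels[j]            # only pairs from different classes
    lam = torch.rand(int(keep.sum()), 1)
    return lam * emb[i[keep]] + (1 - lam) * emb[j[keep]]

# Known classes 0..C-1; one extra logit is reserved for "unknown".
C = 5
emb = torch.randn(64, 32)
labels = torch.randint(0, C, (64,))
clf = torch.nn.Linear(32, C + 1)
unknowns = proxy_unknowns(emb, labels)
logits = clf(torch.cat([emb, unknowns]))
targets = torch.cat([labels, torch.full((len(unknowns),), C)])
loss = F.cross_entropy(logits, targets)
loss.backward()
```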
- Robust Calibrate Proxy Loss for Deep Metric Learning [6.784952050036532]
We propose a Calibrate Proxy structure, which uses real sample information to improve the similarity calculation in proxy-based losses.
We show that our approach effectively improves the performance of commonly used proxy-based losses on both regular and noisy datasets (a toy calibration sketch follows below).
arXiv Detail & Related papers (2023-04-06T02:43:10Z)
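One plausible reading of calibrating proxies with real samples, sketched below: blend each class proxy with the real in-batch class mean before computing proxy logits. The blending weight `alpha`, the scale, and the overall form are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def calibrated_proxy_loss(emb, labels, proxies, alpha=0.5, scale=16.0):
    """Proxy loss where each class proxy is pulled toward the real
    in-batch class mean before similarities are computed."""
    emb = F.normalize(emb, dim=1)
    calibrated = proxies.clone()
    for c in labels.unique():
        batch_mean = emb[labels == c].mean(dim=0)
        calibrated[c] = alpha * proxies[c] + (1 - alpha) * batch_mean
    calibrated = F.normalize(calibrated, dim=1)
    logits = scale * emb @ calibrated.t()    # cosine logits vs. calibrated proxies
    return F.cross_entropy(logits, labels)

proxies = torch.nn.Parameter(torch.randn(10, 64))
loss = calibrated_proxy_loss(
    torch.randn(32, 64), torch.randint(0, 10, (32,)), proxies)
loss.backward()
```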
- Global Proxy-based Hard Mining for Visual Place Recognition [3.6739949215165164]
We introduce a new technique that performs global hard mini-batch sampling based on proxies.
To do so, we add a new end-to-end trainable branch to the network, which generates efficient place descriptors.
Our method can be combined with all existing pairwise and triplet loss functions at negligible additional memory and computation cost (a toy mining sketch follows below).
arXiv Detail & Related papers (2023-02-28T00:43:13Z)
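A toy version of global proxy-based hard mining: keep one compact proxy descriptor per place and, given an anchor place, build the batch from the places whose proxies are most similar. The trainable descriptor branch from the paper is omitted; names and numbers are illustrative.

```python
import torch
import torch.nn.functional as F

def hard_batch_places(proxy_bank, anchor_place, places_per_batch=8):
    """Pick the places whose proxies are most similar to the anchor's proxy;
    those form a globally hard mini-batch for the metric loss."""
    p = F.normalize(proxy_bank, dim=1)
    sim = p @ p[anchor_place]                # similarity of every place to anchor
    sim[anchor_place] = -1.0                 # exclude the anchor itself
    hard = sim.topk(places_per_batch - 1).indices
    return torch.cat([torch.tensor([anchor_place]), hard])

# Toy bank: one compact descriptor (proxy) per place.
bank = torch.randn(1000, 128)
print(hard_batch_places(bank, anchor_place=42))
```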
- ProxyMix: Proxy-based Mixup Training with Label Refinery for Source-Free Domain Adaptation [73.14508297140652]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose an effective method named Proxy-based Mixup training with label refinery (ProxyMix).
Experiments on three 2D image and one 3D point cloud object recognition benchmarks demonstrate that ProxyMix yields state-of-the-art performance for source-free UDA tasks (a toy mixup sketch follows below).
arXiv Detail & Related papers (2022-05-29T03:45:00Z)
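A minimal sketch of proxy-based mixup, assuming the classifier's weight rows act as class proxies and pseudo-labels come from the current classifier; the paper's label refinery is reduced here to hard pseudo-labels, which is a deliberate simplification.

```python
import torch
import torch.nn.functional as F

def proxy_mixup_loss(target_feat, classifier, beta=0.3):
    """Mix target features with class proxies (the classifier's weight rows)
    and supervise the mixed inputs with the current pseudo-labels."""
    proxies = classifier.weight.detach()             # (C, D) class proxies
    pseudo = classifier(target_feat).argmax(dim=1)   # hard pseudo-labels
    lam = torch.distributions.Beta(beta, beta).sample((len(target_feat), 1))
    mixed = lam * target_feat + (1 - lam) * proxies[pseudo]
    # Mixing with the proxy of the same pseudo-class keeps the label fixed.
    return F.cross_entropy(classifier(mixed), pseudo)

clf = torch.nn.Linear(64, 10)
loss = proxy_mixup_loss(torch.randn(32, 64), clf)
loss.backward()
```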
- Non-isotropy Regularization for Proxy-based Deep Metric Learning [78.18860829585182]
We propose non-isotropy regularization ($\mathbb{NIR}$) for proxy-based Deep Metric Learning.
This allows us to explicitly induce and optimize for a non-isotropic distribution of samples around each proxy.
Experiments highlight consistent generalization benefits of $\mathbb{NIR}$ while achieving competitive and state-of-the-art performance.
arXiv Detail & Related papers (2022-03-16T11:13:20Z)
- Self-supervised Graph-level Representation Learning with Local and Global Structure [71.45196938842608]
We propose a unified framework called Local-instance and Global-semantic Learning (GraphLoG) for self-supervised whole-graph representation learning.
Besides preserving the local similarities, GraphLoG introduces the hierarchical prototypes to capture the global semantic clusters.
An efficient online expectation-maximization (EM) algorithm is further developed for learning the model (a toy EM step follows below).
arXiv Detail & Related papers (2021-06-08T05:25:38Z)
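The online EM idea can be sketched as follows: an E-step soft-assigns graph embeddings to global prototypes and an M-step refreshes prototypes with a momentum-weighted soft mean. GraphLoG's hierarchical prototypes are flattened to a single level in this toy version, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def online_em_step(prototypes, graph_emb, momentum=0.9, temp=0.1):
    """One online EM step: E-step soft-assigns embeddings to prototypes,
    M-step refreshes prototypes with a momentum-weighted soft mean."""
    z = F.normalize(graph_emb, dim=1)
    p = F.normalize(prototypes, dim=1)
    resp = F.softmax(z @ p.t() / temp, dim=1)            # (N, K) responsibilities
    weighted = resp.t() @ z                              # (K, D) soft class sums
    weighted = weighted / resp.sum(dim=0).unsqueeze(1).clamp(min=1e-6)
    return momentum * prototypes + (1 - momentum) * weighted

prototypes = torch.randn(16, 128)          # K global semantic prototypes
prototypes = online_em_step(prototypes, torch.randn(256, 128))
print(prototypes.shape)                    # torch.Size([16, 128])
```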
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
A separation result further underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Proxy Network for Few Shot Learning [9.529264466445236]
We propose a few-shot learning algorithm, called proxy network, built on a meta-learning architecture.
We conduct experiments on the CUB and mini-ImageNet datasets in 1-shot-5-way and 5-shot-5-way scenarios (a toy proxy classifier follows below).
arXiv Detail & Related papers (2020-09-09T13:28:07Z)
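A prototype-style toy of proxy-based few-shot classification: each class proxy is the mean support embedding, and queries go to the nearest proxy by cosine similarity. This is an illustrative simplification, not the paper's exact architecture or training loop.

```python
import torch
import torch.nn.functional as F

def proxy_classify(support, support_y, query, num_classes):
    """Each class proxy is the mean support embedding; queries are
    assigned to the nearest proxy by cosine similarity."""
    proxies = torch.stack([
        support[support_y == c].mean(dim=0) for c in range(num_classes)])
    sim = F.normalize(query, dim=1) @ F.normalize(proxies, dim=1).t()
    return sim.argmax(dim=1)

# A 5-way 5-shot toy episode with 64-dim embeddings.
support = torch.randn(25, 64)
support_y = torch.arange(5).repeat_interleave(5)
preds = proxy_classify(support, support_y, torch.randn(10, 64), num_classes=5)
print(preds)
```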
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.