Model-Agnostic Graph Regularization for Few-Shot Learning
- URL: http://arxiv.org/abs/2102.07077v1
- Date: Sun, 14 Feb 2021 05:28:13 GMT
- Title: Model-Agnostic Graph Regularization for Few-Shot Learning
- Authors: Ethan Shen, Maria Brbic, Nicholas Monath, Jiaqi Zhai, Manzil Zaheer,
Jure Leskovec
- Abstract summary: We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
- Score: 60.64531995451357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many domains, relationships between categories are encoded in the
knowledge graph. Recently, promising results have been achieved by
incorporating knowledge graphs as side information in hard classification tasks
with severely limited data. However, prior models consist of highly complex
architectures with many sub-components that all seem to impact performance. In
this paper, we present a comprehensive empirical study on graph embedded
few-shot learning. We introduce a graph regularization approach that allows a
deeper understanding of the impact of incorporating graph information between
labels. Our proposed regularization is widely applicable and model-agnostic,
and boosts the performance of any few-shot learning model, including
fine-tuning, metric-based, and optimization-based meta-learning. Our approach
improves the performance of strong base learners by up to 2% on Mini-ImageNet
and 6.7% on ImageNet-FS, outperforming state-of-the-art graph embedded methods.
Additional analyses reveal that graph-regularized models achieve a lower
loss on more difficult tasks, such as those with fewer shots and less
informative support examples.
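The abstract does not spell out the form of the regularizer, but a common model-agnostic way to encode "graph information between labels" is a graph-Laplacian penalty that pulls together the classifier weights of labels adjacent in the knowledge graph. The sketch below is an illustration under that assumption, not the paper's exact formulation; `W` and `A` are hypothetical names for the per-class weight matrix and the label adjacency matrix.

```python
import numpy as np

def graph_laplacian_penalty(W, A):
    """Penalize disagreement between weights of labels linked in a knowledge graph.

    W: (C, d) array, one weight vector per class.
    A: (C, C) symmetric adjacency matrix over the C labels.
    Returns sum_{i<j} A[i, j] * ||W[i] - W[j]||^2, which equals
    tr(W^T L W) for the graph Laplacian L = D - A.
    """
    D = np.diag(A.sum(axis=1))   # degree matrix
    L = D - A                    # combinatorial graph Laplacian
    return float(np.trace(W.T @ L @ W))

# The regularized objective would then be something like
#   loss = task_loss + lam * graph_laplacian_penalty(W, A)
# which any base learner (fine-tuning, metric-based, or
# optimization-based meta-learning) can add to its loss.
```

Because the penalty only touches the loss, not the architecture, it is compatible with any few-shot base learner, which is the sense in which such a regularizer is "model-agnostic".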
Related papers
- Core Knowledge Learning Framework for Graph Adaptation and Scalability Learning [7.239264041183283]
Graph classification faces several hurdles, including adapting to diverse prediction tasks, training across multiple target domains, and handling small-sample prediction scenarios.
By incorporating insights from various types of tasks, our method aims to enhance adaptability, scalability, and generalizability in graph classification.
Experimental results demonstrate significant performance enhancements achieved by our method compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-07-02T02:16:43Z) - Imbalanced Graph Classification with Multi-scale Oversampling Graph Neural Networks [25.12261412297796]
We introduce a novel multi-scale oversampling graph neural network (MOSGNN) that learns expressive minority graph representations.
It achieves this by jointly optimizing subgraph-level, graph-level, and pairwise-graph learning tasks.
Experiments on 16 imbalanced graph datasets show that MOSGNN significantly outperforms five state-of-the-art models.
arXiv Detail & Related papers (2024-05-08T09:16:54Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Sub-Graph Learning for Spatiotemporal Forecasting via Knowledge
Distillation [22.434970343698676]
We present a new framework called KD-SGL to effectively learn the sub-graphs.
We define one global model to learn the overall structure of the graph and multiple local models for each sub-graph.
arXiv Detail & Related papers (2022-11-17T18:02:55Z) - Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z) - InfoGCL: Information-Aware Graph Contrastive Learning [26.683911257080304]
We study how graph information is transformed and transferred during the contrastive learning process.
We propose an information-aware graph contrastive learning framework called InfoGCL.
We show for the first time that all recent graph contrastive learning methods can be unified by our framework.
arXiv Detail & Related papers (2021-10-28T21:10:39Z) - Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
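For intuition on the Schatten p-norm mentioned above: for a matrix it is the p-norm of the vector of singular values, interpolating between the nuclear norm (p=1) and the Frobenius norm (p=2). Tensor versions (e.g. via the t-SVD) generalize this to stacked view graphs; the sketch below shows only the matrix case, which is not the paper's exact operator.

```python
import numpy as np

def schatten_p_norm(M, p):
    """Matrix Schatten p-norm: (sum_i sigma_i^p)^(1/p),
    where sigma_i are the singular values of M.

    p = 1 gives the nuclear norm (a convex low-rank surrogate);
    p = 2 gives the Frobenius norm.
    """
    s = np.linalg.svd(M, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))
```

Minimizing a Schatten p-norm with small p encourages the stacked view-similarity graphs to share low-rank structure, which is how such objectives couple the views.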
arXiv Detail & Related papers (2021-08-15T13:14:28Z) - Weakly-supervised Graph Meta-learning for Few-shot Node Classification [53.36828125138149]
We propose a new graph meta-learning framework -- Graph Hallucination Networks (Meta-GHN).
Based on a new robustness-enhanced episodic training, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data.
Extensive experiments demonstrate the superiority of Meta-GHN over existing graph meta-learning studies.
arXiv Detail & Related papers (2021-06-12T22:22:10Z) - Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients.
A graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.