Explanation-Guided Training for Cross-Domain Few-Shot Classification
- URL: http://arxiv.org/abs/2007.08790v2
- Date: Wed, 9 Dec 2020 09:53:24 GMT
- Title: Explanation-Guided Training for Cross-Domain Few-Shot Classification
- Authors: Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao,
Ngai-Man Cheung, Alexander Binder
- Abstract summary: The cross-domain few-shot classification (CD-FSC) task combines few-shot classification with the requirement to generalize across domains represented by different datasets.
We introduce a novel training approach for existing FSC models.
We show that explanation-guided training effectively improves the model generalization.
- Score: 96.12873073444091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The cross-domain few-shot classification (CD-FSC) task combines few-shot
classification with the requirement to generalize across domains represented by
datasets. This setup faces challenges originating from the limited labeled data
in each class and, additionally, from the domain shift between training and
test sets. In this paper, we introduce a novel training approach for existing
FSC models. It leverages explanation scores, obtained by applying existing
explanation methods to the predictions of FSC models and computed for
intermediate feature maps of the models. Firstly, we tailor the layer-wise
relevance propagation (LRP) method to explain the predictions of FSC models.
Secondly, we develop a model-agnostic explanation-guided training strategy that
dynamically finds and emphasizes the features which are important for the
predictions. Our contribution does not target a novel explanation method but
lies in a novel application of explanations for the training phase. We show
that explanation-guided training effectively improves the model generalization.
We observe improved accuracy for three different FSC models: RelationNet, cross
attention network, and a graph neural network-based formulation, on five
few-shot learning datasets: miniImagenet, CUB, Cars, Places, and Plantae. The
source code is available at https://github.com/SunJiamei/few-shot-lrp-guided
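As a rough illustration of the idea (not the authors' exact implementation, which lives in the repository above), the sketch below uses a gradient-times-input product as a cheap stand-in for LRP relevance on intermediate feature maps and re-weights those maps during episodic training; the function and argument names (`embed`, `classify`, `relevance`) are hypothetical placeholders.
```python
import torch
import torch.nn.functional as F

# Minimal sketch of explanation-guided few-shot training, assuming a generic
# episodic setup. `embed` maps images to feature maps and `classify` turns
# query/support features into class logits; both are placeholders standing in
# for RelationNet, CAN, or GNN heads. Gradient*input is used here as a proxy
# for layer-wise relevance propagation (LRP).

def relevance(logits, feats):
    """Relevance proxy: back-propagate each sample's top logit onto its
    intermediate feature map and keep the positive gradient*input part."""
    top = logits.max(dim=1).values.sum()
    grad, = torch.autograd.grad(top, feats, retain_graph=True)
    return (grad * feats).clamp(min=0)

def explanation_guided_step(embed, classify, optimizer,
                            support, support_y, query, query_y):
    q_feats = embed(query)                          # [B, C, H, W] feature maps
    s_feats = embed(support)
    logits = classify(q_feats, s_feats, support_y)  # standard episode logits
    loss = F.cross_entropy(logits, query_y)

    # Emphasize the features the current prediction relies on, re-classify
    # with the re-weighted maps, and add the resulting auxiliary loss.
    rel = relevance(logits, q_feats).detach()
    weights = 1.0 + rel / (rel.amax(dim=(1, 2, 3), keepdim=True) + 1e-8)
    loss = loss + F.cross_entropy(
        classify(q_feats * weights, s_feats, support_y), query_y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
In the paper itself, relevance is propagated layer by layer with LRP rather than approximated by gradients, but the weighting idea is the same: features with higher relevance contribute more to an additional classification loss, which is what the authors report improves cross-domain generalization.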
Related papers
- High-Performance Few-Shot Segmentation with Foundation Models: An Empirical Study [64.06777376676513]
We develop a few-shot segmentation (FSS) framework based on foundation models.
To be specific, we propose a simple approach to extract implicit knowledge from foundation models to construct coarse correspondence.
Experiments on two widely used datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-10T08:04:11Z)
- Self-Supervised Contrastive Graph Clustering Network via Structural Information Fusion [15.293684479404092]
We propose a novel deep graph clustering method called CGCN.
Our approach introduces contrastive signals and deep structural information into the pre-training process.
Our method has been experimentally validated on multiple real-world graph datasets.
arXiv Detail & Related papers (2024-08-08T09:49:26Z)
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS).
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- Boosting Low-Data Instance Segmentation by Unsupervised Pre-training with Saliency Prompt [103.58323875748427]
This work offers a novel unsupervised pre-training solution for low-data regimes.
Inspired by the recent success of the Prompting technique, we introduce a new pre-training method that boosts QEIS models.
Experimental results show that our method significantly boosts several QEIS models on three datasets.
arXiv Detail & Related papers (2023-02-02T15:49:03Z)
- Harnessing the Power of Explanations for Incremental Training: A LIME-Based Approach [6.244905619201076]
In this work, model explanations are fed back to the feed-forward training to help the model generalize better.
The framework incorporates the custom weighted loss with Elastic Weight Consolidation (EWC) to maintain performance in sequential testing sets.
The proposed custom training procedure results in a consistent enhancement of accuracy ranging from 0.5% to 1.5% throughout all phases of the incremental learning setup.
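For context on the EWC term mentioned above, here is a generic sketch of an EWC-regularized loss (the standard formulation, not that paper's custom weighted variant); `fisher` and `old_params` are assumed to be precomputed per-parameter tensors from the previous task.
```python
import torch

def ewc_loss(model, task_loss, fisher, old_params, lam=1.0):
    """Standard Elastic Weight Consolidation penalty: quadratically anchor
    parameters to their values after the previous task, weighted by a
    diagonal Fisher information estimate from that task."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return task_loss + 0.5 * lam * penalty
```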
arXiv Detail & Related papers (2022-11-02T18:16:17Z)
- Learning What Not to Segment: A New Perspective on Few-Shot Segmentation [63.910211095033596]
Recently few-shot segmentation (FSS) has been extensively developed.
This paper proposes a fresh and straightforward insight to alleviate the problem.
In light of the unique nature of the proposed approach, we also extend it to a more realistic but challenging setting.
arXiv Detail & Related papers (2022-03-15T03:08:27Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Interpretable Time-series Classification on Few-shot Samples [27.05851877375113]
This paper proposes an interpretable neural-based framework, namely Dual Prototypical Shapelet Networks (DPSN), for few-shot time-series classification.
DPSN interprets the model at dual granularity: 1) a global overview using representative time series samples, and 2) local highlights using discriminative shapelets.
We have derived 18 few-shot TSC datasets from public benchmark datasets and evaluated the proposed method by comparing with baselines.
arXiv Detail & Related papers (2020-06-03T03:47:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.