Yet Meta Learning Can Adapt Fast, It Can Also Break Easily
- URL: http://arxiv.org/abs/2009.01672v1
- Date: Wed, 2 Sep 2020 15:03:14 GMT
- Title: Yet Meta Learning Can Adapt Fast, It Can Also Break Easily
- Authors: Han Xu, Yaxin Li, Xiaorui Liu, Hui Liu, Jiliang Tang
- Abstract summary: We study adversarial attacks on meta learning under the few-shot classification problem.
We propose the first attacking algorithm against meta learning under various settings.
- Score: 53.65787902272109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta learning algorithms have been widely applied in many tasks for efficient
learning, such as few-shot image classification and fast reinforcement
learning. During meta training, the meta learner develops a common learning
strategy, or experience, from a variety of learning tasks. Therefore, during
meta test, the meta learner can use the learned strategy to quickly adapt to
new tasks even with a few training samples. However, there is still a dark side
to meta learning in terms of reliability and robustness. In particular, is
meta learning vulnerable to adversarial attacks? In other words, would a
well-trained meta learner use its learned experience to build wrong or
likely useless knowledge if an adversary unnoticeably manipulated the given
training set? Without an understanding of this problem, it is extremely risky
to apply meta learning in safety-critical applications. Thus, in this paper, we
perform an initial study of adversarial attacks on meta learning under the
few-shot classification problem. In particular, we formally define key elements
of adversarial attacks unique to meta learning and propose the first attacking
algorithm against meta learning under various settings. We evaluate the
effectiveness of the proposed attacking strategy as well as the robustness of
several representative meta learning algorithms. Experimental results
demonstrate that the proposed attacking strategy can easily break the meta
learner and meta learning is vulnerable to adversarial attacks. The
implementation of the proposed framework will be released upon the acceptance
of this paper.
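The paper's actual attack targets deep few-shot models; purely as a toy illustration of the core idea (perturbing a task's support set within a small budget so that the adapted learner then errs on the query set), here is a minimal sketch. The nearest-prototype learner, the finite-difference gradients, and all parameter values are choices made here for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class, 5-shot task in 2-D: a support set per class plus labelled queries.
support = {0: rng.normal([-2.0, 0.0], 0.3, (5, 2)),
           1: rng.normal([+2.0, 0.0], 0.3, (5, 2))}
queries = np.vstack([rng.normal([-2.0, 0.0], 0.3, (20, 2)),
                     rng.normal([+2.0, 0.0], 0.3, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

def predict(support, queries):
    # Stand-in "meta learner": classify queries by the nearest class prototype
    # (mean of the support examples), prototypical-network style.
    protos = np.stack([support[c].mean(axis=0) for c in (0, 1)])
    d = ((queries[:, None, :] - protos[None]) ** 2).sum(-1)
    return d.argmin(axis=1)

def query_loss(support, queries, labels):
    # Cross-entropy of a softmax over negative squared distances to prototypes.
    protos = np.stack([support[c].mean(axis=0) for c in (0, 1)])
    d = ((queries[:, None, :] - protos[None]) ** 2).sum(-1)
    logits = -d
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def attack(support, eps=3.0, step=0.1, iters=60):
    # Sign-gradient ascent on the query loss w.r.t. the support points,
    # projected onto an l-infinity ball of radius eps around the clean data.
    # Gradients are estimated by central finite differences (toy setting only).
    adv = {c: s.copy() for c, s in support.items()}
    h = 1e-4
    for _ in range(iters):
        for c in adv:
            g = np.zeros_like(adv[c])
            for i in range(adv[c].shape[0]):
                for j in range(adv[c].shape[1]):
                    adv[c][i, j] += h
                    up = query_loss(adv, queries, labels)
                    adv[c][i, j] -= 2 * h
                    dn = query_loss(adv, queries, labels)
                    adv[c][i, j] += h
                    g[i, j] = (up - dn) / (2 * h)
            adv[c] = np.clip(adv[c] + step * np.sign(g),
                             support[c] - eps, support[c] + eps)
    return adv

clean_acc = (predict(support, queries) == labels).mean()
adv_support = attack(support)
adv_acc = (predict(adv_support, queries) == labels).mean()
print(f"clean accuracy: {clean_acc:.2f}, attacked accuracy: {adv_acc:.2f}")
```

With a budget this loose the perturbed support points drag the class prototypes toward (and past) each other, so the adapted classifier's query loss rises sharply; with a deep embedding the same effect can be achieved with much smaller, visually unnoticeable perturbations, which is the regime the paper studies.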
Related papers
- Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis [20.11993437283895]
This paper provides a game-theoretical underpinning for understanding this type of security risk.
We define the sampling attack model as a Stackelberg game between the attacker and the agent, which yields a minimax formulation.
We observe that a minor effort of the attacker can significantly deteriorate the learning performance.
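Schematically, such a Stackelberg game (attacker as leader, learning agent as follower) yields a minimax problem of the following shape; the notation here is chosen for illustration and is not taken from the paper:

```latex
% Attacker picks a sampling perturbation \delta within a budget set \Delta;
% the agent then best-responds with a policy \pi from its policy class \Pi.
\min_{\delta \in \Delta} \; \max_{\pi \in \Pi} \; J(\pi, \delta)
```

Here $J(\pi, \delta)$ denotes the agent's expected return under the perturbed sampling process, so the attacker minimizes the best return the agent can still achieve.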
arXiv Detail & Related papers (2022-07-29T21:29:29Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z) - Meta-Meta Classification for One-Shot Learning [11.27833234287093]
We present a new approach, called meta-meta classification, to learning in small-data settings.
In this approach, one uses a large set of learning problems to design an ensemble of learners, where each learner has high bias and low variance.
We evaluate the approach on a one-shot, one-class-versus-all classification task and show that it is able to outperform traditional meta-learning as well as ensembling approaches.
arXiv Detail & Related papers (2020-04-17T07:05:03Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
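The baseline this summary describes boils down to: train a representation once on the whole meta-training set, freeze it, and fit only a trivially simple head per few-shot episode. As a schematic sketch only: the "backbone" below is a fixed random linear map standing in for a pretrained network, and the head is a nearest-centroid classifier; the real pipeline uses a supervised or self-supervised deep network, optionally improved by self-distillation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained backbone": a fixed linear map plus L2 normalisation.
# (In the paper this is a network trained on the full meta-training set.)
W = rng.normal(size=(8, 4))

def embed(x):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# One 3-way, 5-shot episode: well-separated Gaussian classes in input space.
means = np.eye(3, 8) * 6.0                                   # class means in 8-D
support_x = np.vstack([rng.normal(m, 0.2, (5, 8)) for m in means])
support_y = np.repeat(np.arange(3), 5)
query_x = np.vstack([rng.normal(m, 0.2, (15, 8)) for m in means])
query_y = np.repeat(np.arange(3), 15)

# Meta-test adaptation = fit the simplest possible head on frozen features:
# a nearest-centroid classifier over the embedded support examples.
centroids = np.stack([embed(support_x[support_y == c]).mean(axis=0)
                      for c in range(3)])
dists = ((embed(query_x)[:, None, :] - centroids[None]) ** 2).sum(-1)
pred = dists.argmin(axis=1)
acc = (pred == query_y).mean()
print(f"episode accuracy: {acc:.2f}")
```

The point of the baseline is that no episodic meta-training is involved at all: everything task-specific happens in the cheap centroid computation at meta-test time.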
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Meta-Learning across Meta-Tasks for Few-Shot Learning [107.44950540552765]
We argue that inter-meta-task relationships should be exploited, and that meta-tasks should be sampled strategically to assist meta-learning.
We consider the relationships defined over two types of meta-task pairs and propose different strategies to exploit them.
arXiv Detail & Related papers (2020-02-11T09:25:13Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including any information they contain) and is not responsible for any consequences of their use.