Uncovering the Connections Between Adversarial Transferability and
Knowledge Transferability
- URL: http://arxiv.org/abs/2006.14512v4
- Date: Thu, 8 Jul 2021 19:17:09 GMT
- Title: Uncovering the Connections Between Adversarial Transferability and
Knowledge Transferability
- Authors: Kaizhao Liang, Jacky Y. Zhang, Boxin Wang, Zhuolin Yang, Oluwasanmi
Koyejo, Bo Li
- Abstract summary: We analyze and demonstrate the connections between knowledge transferability and adversarial transferability.
Our theoretical studies show that adversarial transferability indicates knowledge transferability and vice versa.
We conduct extensive experiments for different scenarios on diverse datasets, showing a positive correlation between adversarial transferability and knowledge transferability.
- Score: 27.65302656389911
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Knowledge transferability, or transfer learning, has been widely adopted to
allow a pre-trained model in the source domain to be effectively adapted to
downstream tasks in the target domain. It is thus important to explore and
understand the factors affecting knowledge transferability. In this paper, we
present the first work to analyze and demonstrate the connections between
knowledge transferability and another important phenomenon, adversarial
transferability, \emph{i.e.}, the phenomenon that adversarial examples
generated against one model can be
transferred to attack other models. Our theoretical studies show that
adversarial transferability indicates knowledge transferability and vice versa.
Moreover, based on the theoretical insights, we propose two practical
adversarial transferability metrics to characterize this process, serving as
bidirectional indicators between adversarial and knowledge transferability. We
conduct extensive experiments for different scenarios on diverse datasets,
showing a positive correlation between adversarial transferability and
knowledge transferability. Our findings shed light on future research into
effective knowledge transfer and adversarial transferability analysis.
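As a loose illustration of the quantity at stake (not the paper's two proposed metrics), the following minimal PyTorch sketch estimates adversarial transferability as the error rate a target model suffers on FGSM examples crafted against a source model; `source_model`, `target_model`, `loader`, and `eps` are placeholder assumptions:

```python
# Minimal sketch of an adversarial-transferability proxy (assumed setup,
# not the paper's metrics): craft FGSM examples on a source model and
# measure how often they also fool a target model.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM attack against `model` (white-box, L-inf budget eps)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

@torch.no_grad()
def num_errors(model, x, y):
    """Number of inputs in the batch that `model` misclassifies."""
    return (model(x).argmax(dim=1) != y).sum().item()

def transfer_rate(source_model, target_model, loader, eps=8 / 255):
    """Target-model error on source-crafted adversarial examples."""
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = fgsm(source_model, x, y, eps)  # gradients taken w.r.t. x only
        fooled += num_errors(target_model, x_adv, y)
        total += y.numel()
    return fooled / total
```

Under the paper's thesis, a higher value of this proxy should indicate that the source model also transfers knowledge to the target task more effectively, and vice versa.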
Related papers
- Risk of Transfer Learning and its Applications in Finance [2.966069495345018]
We propose a novel concept of transfer risk and analyze its properties to evaluate the transferability of transfer learning.
Numerical results demonstrate a strong correlation between transfer risk and overall transfer learning performance.
arXiv Detail & Related papers (2023-11-06T17:23:54Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- Transfer Learning for Portfolio Optimization [4.031388559887924]
We introduce a novel concept called "transfer risk", within the optimization framework of transfer learning.
A series of numerical experiments are conducted from three categories: cross-continent transfer, cross-sector transfer, and cross-frequency transfer.
arXiv Detail & Related papers (2023-07-25T14:48:54Z)
- Why Does Little Robustness Help? Understanding and Improving Adversarial
Transferability from Surrogate Training [24.376314203167016]
Adversarial examples (AEs) for DNNs have been shown to be transferable.
In this paper, we take a further step towards understanding adversarial transferability.
arXiv Detail & Related papers (2023-07-15T19:20:49Z)
- Common Knowledge Learning for Generating Transferable Adversarial
Examples [60.1287733223249]
This paper focuses on an important type of black-box attacks, where the adversary generates adversarial examples by a substitute (source) model.
Existing methods tend to yield unsatisfactory adversarial transferability when the source and target models come from different types of DNN architectures.
We propose a common knowledge learning (CKL) framework to learn better network weights to generate adversarial examples.
arXiv Detail & Related papers (2023-07-01T09:07:12Z)
- Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised
Domain Adaptation [70.85686267987744]
Unsupervised domain adaptation transfers knowledge from a labeled source domain to an unlabeled target domain.
We propose an interpretive model of unsupervised domain adaptation, as the first attempt to visually unveil the mystery of transferred knowledge.
Our method provides an intuitive explanation for the base model's predictions and unveils the transferred knowledge by matching image patches with the same semantics across the source and target domains.
arXiv Detail & Related papers (2023-03-04T03:02:12Z)
- Phase Transitions in Transfer Learning for High-Dimensional Perceptrons [12.614901374282868]
Transfer learning seeks to improve the generalization performance of a target task by exploiting knowledge learned from a related source task.
A central question concerns the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task.
We present a theoretical analysis of transfer learning by studying a pair of related perceptron learning tasks.
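Negative transfer admits a compact formalization: transferring from the source yields a predictor with higher target risk than learning from the target data alone. In symbols (notation ours, not the paper's):

```latex
% Negative transfer: source knowledge *increases* target-task risk.
% R_T = population risk on the target task; \hat{f} = learned predictors.
R_T\bigl(\hat{f}_{\mathrm{source}\to\mathrm{target}}\bigr)
  > R_T\bigl(\hat{f}_{\mathrm{target\text{-}only}}\bigr)
```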
arXiv Detail & Related papers (2021-01-06T08:29:22Z)
- Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of unsupervisedly learned models towards another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant source models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z)
- What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin of the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
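A minimal sketch of the "close in parameter space" measurement, assuming two hypothetical instances `model_a` and `model_b` fine-tuned from the same pre-trained weights (architectures must match):

```python
# Illustrative parameter-space distance: L2 norm of the difference between
# the flattened parameter vectors of two same-architecture models.
import torch

def param_distance(model_a, model_b):
    vec_a = torch.cat([p.detach().flatten() for p in model_a.parameters()])
    vec_b = torch.cat([p.detach().flatten() for p in model_b.parameters()])
    return torch.linalg.norm(vec_a - vec_b).item()

# Expectation under the paper's finding: this distance is small for two
# models fine-tuned from shared pre-trained weights, and much larger for
# two models trained from independent random initializations.
```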
arXiv Detail & Related papers (2020-08-26T17:23:40Z)
- Adversarial Training Reduces Information and Improves Transferability [81.59364510580738]
Recent results show that features of adversarially trained networks for classification, in addition to being robust, enable desirable properties such as invertibility.
We show that adversarial training can improve linear transferability to new tasks, from which arises a new trade-off between the transferability of representations and accuracy on the source task.
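Linear transferability is commonly assessed with a linear probe: freeze the (here, adversarially trained) feature extractor and fit only a linear head on the new task. A minimal sketch, where `backbone`, `feat_dim`, and `loader` are placeholder assumptions:

```python
# Minimal linear-probe sketch: train a linear classifier on frozen features
# to gauge how linearly transferable a representation is to a new task.
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(backbone, loader, feat_dim, num_classes, epochs=10, lr=1e-3):
    backbone.eval()                            # freeze the representation
    for p in backbone.parameters():
        p.requires_grad_(False)
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = backbone(x)            # fixed features, no gradient
            loss = F.cross_entropy(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head  # the probe's held-out accuracy serves as the transfer score
```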
arXiv Detail & Related papers (2020-07-22T08:30:16Z)
- Inter- and Intra-domain Knowledge Transfer for Related Tasks in Deep
Character Recognition [2.320417845168326]
Pre-training a deep neural network on the ImageNet dataset is a common practice for training deep learning models.
The technique of pre-training on one task and then retraining on a new one is called transfer learning.
In this paper, we analyse the effectiveness of deep transfer learning for character recognition tasks.
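In code, this pre-train-then-retrain recipe reduces to swapping the classification head and continuing training; a minimal torchvision sketch, where the class count and freezing policy are placeholder choices:

```python
# Minimal transfer-learning sketch: reuse an ImageNet-pre-trained ResNet-18
# for character recognition by replacing and retraining its final layer.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 26  # placeholder, e.g. the letters A-Z

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new task head

# Optional: freeze everything except the new head (inter- vs. intra-domain
# transfer then differs only in which source weights are loaded).
for name, p in model.named_parameters():
    if not name.startswith("fc."):
        p.requires_grad_(False)
# ...train with any standard classification loop on the character data.
```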
arXiv Detail & Related papers (2020-01-02T14:18:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.