Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources
- URL: http://arxiv.org/abs/2007.08714v2
- Date: Wed, 29 Jul 2020 12:12:30 GMT
- Title: Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources
- Authors: Yun-Yun Tsai and Pin-Yu Chen and Tsung-Yi Ho
- Abstract summary: We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
- Score: 78.72922528736011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current transfer learning methods are mainly based on finetuning a pretrained
model with target-domain data. Motivated by the techniques from adversarial
machine learning (ML) that are capable of manipulating the model prediction via
data perturbations, in this paper we propose a novel approach, black-box
adversarial reprogramming (BAR), that repurposes a well-trained black-box ML
model (e.g., a prediction API or proprietary software) for solving different
ML tasks, especially in the scenario with scarce data and constrained
resources. The rationale lies in exploiting high-performance but unknown ML
models to gain learning capability for transfer learning. Using zeroth order
optimization and multi-label mapping techniques, BAR can reprogram a black-box
ML model solely based on its input-output responses without knowing the model
architecture or changing any parameter. More importantly, in the limited
medical data setting, on autism spectrum disorder classification, diabetic
retinopathy detection, and melanoma detection tasks, BAR outperforms
state-of-the-art methods and yields comparable performance to the vanilla
adversarial reprogramming method requiring complete knowledge of the target ML
model. BAR also outperforms baseline transfer learning approaches by a
significant margin, demonstrating cost-effective means and new insights for
transfer learning.
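The core mechanism the abstract describes can be sketched in a few lines: BAR learns a universal input perturbation (an "adversarial program") by querying the black-box model, mapping several source labels to each target class (multi-label mapping), and estimating gradients from loss differences alone (zeroth-order optimization). The sketch below is a minimal illustration under assumptions, not the authors' implementation: `black_box_predict` is a toy stand-in for a prediction API, and the estimator shown is a one-sided averaged random-direction estimate, one common zeroth-order scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_predict(x):
    """Toy stand-in for the black-box prediction API (assumption:
    it returns only a probability vector over K source classes)."""
    K = 10
    logits = x.reshape(-1)[:K]          # not a real model, just a placeholder
    e = np.exp(logits - logits.max())
    return e / e.sum()

def multi_label_map(probs, mapping):
    """Multi-label mapping: average the source-class probabilities
    assigned to each target class."""
    return np.array([probs[idx].mean() for idx in mapping])

def loss(W, x, y, mapping):
    """Cross-entropy of the reprogrammed input x + W on target label y."""
    probs = black_box_predict(x + W)
    target_probs = multi_label_map(probs, mapping)
    return -np.log(target_probs[y] + 1e-12)

def zoo_gradient(W, x, y, mapping, q=5, beta=0.01):
    """Zeroth-order gradient estimate from q random directions:
    g ~ (d/q) * sum_j [(L(W + beta*u_j) - L(W)) / beta] * u_j.
    Uses only loss evaluations, i.e. only input-output queries."""
    d = W.size
    base = loss(W, x, y, mapping)
    g = np.zeros_like(W)
    for _ in range(q):
        u = rng.standard_normal(W.shape)
        u /= np.linalg.norm(u)          # random unit direction
        g += (loss(W + beta * u, x, y, mapping) - base) / beta * u
    return (d / q) * g

# Toy usage: one gradient step on the adversarial program W.
x = rng.standard_normal((4, 4))                  # toy "image"
W = np.zeros_like(x)                             # adversarial program
mapping = [np.array([0, 1]), np.array([2, 3])]   # 2 target classes <- source labels
g = zoo_gradient(W, x, 0, mapping)
W -= 0.1 * g                                     # descent step on W only
```

Note that only `W` is updated; the black-box model's parameters are never touched, which is what makes the approach applicable to prediction APIs.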
Related papers
- Verbalized Machine Learning: Revisiting Machine Learning with Language Models [63.10391314749408]
We introduce the framework of verbalized machine learning (VML).
VML constrains the parameter space to be human-interpretable natural language.
We conduct several studies to empirically evaluate the effectiveness of VML.
arXiv Detail & Related papers (2024-06-06T17:59:56Z)
- Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning [67.0609518552321]
We propose to conduct Machine Vision Therapy which aims to rectify the noisy predictions from vision models.
By fine-tuning with the denoised labels, the learning model performance can be boosted in an unsupervised manner.
arXiv Detail & Related papers (2023-12-05T07:29:14Z)
- Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an efficient unlearning framework that can update LLMs without retraining the whole model after data removal.
arXiv Detail & Related papers (2023-10-31T03:35:59Z)
- Learn to Unlearn: A Survey on Machine Unlearning [29.077334665555316]
This article presents a review of recent machine unlearning techniques, verification mechanisms, and potential attacks.
We highlight emerging challenges and prospective research directions.
We aim for this paper to provide valuable resources for integrating privacy, equity, and resilience into ML systems.
arXiv Detail & Related papers (2023-05-12T14:28:02Z)
- Model Sparsity Can Simplify Machine Unlearning [33.18951938708467]
In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process.
Our study introduces a novel model-based perspective: model sparsification via weight pruning.
We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner.
arXiv Detail & Related papers (2023-04-11T02:12:02Z)
- Masked Autoencoding for Scalable and Generalizable Decision Making [93.84855114717062]
MaskDP is a simple and scalable self-supervised pretraining method for reinforcement learning and behavioral cloning.
We find that a MaskDP model gains the capability of zero-shot transfer to new BC tasks, such as single and multiple goal reaching.
arXiv Detail & Related papers (2022-11-23T07:04:41Z)
- Distillation to Enhance the Portability of Risk Models Across Institutions with Large Patient Claims Database [12.452703677540505]
We investigate the practicality of model portability through a cross-site evaluation of readmission prediction models.
We apply a recurrent neural network, augmented with self-attention and blended with expert features, to build readmission prediction models.
Our experiments show that direct application of ML models trained at one institution and tested at another performs worse than models trained and tested at the same institution.
arXiv Detail & Related papers (2022-07-06T05:26:32Z)
- Model-Agnostic Multitask Fine-tuning for Few-shot Vision-Language Transfer Learning [59.38343286807997]
We propose Model-Agnostic Multitask Fine-tuning (MAMF) for vision-language models on unseen tasks.
Compared with model-agnostic meta-learning (MAML), MAMF discards the bi-level optimization and uses only first-order gradients.
We show that MAMF consistently outperforms the classical fine-tuning method for few-shot transfer learning on five benchmark datasets.
arXiv Detail & Related papers (2022-03-09T17:26:53Z)
- Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning [0.0]
We consider an ML framework that may accelerate model learning and improve interpretability by incorporating human experts into the learning loop.
We propose a novel human-in-the-loop ML framework aimed at learning problems in which the cost of data annotation is high.
With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may potentially lower experts' workload.
arXiv Detail & Related papers (2021-06-28T03:45:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.