Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources
- URL: http://arxiv.org/abs/2007.08714v2
- Date: Wed, 29 Jul 2020 12:12:30 GMT
- Title: Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources
- Authors: Yun-Yun Tsai and Pin-Yu Chen and Tsung-Yi Ho
- Abstract summary: We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
- Score: 78.72922528736011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current transfer learning methods are mainly based on finetuning a pretrained
model with target-domain data. Motivated by the techniques from adversarial
machine learning (ML) that are capable of manipulating the model prediction via
data perturbations, in this paper we propose a novel approach, black-box
adversarial reprogramming (BAR), that repurposes a well-trained black-box ML
model (e.g., a prediction API or proprietary software) for solving different
ML tasks, especially in the scenario with scarce data and constrained
resources. The rationale lies in exploiting high-performance but unknown ML
models to gain learning capability for transfer learning. Using zeroth order
optimization and multi-label mapping techniques, BAR can reprogram a black-box
ML model solely based on its input-output responses without knowing the model
architecture or changing any parameter. More importantly, in the limited
medical data setting, on autism spectrum disorder classification, diabetic
retinopathy detection, and melanoma detection tasks, BAR outperforms
state-of-the-art methods and yields comparable performance to the vanilla
adversarial reprogramming method requiring complete knowledge of the target ML
model. BAR also outperforms baseline transfer learning approaches by a
significant margin, demonstrating cost-effective means and new insights for
transfer learning.
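To make the two ingredients named above concrete, the sketch below illustrates (i) a zeroth-order gradient estimate of an additive input "program" obtained purely from input-output queries, and (ii) a fixed multi-label mapping that pools several source-class probabilities into each target class. It is a minimal toy reconstruction, not the authors' code: `query_blackbox`, the stub model, the one-sided random-direction estimator, and all sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the black-box model is stubbed with a random linear scorer so the
# sketch runs end-to-end. In practice query_blackbox would call a prediction API.
N_SOURCE, N_TARGET, DIM = 100, 3, 32 * 32 * 3
W_STUB = 0.01 * rng.normal(size=(DIM, N_SOURCE))

def query_blackbox(x):
    """Return source-class probabilities for input x (input-output access only)."""
    logits = x @ W_STUB
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Multi-label mapping: several (here 5) source labels jointly represent one target label.
label_map = [rng.choice(N_SOURCE, size=5, replace=False) for _ in range(N_TARGET)]

def target_probs(x):
    p = query_blackbox(x)
    q = np.array([p[idx].sum() for idx in label_map])
    return q / q.sum()

def loss(theta, x, y):
    """Cross-entropy of the mapped prediction; theta is the additive input program."""
    return -np.log(target_probs(np.clip(x + theta, 0.0, 1.0))[y] + 1e-12)

def zo_gradient(theta, x, y, q=10, beta=0.01):
    """One-sided averaged random-direction gradient estimate (q + 1 queries per call)."""
    base = loss(theta, x, y)
    g = np.zeros_like(theta)
    for _ in range(q):
        u = rng.normal(size=theta.shape)
        u /= np.linalg.norm(u)
        g += (loss(theta + beta * u, x, y) - base) / beta * u
    return (theta.size / q) * g

# Scarce target-domain data: a handful of labelled examples.
data = [(rng.random(DIM), int(rng.integers(N_TARGET))) for _ in range(8)]
theta = np.zeros(DIM)
for step in range(40):
    x, y = data[step % len(data)]
    theta -= 0.05 * zo_gradient(theta, x, y)
```

Because only loss values are needed, no gradients or parameters of the black-box model are ever accessed; the number of queries per update (q + 1) is the main cost knob in the data- and resource-limited regime the abstract targets.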
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- How to unlearn a learned Machine Learning model ? [0.0]
I will present an elegant algorithm for unlearning a machine learning model and visualize its abilities.
I will elucidate the underlying mathematical theory and establish specific metrics to evaluate both the unlearned model's performance on desired data and its level of ignorance regarding unwanted data.
arXiv Detail & Related papers (2024-10-13T17:38:09Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [49.043599241803825]
The Iterative Contrastive Unlearning (ICU) framework consists of three core components.
A Knowledge Unlearning Induction module removes specific knowledge through an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal.
An Iterative Unlearning Refinement module dynamically assesses the unlearning extent on specific data pieces and makes iterative updates.
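As an illustration of how the first two components could combine into one objective, here is a minimal sketch assuming a HuggingFace-style causal language model (a `model(...)` call returning `.loss` and `.logits`) and a frozen reference copy; the exact loss forms, weighting, and refinement criterion are assumptions, not ICU's published formulation.

```python
import torch
import torch.nn.functional as F

def icu_step(model, ref_model, forget_batch, retain_batch, lam=1.0):
    # Knowledge Unlearning Induction: ascend the LM loss on the forget data
    # (i.e. minimize its negative) to remove the targeted knowledge.
    loss_unlearn = -model(**forget_batch, labels=forget_batch["input_ids"]).loss

    # Contrastive Learning Enhancement: keep predictions on retained data close
    # to a frozen reference model so expressive capability is preserved.
    with torch.no_grad():
        ref_logits = ref_model(**retain_batch).logits
    logits = model(**retain_batch).logits
    loss_preserve = F.kl_div(F.log_softmax(logits, dim=-1),
                             F.softmax(ref_logits, dim=-1),
                             reduction="batchmean")

    # Iterative Unlearning Refinement (per the summary) would re-score how well
    # each forget example has been unlearned and repeat updates only where needed.
    return loss_unlearn + lam * loss_preserve
```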
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
- Verbalized Machine Learning: Revisiting Machine Learning with Language Models [63.10391314749408]
We introduce the framework of verbalized machine learning (VML).
VML constrains the parameter space to be human-interpretable natural language.
We empirically verify the effectiveness of VML, and hope that VML can serve as a stepping stone to stronger interpretability.
arXiv Detail & Related papers (2024-06-06T17:59:56Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Learn to Unlearn: A Survey on Machine Unlearning [29.077334665555316]
This article presents a review of recent machine unlearning techniques, verification mechanisms, and potential attacks.
We highlight emerging challenges and prospective research directions.
We aim for this paper to provide valuable resources for integrating privacy, equity, and resilience into ML systems.
arXiv Detail & Related papers (2023-05-12T14:28:02Z)
- Model Sparsity Can Simplify Machine Unlearning [33.18951938708467]
In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process.
Our study introduces a novel model-based perspective: model sparsification via weight pruning.
We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner.
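The "prune first, then unlearn" idea can be sketched in a few lines; the one-shot magnitude pruning and the toy retain-set objective below are illustrative assumptions standing in for the paper's actual pruning schedules and approximate unlearners.

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_prune(W, sparsity=0.9):
    """One-shot magnitude pruning: zero the smallest |w| entries, return weights and mask."""
    thresh = np.quantile(np.abs(W), sparsity)
    mask = (np.abs(W) > thresh).astype(W.dtype)
    return W * mask, mask

# Pretend pre-trained weights, sparsified before unlearning.
W, mask = magnitude_prune(rng.normal(size=(64, 10)), sparsity=0.9)

# Approximate unlearning stub: a few gradient steps on retained data only
# (toy linear-regression objective), with the pruning mask held fixed.
X_retain = rng.normal(size=(128, 64))
Y_retain = rng.normal(size=(128, 10))
for _ in range(50):
    grad = X_retain.T @ (X_retain @ W - Y_retain) / len(X_retain)
    W = (W - 0.01 * grad) * mask   # sparsity pattern is preserved throughout
```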
arXiv Detail & Related papers (2023-04-11T02:12:02Z)
- Distillation to Enhance the Portability of Risk Models Across Institutions with Large Patient Claims Database [12.452703677540505]
We investigate the practicality of model portability through a cross-site evaluation of readmission prediction models.
We apply a recurrent neural network, augmented with self-attention and blended with expert features, to build readmission prediction models.
Our experiments show that ML models trained at one institution and directly applied at another perform worse than models trained and tested at the same institution.
arXiv Detail & Related papers (2022-07-06T05:26:32Z)
- Model-Agnostic Multitask Fine-tuning for Few-shot Vision-Language Transfer Learning [59.38343286807997]
We propose Model-Agnostic Multitask Fine-tuning (MAMF) for vision-language models on unseen tasks.
Compared with model-agnostic meta-learning (MAML), MAMF discards the bi-level optimization and uses only first-order gradients.
We show that MAMF consistently outperforms the classical fine-tuning method for few-shot transfer learning on five benchmark datasets.
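The optimization-level contrast with MAML can be seen in a toy single-level loop: plain first-order gradients are averaged across tasks and applied directly, with no inner-loop adaptation to differentiate through. The regression tasks and shared linear model below are stand-ins for the vision-language prompt-tuning setting the paper actually studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "seen" tasks sharing one parameter vector (stand-in for shared prompt parameters).
tasks = [(rng.normal(size=(16, 8)), rng.normal(size=16)) for _ in range(4)]
w = np.zeros(8)

def task_grad(w, X, y):
    """Plain first-order gradient of a least-squares task loss."""
    return X.T @ (X @ w - y) / len(y)

# Single-level multitask fine-tuning: average first-order gradients over tasks
# and take one ordinary step (no bi-level / second-order meta-gradient as in MAML).
for step in range(200):
    g = sum(task_grad(w, X, y) for X, y in tasks) / len(tasks)
    w -= 0.05 * g
```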
arXiv Detail & Related papers (2022-03-09T17:26:53Z)
- Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning [0.0]
We consider an ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop.
We propose a novel human-in-the-loop ML framework aimed at learning problems in which the cost of data annotation is high.
With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may potentially lower experts' workload.
arXiv Detail & Related papers (2021-06-28T03:45:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.