Manipulating Predictions over Discrete Inputs in Machine Teaching
- URL: http://arxiv.org/abs/2401.17865v1
- Date: Wed, 31 Jan 2024 14:23:51 GMT
- Title: Manipulating Predictions over Discrete Inputs in Machine Teaching
- Authors: Xiaodong Wu, Yufei Han, Hayssam Dahrouj, Jianbing Ni, Zhenwen Liang,
Xiangliang Zhang
- Abstract summary: This paper focuses on machine teaching in the discrete domain, specifically on manipulating a student model's predictions toward a teacher's goals by efficiently changing the training data.
We formulate this task as a combinatorial optimization problem and solve it by proposing an iterative searching algorithm.
Our algorithm demonstrates significant numerical merit both when a teacher corrects erroneous predictions to improve the student model and when a teacher maliciously manipulates the model to misclassify specific samples into a target class aligned with the teacher's own interests.
- Score: 43.914943603238996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine teaching often involves the creation of an optimal (typically
minimal) dataset to help a model (referred to as the `student') achieve
specific goals given by a teacher. While abundant in the continuous domain, the
studies on the effectiveness of machine teaching in the discrete domain are
relatively limited. This paper focuses on machine teaching in the discrete
domain, specifically on manipulating student models' predictions based on the
goals of teachers via changing the training data efficiently. We formulate this
task as a combinatorial optimization problem and solve it by proposing an
iterative searching algorithm. Our algorithm demonstrates significant numerical
merit in scenarios where a teacher attempts to correct erroneous predictions to
improve the student model, or to maliciously manipulate the model into
misclassifying specific samples as a target class aligned with the teacher's own
interests. Experimental results show that the proposed algorithm effectively and
efficiently manipulates the model's predictions, surpassing conventional
baselines.
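To make the abstract's "iterative searching" idea concrete, below is a minimal, hypothetical sketch of a greedy search over discrete training-data edits (single label flips) that steers a retrained student model's predictions toward teacher-specified targets. The edit space, the scikit-learn logistic-regression student, and the `max_edits` budget are illustrative assumptions, not the paper's actual formulation.

```python
# A hypothetical greedy sketch of teaching-by-editing-training-data.
# NOT the authors' algorithm: candidate edits (label flips), the student model
# (scikit-learn LogisticRegression), and the stopping rule are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_teaching_search(X_train, y_train, X_target, y_target, max_edits=10):
    """Greedily flip training labels so the retrained student predicts
    y_target on X_target. Returns edited labels and the list of (index, label) edits."""
    y_edit = y_train.copy()
    edits = []
    classes = np.unique(y_train)
    for _ in range(max_edits):
        student = LogisticRegression(max_iter=1000).fit(X_train, y_edit)
        satisfied = (student.predict(X_target) == y_target)
        if satisfied.all():               # teacher's goal already met
            break
        best_gain, best_edit = 0, None
        for i in range(len(y_edit)):      # candidate edit: flip one training label
            for c in classes:
                if c == y_edit[i]:
                    continue
                y_try = y_edit.copy()
                y_try[i] = c
                trial = LogisticRegression(max_iter=1000).fit(X_train, y_try)
                gain = (trial.predict(X_target) == y_target).sum() - satisfied.sum()
                if gain > best_gain:
                    best_gain, best_edit = gain, (i, c)
        if best_edit is None:             # no single flip improves the objective
            break
        y_edit[best_edit[0]] = best_edit[1]
        edits.append(best_edit)
    return y_edit, edits
```

This brute-force version retrains the student for every candidate flip, which is only practical for toy problems; the point is to illustrate the combinatorial search space of discrete data edits, not an efficient solver.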
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Machine Unlearning for Causal Inference [0.6621714555125157]
It is important to enable a model to forget some of the information it has learned or captured about a given user (machine unlearning).
This paper introduces the concept of machine unlearning for causal inference, particularly propensity score matching and treatment effect estimation.
The dataset used in the study is the Lalonde dataset, a widely used dataset for evaluating the effectiveness of job training programs.
arXiv Detail & Related papers (2023-08-24T17:27:01Z) - Assessing the Generalizability of a Performance Predictive Model [0.6070952062639761]
We propose a workflow to estimate the generalizability of a predictive model for algorithm performance.
The results show that generalizability patterns in the landscape feature space are reflected in the performance space.
arXiv Detail & Related papers (2023-05-31T12:50:44Z) - Efficient Sub-structured Knowledge Distillation [52.5931565465661]
We propose an approach that is much simpler in its formulation and far more efficient for training than existing approaches.
We transfer the knowledge from a teacher model to its student model by locally matching their predictions on all sub-structures, instead of the whole output space.
arXiv Detail & Related papers (2022-03-09T15:56:49Z) - Graph-based Ensemble Machine Learning for Student Performance Prediction [0.7874708385247353]
We propose a graph-based ensemble machine learning method to improve the stability of single machine learning methods.
Our model outperforms the best traditional machine learning algorithms by up to 14.8% in prediction accuracy.
arXiv Detail & Related papers (2021-12-15T05:19:46Z) - Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation [55.34995029082051]
We propose a method to learn to augment for data-scarce domain BERT knowledge distillation.
We show that the proposed method significantly outperforms state-of-the-art baselines on four different tasks.
arXiv Detail & Related papers (2021-01-20T13:07:39Z) - Model-Agnostic Explanations using Minimal Forcing Subsets [11.420687735660097]
We propose a new model-agnostic algorithm to identify a minimal set of training samples that are indispensable for a given model's decision.
Our algorithm identifies such a set of "indispensable" samples iteratively by solving a constrained optimization problem.
Results show that our algorithm is an effective and easy-to-comprehend tool that helps to better understand local model behavior.
arXiv Detail & Related papers (2020-11-01T22:45:16Z)