Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning
- URL: http://arxiv.org/abs/2106.14384v2
- Date: Tue, 29 Jun 2021 03:11:03 GMT
- Title: Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning
- Authors: Yihuang Kang, Yi-Wen Chiu, Ming-Yen Lin, Fang-yi Su, Sheng-Tai Huang
- Abstract summary: We consider an ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop.
We propose a novel human-in-the-loop ML framework aimed at learning problems in which the cost of data annotation is high.
With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may lower experts' workload.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning (ML) and its applications have been transforming our lives
but it is also creating issues related to the development of fair, accountable,
transparent, and ethical Artificial Intelligence. Because ML models are not
yet fully comprehensible, humans still need to be part of algorithmic
decision-making processes. In this paper, we consider an ML framework that
may accelerate model learning and improve its interpretability by
incorporating human experts into the model learning loop. We propose a novel
human-in-the-loop ML framework aimed at learning problems in which the cost
of data annotation is high and appropriate data for modeling the association
between the target tasks and the input features is lacking. With an
application to precision dosing, our experimental results show that the
approach can learn interpretable rules from data and may lower experts'
workload by replacing data annotation with rule representation
editing. The approach may also help remove algorithmic bias by introducing
experts' feedback into the iterative model learning process.
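As a concrete illustration, below is a minimal, hypothetical sketch of the expert-in-the-loop cycle the abstract describes: an interpretable rule-based model is fit, its rules are shown to an expert for editing, and the feedback constrains the next training round. The function names and the use of a shallow decision tree as the rule learner are assumptions for illustration, not the paper's implementation.

```python
# A minimal, hypothetical sketch of the expert-in-the-loop cycle described
# above. The rule learner (a shallow decision tree) and the expert_edit
# callback are illustrative assumptions, not the paper's actual method.
from sklearn.tree import DecisionTreeClassifier, export_text

def expert_in_the_loop(X, y, rounds=3, expert_edit=None):
    """Iteratively fit an interpretable model and let an expert revise it."""
    model = DecisionTreeClassifier(max_depth=3)
    for _ in range(rounds):
        model.fit(X, y)
        rules = export_text(model)  # human-readable rule representation
        if expert_edit is not None:
            # Instead of annotating more raw data, the expert edits the rule
            # text; here the feedback comes back as revised labels, which is
            # one simple way to fold it into the next training round.
            y = expert_edit(rules, X, y)
    return model
```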
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning, the problem of efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model, has recently attracted interest.
Recent research shows, however, that existing machine unlearning techniques do not hold up in this challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- How to unlearn a learned Machine Learning model ? [0.0]
I will present an elegant algorithm for unlearning a machine learning model and visualize its abilities.
I will elucidate the underlying mathematical theory and establish specific metrics to evaluate both the unlearned model's performance on desired data and its level of ignorance regarding unwanted data.
arXiv Detail & Related papers (2024-10-13T17:38:09Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [49.043599241803825]
The Iterative Contrastive Unlearning (ICU) framework consists of three core components.
A Knowledge Unlearning Induction module removes specific knowledge through an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal.
An Iterative Unlearning Refinement module dynamically assesses the unlearning extent on specific data pieces and makes iterative updates.
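A hedged sketch of how these three components could interact in a single training step is below, assuming a PyTorch language model with a HuggingFace-style interface; the loss formulation and the refinement check are illustrative guesses, not the paper's implementation.

```python
import torch.nn.functional as F

def icu_step(model, forget_batch, retain_batch, optimizer, alpha=1.0):
    """One illustrative update combining the three ICU components."""
    optimizer.zero_grad()
    # Knowledge Unlearning Induction: raise the loss on the forget data
    # (gradient ascent, written as a negated cross-entropy term).
    f_logits = model(forget_batch["input_ids"]).logits
    unlearn = -F.cross_entropy(f_logits.transpose(1, 2),
                               forget_batch["labels"])
    # Contrastive Learning Enhancement: keep behavior on retained data
    # intact so the model's expressive capability is preserved.
    r_logits = model(retain_batch["input_ids"]).logits
    retain = F.cross_entropy(r_logits.transpose(1, 2),
                             retain_batch["labels"])
    (unlearn + alpha * retain).backward()
    optimizer.step()
    # Iterative Unlearning Refinement: report the forget-set loss so the
    # caller can decide whether another unlearning iteration is needed.
    return -unlearn.item()
```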
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
However, LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- An Information Theoretic Approach to Machine Unlearning [45.600917449314444]
A key challenge in unlearning is forgetting the necessary data in a timely manner while preserving model performance.
In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten.
We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z)
- Unlearnable Algorithms for In-context Learning [36.895152458323764]
In this paper, we focus on efficient unlearning methods for the task adaptation phase of a pretrained large language model.
We observe that an LLM's ability to do in-context learning for task adaptation allows for efficient exact unlearning of task adaptation training data.
We propose a new holistic measure of unlearning cost which accounts for varying inference costs.
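This observation lends itself to a tiny illustration: when task adaptation lives entirely in the prompt, the pretrained weights never change, so deleting an example from the prompt store removes its influence exactly. The store and prompt format below are hypothetical.

```python
# Hypothetical in-context task adaptation: training examples live only in
# the prompt, never in the model weights.
example_store = [
    ("great movie", "positive"),
    ("terrible plot", "negative"),
]

def build_prompt(query, store):
    """Assemble a few-shot prompt from the current example store."""
    shots = "\n".join(f"Review: {x}\nLabel: {y}" for x, y in store)
    return f"{shots}\nReview: {query}\nLabel:"

def unlearn(example, store):
    """Exact unlearning: drop the example; it can no longer affect output."""
    return [pair for pair in store if pair != example]
```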
arXiv Detail & Related papers (2024-02-01T16:43:04Z)
- AI Model Disgorgement: Methods and Choices [127.54319351058167]
We introduce a taxonomy of possible disgorgement methods that are applicable to modern machine learning systems.
We investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.
arXiv Detail & Related papers (2023-04-07T08:50:18Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
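As a rough sketch of the piecewise-linear idea, one can fit a continuous curve on fixed knots by least squares and print it back as a short expression; the knot placement and code-emission format here are assumptions, not the paper's algorithm.

```python
import numpy as np

def fit_piecewise_linear(x, y, knots):
    """Least-squares fit of a continuous piecewise-linear curve on fixed knots."""
    # Hinge basis: intercept, slope, and one ReLU feature per knot.
    basis = [np.ones_like(x), x] + [np.maximum(0.0, x - k) for k in knots]
    coef, *_ = np.linalg.lstsq(np.stack(basis, axis=1), y, rcond=None)
    return coef

def emit_code(coef, knots):
    """Render the fitted curve as a short, human-readable Python expression."""
    terms = [f"{coef[0]:.3g}", f"{coef[1]:.3g}*x"]
    terms += [f"{c:.3g}*max(0, x - {k:.3g})" for c, k in zip(coef[2:], knots)]
    return "def f(x): return " + " + ".join(terms)
```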
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
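The zeroth-order piece is the part most easily sketched: BAR needs gradients of a loss it can only query, and a randomized finite-difference estimator provides them from input-output responses alone. The estimator below is a standard two-point scheme and only a sketch of the idea; loss_fn stands in for the black-box model queries.

```python
import numpy as np

def zeroth_order_grad(loss_fn, theta, q=10, mu=0.01):
    """Estimate the gradient of loss_fn at theta from function values only."""
    grad = np.zeros_like(theta)
    f0 = loss_fn(theta)
    for _ in range(q):
        u = np.random.randn(*theta.shape)  # random search direction
        # Forward finite difference along u; no gradients from the model.
        grad += (loss_fn(theta + mu * u) - f0) / mu * u
    return grad / q

# The reprogramming pattern theta is then updated with plain gradient
# descent: theta -= lr * zeroth_order_grad(black_box_loss, theta).
```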
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
- Learning a Formula of Interpretability to Learn Interpretable Formulas [1.7616042687330642]
We show that an ML model of non-objective Proxies of Human Interpretability can be learned from human feedback.
We show this for evolutionary symbolic regression.
Our approach represents an important stepping stone for the design of next-generation interpretable (evolutionary) ML algorithms.
arXiv Detail & Related papers (2020-04-23T13:59:49Z)