GAM Changer: Editing Generalized Additive Models with Interactive
Visualization
- URL: http://arxiv.org/abs/2112.03245v1
- Date: Mon, 6 Dec 2021 18:51:49 GMT
- Title: GAM Changer: Editing Generalized Additive Models with Interactive
Visualization
- Authors: Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark Nunnally,
Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana
- Abstract summary: We present our work, GAM Changer, an open-source interactive system to help data scientists easily and responsibly edit their Generalized Additive Models (GAMs).
With novel visualization techniques, our tool puts interpretability into action -- empowering human users to analyze, validate, and align model behaviors with their knowledge and values.
- Score: 28.77745864749409
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent strides in interpretable machine learning (ML) research reveal that
models exploit undesirable patterns in the data to make predictions, which
can cause harm in deployment. However, it is unclear how we can fix
these models. We present our ongoing work, GAM Changer, an open-source
interactive system to help data scientists and domain experts easily and
responsibly edit their Generalized Additive Models (GAMs). With novel
visualization techniques, our tool puts interpretability into action --
empowering human users to analyze, validate, and align model behaviors with
their knowledge and values. Built using modern web technologies, our tool runs
locally in users' computational notebooks or web browsers without requiring
extra compute resources, lowering the barrier to creating more responsible ML
models. GAM Changer is available at https://interpret.ml/gam-changer.
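A GAM predicts through a sum of per-feature shape functions, g(E[y]) = b0 + f1(x1) + ... + fk(xk), so editing the model amounts to reshaping individual fj curves while checking the effect on held-out data. As a rough sketch of the notebook workflow the abstract describes, the snippet below trains an Explainable Boosting Machine (a GAM from InterpretML) and opens it in GAM Changer; the `gamchanger` package name and `visualize` call follow the project README at the time of writing and should be treated as assumptions rather than a definitive API.

```python
# Minimal sketch: train a GAM and edit it interactively in GAM Changer.
# Assumes `pip install interpret gamchanger`; check
# https://interpret.ml/gam-changer for the current API.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
import gamchanger as gc  # assumed package name, per the project README

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An Explainable Boosting Machine is a GAM whose shape functions are
# learned by gradient boosting; each feature gets an editable curve.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Launch the editor inside the notebook. Edits to the shape functions
# are validated against the held-out split passed here.
gc.visualize(ebm, X_test, y_test)
```

Because the editor runs entirely in the browser or notebook front end, no compute resources beyond the trained model itself are needed.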
Related papers
- GAMformer: In-Context Learning for Generalized Additive Models [53.08263343627232]
We introduce GAMformer, the first method to leverage in-context learning to estimate shape functions of a GAM in a single forward pass.
Our experiments show that GAMformer performs on par with other leading GAMs across various classification benchmarks.
arXiv Detail & Related papers (2024-10-06T17:28:20Z) - Automated Text Scoring in the Age of Generative AI for the GPU-poor [49.1574468325115]
We analyze the performance and efficiency of open-source, small-scale generative language models for automated text scoring.
Results show that GLMs can be fine-tuned to achieve adequate, though not state-of-the-art, performance.
arXiv Detail & Related papers (2024-07-02T01:17:01Z) - User Friendly and Adaptable Discriminative AI: Using the Lessons from
the Success of LLMs and Image Generation Models [0.6926105253992517]
We develop a new system architecture that enables users to work with discriminative models.
Our approach has implications on improving trust, user-friendliness, and adaptability of these versatile but traditional prediction models.
arXiv Detail & Related papers (2023-12-11T20:37:58Z) - Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z) - Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of
Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z) - Interpretability, Then What? Editing Machine Learning Models to Reflect
Human Knowledge and Values [27.333641578187887]
We develop GAM Changer, the first interactive system to help data scientists and domain experts edit Generalized Additive Models (GAMs)
With novel interaction techniques, our tool puts interpretability into action--empowering users to analyze, validate, and align model behaviors with their knowledge and values.
arXiv Detail & Related papers (2022-06-30T17:57:12Z) - Visual Auditor: Interactive Visualization for Detection and
Summarization of Model Biases [18.434430375939755]
As machine learning (ML) systems become increasingly widespread, it is necessary to audit these systems for biases prior to their deployment.
Recent research has developed algorithms for effectively identifying intersectional bias in the form of interpretable, underperforming subsets (or slices) of the data.
We propose Visual Auditor, an interactive visualization tool for auditing and summarizing model biases.
arXiv Detail & Related papers (2022-06-25T02:48:27Z) - GAM(e) changer or not? An evaluation of interpretable machine learning
models based on additive model constraints [5.783415024516947]
This paper investigates a series of intrinsically interpretable machine learning models.
We evaluate the prediction qualities of five GAMs as compared to six traditional ML models.
arXiv Detail & Related papers (2022-04-19T20:37:31Z) - Towards Model-informed Precision Dosing with Expert-in-the-loop Machine
Learning [0.0]
We consider a ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop.
We propose a novel human-in-the-loop ML framework aimed at dealing with learning problems that the cost of data annotation is high.
With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may potentially lower experts' workload.
arXiv Detail & Related papers (2021-06-28T03:45:09Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.