User Friendly and Adaptable Discriminative AI: Using the Lessons from
the Success of LLMs and Image Generation Models
- URL: http://arxiv.org/abs/2312.06826v1
- Date: Mon, 11 Dec 2023 20:37:58 GMT
- Title: User Friendly and Adaptable Discriminative AI: Using the Lessons from
the Success of LLMs and Image Generation Models
- Authors: Son The Nguyen, Theja Tulabandhula, Mary Beth Watson-Manheim
- Abstract summary: We develop a new system architecture that enables users to work with discriminative models.
Our approach has implications for improving the trust, user-friendliness, and adaptability of these versatile but traditional prediction models.
- Score: 0.6926105253992517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While there is significant interest in using generative AI tools as
general-purpose models for specific ML applications, discriminative models are
much more widely deployed currently. One of the key shortcomings of these
already-deployed discriminative AI tools is that they are not as adaptable or
user-friendly as generative AI tools (e.g., GPT4, Stable
Diffusion, Bard, etc.), where a non-expert user can iteratively refine model
inputs and give real-time feedback that can be accounted for immediately,
allowing users to build trust from the start. Inspired by this emerging
collaborative workflow, we develop a new system architecture that enables users
to work with discriminative models (such as for object detection, sentiment
classification, etc.) in a fashion similar to generative AI tools, where they
can easily provide immediate feedback as well as adapt the deployed models as
desired. Our approach has implications for improving the trust,
user-friendliness, and adaptability of these versatile but traditional
prediction models.
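
The abstract describes this architecture only at a high level. As a rough illustration of the interaction pattern, the following Python sketch (class and method names are ours, not the authors') wraps an online scikit-learn classifier so that a non-expert user's correction is folded into the deployed model immediately:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class FeedbackAdaptableClassifier:
    """Hypothetical wrapper: user corrections update the model at once."""

    def __init__(self, classes):
        self.classes = np.asarray(classes)
        self.model = SGDClassifier()
        self._ready = False

    def predict(self, x):
        if not self._ready:
            raise RuntimeError("model needs at least one feedback example")
        return self.model.predict(np.atleast_2d(x))[0]

    def give_feedback(self, x, correct_label):
        # One partial_fit step per correction: the feedback takes effect
        # immediately, mimicking the iterative refinement loop of
        # generative AI tools.
        self.model.partial_fit(np.atleast_2d(x), [correct_label],
                               classes=self.classes)
        self._ready = True

# The user inspects predictions and corrects them in real time.
clf = FeedbackAdaptableClassifier(classes=[0, 1])
clf.give_feedback([0.2, 0.9], 1)
print(clf.predict([0.2, 0.9]))
```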
Related papers
- GenAgent: Build Collaborative AI Systems with Automated Workflow Generation -- Case Studies on ComfyUI [64.57616646552869]
This paper explores collaborative AI systems that integrate models, data sources, and pipelines to solve complex and diverse tasks.
We introduce GenAgent, an LLM-based framework that automatically generates complex workflows, offering greater flexibility and scalability compared to monolithic models.
The results demonstrate that GenAgent outperforms baseline approaches in both run-level and task-level evaluations.
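
GenAgent itself targets ComfyUI node graphs. As a toy illustration of the underlying idea, representing a workflow as data that an LLM could emit and an executor could run, consider this sketch (the step registry and the stubbed-out LLM call are hypothetical):

```python
import json

# Toy registry of pipeline steps; GenAgent's real targets are ComfyUI nodes.
STEPS = {
    "load": lambda state, args: {**state, "data": args["source"]},
    "transform": lambda state, args: {**state, "data": f"{state['data']}+{args['op']}"},
    "output": lambda state, args: print(state["data"]),
}

def fake_llm_plan(task: str) -> str:
    """Stand-in for the LLM call that would emit a workflow as JSON."""
    return json.dumps([
        {"step": "load", "args": {"source": "dataset.csv"}},
        {"step": "transform", "args": {"op": "normalize"}},
        {"step": "output", "args": {}},
    ])

def run_workflow(plan_json: str) -> None:
    state = {}
    for node in json.loads(plan_json):
        # each node reads and writes shared state, like edges in a graph
        state = STEPS[node["step"]](state, node["args"]) or state

run_workflow(fake_llm_plan("normalize dataset.csv and print it"))
```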
arXiv Detail & Related papers (2024-09-02T17:44:10Z)
- ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation [35.160964210941955]
We propose ModelGPT, a framework designed to determine and generate AI models tailored to the data or task descriptions provided by the user.
Given user requirements, ModelGPT is able to provide tailored models at most 270x faster than the previous paradigms.
arXiv Detail & Related papers (2024-02-18T11:24:34Z)
- Scaling Laws Do Not Scale [54.72120385955072]
Recent work has argued that as the size of a dataset increases, the performance of a model trained on that dataset will increase.
We argue that this scaling law relationship depends on metrics used to measure performance that may not correspond with how different groups of people perceive the quality of models' output.
Different communities may also have values in tension with each other, leading to difficult, potentially irreconcilable choices about metrics used for model evaluations.
arXiv Detail & Related papers (2023-07-05T15:32:21Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can stem from biases in data acquisition rather than from the task itself.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training scheme to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
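
The paper's exact objective is a nuisance-extended information bottleneck; the toy PyTorch sketch below only illustrates the generic hybrid pattern it builds on, one encoder feeding both a reconstruction (generative) head and a classification (discriminative) head, with the loss weighting chosen arbitrarily:

```python
import torch
import torch.nn as nn

class HybridAutoencoder(nn.Module):
    def __init__(self, in_dim=32, latent_dim=8, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)        # generative head
        self.classifier = nn.Linear(latent_dim, n_classes)  # discriminative head

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = HybridAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 32)                 # toy batch
y = torch.randint(0, 2, (16,))          # toy labels

recon, logits = model(x)
# Joint objective: reconstruct the input *and* predict the label, so the
# latent code must retain more than just the shortcut task signal.
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(logits, y)
opt.zero_grad()
loss.backward()
opt.step()
```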
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models [103.71308117592963]
We present MLAC, an algorithm for training self-destructing models that leverages techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show that MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
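
MLAC's full recipe meta-learns over simulated fine-tuning attempts; the fragment below shows only its simplest adversarial ingredient, penalizing a shared representation's usefulness for a harmful task while preserving a benign one (all names, dimensions, and the loss weighting are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

# Shared encoder with two heads: a benign task we want to keep working,
# and a harmful task we want to make costly to re-learn.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
benign_head = nn.Linear(32, 2)
harmful_head = nn.Linear(32, 2)   # fixed probe; deliberately not optimized

opt = torch.optim.Adam(list(encoder.parameters())
                       + list(benign_head.parameters()), lr=1e-3)

x = torch.randn(8, 16)
y_benign = torch.randint(0, 2, (8,))
y_harmful = torch.randint(0, 2, (8,))

z = encoder(x)
loss_benign = nn.functional.cross_entropy(benign_head(z), y_benign)
# Adversarial term: *increase* the harmful-task loss through the encoder.
loss_harmful = nn.functional.cross_entropy(harmful_head(z), y_harmful)
loss = loss_benign - 0.1 * loss_harmful
opt.zero_grad()
loss.backward()
opt.step()
```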
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
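
HyperImpute ships as a package; the sketch below only reimplements the core iterative loop generically with scikit-learn, using one fixed learner per column as a stand-in for HyperImpute's automatic per-column model selection:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def iterative_impute(X, n_iters=3):
    """Column-wise iterative imputation, HyperImpute-style (simplified)."""
    X = X.copy()
    missing = np.isnan(X)
    # initialize missing entries with column means
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[missing[:, j], j] = col_means[j]
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            if not missing[:, j].any():
                continue
            obs = ~missing[:, j]
            other = np.delete(X, j, axis=1)
            # HyperImpute would *search* for the best learner here.
            model = RandomForestRegressor(n_estimators=50, random_state=0)
            model.fit(other[obs], X[obs, j])
            X[missing[:, j], j] = model.predict(other[missing[:, j]])
    return X

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[rng.random(X.shape) < 0.1] = np.nan   # knock out ~10% of entries
print(iterative_impute(X)[:3])
```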
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- GAM Changer: Editing Generalized Additive Models with Interactive Visualization [28.77745864749409]
We present GAM Changer, an open-source interactive system to help data scientists easily and responsibly edit their Generalized Additive Models (GAMs).
With novel visualization techniques, our tool puts interpretability into action -- empowering human users to analyze, validate, and align model behaviors with their knowledge and values.
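
GAM Changer is a GUI over fitted GAMs; underneath, "editing" reduces to directly modifying a shape function's contribution. A minimal hand-rolled illustration (not the tool's API, and the numbers are invented):

```python
import numpy as np

# A fitted GAM is a sum of per-feature shape functions. Represent one
# shape function as a lookup over bins of a feature (say, age).
bins = np.linspace(0, 100, 11)
contribution = np.array([-0.5, -0.3, 0.0, 0.2, 0.4,
                          0.5, 0.3, -0.2, -0.8, -1.0])  # learned scores

def gam_term(x):
    return contribution[np.clip(np.digitize(x, bins) - 1, 0, 9)]

# "Edit": a domain expert decides the learned drop after age 70 is an
# artifact of sparse data and flattens that region, aligning the model
# with their knowledge -- the core interaction GAM Changer supports.
contribution[7:] = contribution[6]

print(gam_term(np.array([25, 75, 95])))
```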
arXiv Detail & Related papers (2021-12-06T18:51:49Z)
- Evaluating CLIP: Towards Characterization of Broader Capabilities and Downstream Implications [8.15254368157658]
We analyze CLIP and highlight some of the challenges such models pose.
We find that CLIP can inherit biases found in prior computer vision systems.
These results add evidence to the growing body of work calling for a change in the notion of a 'better' model.
arXiv Detail & Related papers (2021-08-05T19:05:57Z)
- Model Learning with Personalized Interpretability Estimation (ML-PIE) [2.862606936691229]
High-stakes applications require AI-generated models to be interpretable.
Current algorithms for the synthesis of potentially interpretable models rely on generic objectives or regularization terms that capture interpretability only coarsely.
We propose an approach for the synthesis of models that are tailored to the user.
arXiv Detail & Related papers (2021-04-13T09:47:48Z)
- Sim-Env: Decoupling OpenAI Gym Environments from Simulation Models [0.0]
Reinforcement learning (RL) is one of the most active fields of AI research.
Development methodology still lags behind, with a severe lack of standard APIs to foster the development of RL applications.
We present a workflow and tools for the decoupled development and maintenance of multi-purpose agent-based models and derived single-purpose reinforcement learning environments.
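
The decoupling described here amounts to keeping the domain simulation free of any RL vocabulary and deriving thin Gym-style adapters from it. A minimal sketch using the newer gymnasium API (the inventory simulation and its reward are invented for illustration):

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces

class InventorySim:
    """Multi-purpose domain model: knows nothing about RL."""
    def __init__(self):
        self.stock = 10
    def step_day(self, order):
        self.stock = max(0, self.stock + order - np.random.randint(0, 5))
        return self.stock

class InventoryEnv(gym.Env):
    """Single-purpose RL view derived from the simulation."""
    def __init__(self):
        self.sim = InventorySim()
        self.action_space = spaces.Discrete(5)           # order 0-4 units
        self.observation_space = spaces.Box(0, np.inf, shape=(1,))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.sim = InventorySim()
        return np.array([self.sim.stock], dtype=np.float32), {}

    def step(self, action):
        stock = self.sim.step_day(int(action))
        reward = -abs(stock - 10)        # purpose-specific reward shaping
        return (np.array([stock], dtype=np.float32),
                reward, False, False, {})

env = InventoryEnv()
obs, info = env.reset()
obs, r, terminated, truncated, info = env.step(env.action_space.sample())
```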
arXiv Detail & Related papers (2021-02-19T09:25:21Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
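
In this framework's spirit, counterfactuals are searched for in the latent space of a GAN so that they stay on the data manifold. A generic, heavily simplified PyTorch sketch (both networks are untrained stand-ins; the real method adds multi-objective criteria for plausibility and change intensity):

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.Tanh())   # stand-in GAN G
classifier = nn.Sequential(nn.Linear(16, 2))             # audited model

z = torch.randn(1, 8, requires_grad=True)
target = torch.tensor([1])            # class the audit tries to flip to
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(100):
    x = generator(z)                  # stays on G's learned manifold
    logits = classifier(x)
    # Push the audited classifier toward the target class while keeping
    # z near the prior -- a crude proxy for the plausibility objectives.
    loss = nn.functional.cross_entropy(logits, target) + 0.01 * z.pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("counterfactual class:", classifier(generator(z)).argmax().item())
```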
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantees about the quality of the listed content (including all information) and accepts no responsibility for any consequences of its use.