Green Runner: A tool for efficient model selection from model repositories
- URL: http://arxiv.org/abs/2305.16849v1
- Date: Fri, 26 May 2023 12:00:37 GMT
- Title: Green Runner: A tool for efficient model selection from model repositories
- Authors: Jai Kannan, Scott Barnett, Anj Simmons, Taylan Selvi, Luis Cruz
- Abstract summary: GreenRunnerGPT is a novel tool for selecting deep learning models based on specific use cases.
It employs a large language model to suggest weights for quality indicators, optimizing resource utilization.
We demonstrate that GreenRunnerGPT is able to identify a model suited to a target use case without wasteful computations.
- Score: 3.0378875015087563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models have become essential in software engineering, enabling
intelligent features like image captioning and document generation. However,
their popularity raises concerns about environmental impact and inefficient
model selection. This paper introduces GreenRunnerGPT, a novel tool for
efficiently selecting deep learning models based on specific use cases. It
employs a large language model to suggest weights for quality indicators,
optimizing resource utilization. The tool utilizes a multi-armed bandit
framework to evaluate models against target datasets, considering tradeoffs. We
demonstrate that GreenRunnerGPT is able to identify a model suited to a target
use case without wasteful computations that would occur under a brute-force
approach to model selection.
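The abstract describes evaluating candidate models with a multi-armed bandit framework so that evaluation budget is concentrated on promising models rather than spent uniformly as in brute-force selection. The paper does not publish its algorithm here, so the following is only an illustrative sketch of that idea using a standard UCB1 bandit; the model names, the simulated per-batch quality scores, and the budget are all hypothetical stand-ins for real evaluations on a target dataset.

```python
import math
import random

random.seed(0)

# Hypothetical mean quality scores for three candidate models.
# In a real run each "pull" would evaluate a model on a batch of the
# target dataset; here we simulate noisy scores around these means.
TRUE_MEANS = {"model_a": 0.62, "model_b": 0.80, "model_c": 0.71}

def evaluate_on_batch(model_name):
    """Simulated evaluation: a noisy quality score clipped to [0, 1]."""
    score = random.gauss(TRUE_MEANS[model_name], 0.05)
    return min(1.0, max(0.0, score))

def ucb_select(models, total_budget):
    """UCB1 bandit: spend a fixed evaluation budget across candidates."""
    counts = {m: 0 for m in models}
    sums = {m: 0.0 for m in models}
    # Pull each arm once to initialise the estimates.
    for m in models:
        sums[m] += evaluate_on_batch(m)
        counts[m] += 1
    for t in range(len(models), total_budget):
        # Mean estimate plus an exploration bonus that shrinks as a
        # model accumulates evaluations.
        ucb = {
            m: sums[m] / counts[m]
               + math.sqrt(2 * math.log(t + 1) / counts[m])
            for m in models
        }
        best = max(ucb, key=ucb.get)
        sums[best] += evaluate_on_batch(best)
        counts[best] += 1
    means = {m: sums[m] / counts[m] for m in models}
    return max(means, key=means.get), counts

winner, pulls = ucb_select(list(TRUE_MEANS), total_budget=60)
print(winner, pulls)
```

The key property this sketch demonstrates is that weak candidates receive only a handful of evaluations while the budget flows to the strongest ones, which is the resource saving the paper claims over exhaustive evaluation.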
Related papers
- Enabling Small Models for Zero-Shot Classification through Model Label Learning [50.68074833512999]
We introduce a novel paradigm, Model Label Learning (MLL), which bridges the gap between models and their functionalities.
Experiments on seven real-world datasets validate the effectiveness and efficiency of MLL.
arXiv Detail & Related papers (2024-08-21T09:08:26Z)
- Apple Intelligence Foundation Language Models [109.60033785567484]
This report describes the model architecture, the data used to train the model, the training process, and the evaluation results.
We highlight our focus on Responsible AI and how the principles are applied throughout the model development.
arXiv Detail & Related papers (2024-07-29T18:38:49Z)
- A Two-Phase Recall-and-Select Framework for Fast Model Selection [13.385915962994806]
We propose a two-phase (coarse-recall and fine-selection) model selection framework.
It aims to enhance the efficiency of selecting a robust model by leveraging the models' training performances on benchmark datasets.
It has been demonstrated that the proposed methodology facilitates the selection of a high-performing model about 3x faster than conventional baseline methods.
arXiv Detail & Related papers (2024-03-28T14:44:44Z)
- REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values [17.489279048199304]
REFRESH is a method for reselecting features so that additional constraints desirable for model performance can be satisfied without training several new models.
REFRESH's underlying algorithm is a novel technique using SHAP values and correlation analysis to approximate the predictions of models without having to train them.
arXiv Detail & Related papers (2024-03-13T18:06:43Z)
- Green Runner: A tool for efficient deep learning component selection [0.76146285961466]
We present Green Runner, a novel tool to automatically select and evaluate models based on an application scenario provided in natural language.
Green Runner features a resource-efficient experimentation engine that integrates problem-specific constraints and trade-offs into the model selection process.
arXiv Detail & Related papers (2024-01-29T00:15:50Z)
- Budgeted Online Model Selection and Fine-Tuning via Federated Learning [26.823435733330705]
Online model selection involves selecting a model from a set of candidate models 'on the fly' to perform prediction on a stream of data.
The choice of candidate models therefore has a crucial impact on performance.
The present paper proposes an online federated model selection framework where a group of learners (clients) interacts with a server with sufficient memory.
Using the proposed algorithm, clients and the server collaborate to fine-tune models to adapt them to a non-stationary environment.
arXiv Detail & Related papers (2024-01-19T04:02:49Z)
- Dual Student Networks for Data-Free Model Stealing [79.67498803845059]
Two main challenges are estimating gradients of the target model without access to its parameters, and generating a diverse set of training samples.
We propose a Dual Student method where two students are symmetrically trained in order to provide the generator a criterion to generate samples that the two students disagree on.
We show that our new optimization framework provides more accurate gradient estimation of the target model and better accuracies on benchmark classification datasets.
arXiv Detail & Related papers (2023-09-18T18:11:31Z)
- Tryage: Real-time, intelligent Routing of User Prompts to Large Language Models [1.0878040851637998]
With over 200,000 models in the Hugging Face ecosystem, users grapple with selecting and optimizing models to suit multifaceted data domains.
Here, we propose a context-aware routing system, Tryage, that leverages a language model router for optimal selection of expert models from a model library.
arXiv Detail & Related papers (2023-08-22T17:48:24Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate there is no single model that works best for all the cases.
By choosing an appropriate bias model, we can obtain a better robustness result than baselines with a more sophisticated model design.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- A linearized framework and a new benchmark for model selection for fine-tuning [112.20527122513668]
Fine-tuning from a collection of models pre-trained on different domains is emerging as a technique to improve test accuracy in the low-data regime.
We introduce two new baselines for model selection -- Label-Gradient and Label-Feature Correlation.
Our benchmark highlights accuracy gain with model zoo compared to fine-tuning Imagenet models.
arXiv Detail & Related papers (2021-01-29T21:57:15Z)
- Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration and how some of the lower-scoring models on standard benchmarks will perform the same as the best-performing models when trained on the same training data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.