A linearized framework and a new benchmark for model selection for
fine-tuning
- URL: http://arxiv.org/abs/2102.00084v1
- Date: Fri, 29 Jan 2021 21:57:15 GMT
- Title: A linearized framework and a new benchmark for model selection for
fine-tuning
- Authors: Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li,
Luca Zancato, Charless Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona
- Abstract summary: Fine-tuning from a collection of models pre-trained on different domains is emerging as a technique to improve test accuracy in the low-data regime.
We introduce two new baselines for model selection -- Label-Gradient and Label-Feature Correlation.
Our benchmark highlights the accuracy gain from using a model zoo compared to fine-tuning ImageNet models.
- Score: 112.20527122513668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-tuning from a collection of models pre-trained on different domains (a
"model zoo") is emerging as a technique to improve test accuracy in the
low-data regime. However, model selection, i.e. how to pre-select the right
model to fine-tune from a model zoo without performing any training, remains an
open topic. We use a linearized framework to approximate fine-tuning, and
introduce two new baselines for model selection -- Label-Gradient and
Label-Feature Correlation. Since all model selection algorithms in the
literature have been tested on different use-cases and never compared directly,
we introduce a new comprehensive benchmark for model selection comprising:
i) A model zoo of single and multi-domain models, and ii) Many target tasks.
Our benchmark highlights the accuracy gain from using a model zoo compared to
fine-tuning ImageNet models. We show that our model selection baselines can
select optimal models to fine-tune in a few selections and have the highest
ranking correlation with fine-tuning accuracy compared to existing algorithms.
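The abstract does not spell out how the Label-Feature Correlation baseline is computed; the sketch below is one plausible, minimal reading of such a score, measuring how well a candidate model's feature-space similarity over target examples aligns with their label similarity. The function name, the centering choices, and the `extract_features` / `model_zoo` helpers in the usage comment are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def label_feature_correlation(features, labels):
    """Score a pre-trained model for a target task without any training.

    Illustrative sketch: measure how well the feature-space similarity of
    target examples aligns with their label similarity (higher = better
    candidate for fine-tuning).

    features: (n, d) array of penultimate-layer features from the
              pre-trained model on n target-task examples.
    labels:   (n,) array of integer class labels.
    """
    # Feature kernel: pairwise similarity of examples in feature space.
    F = features - features.mean(axis=0)       # center the features
    K_feat = F @ F.T                            # (n, n)

    # Label kernel: similarity of (centered) one-hot label vectors.
    Y = np.eye(int(labels.max()) + 1)[labels]   # one-hot encode, (n, c)
    Y = Y - Y.mean(axis=0)
    K_label = Y @ Y.T                           # (n, n)

    # Alignment: cosine similarity between the two kernels.
    return np.sum(K_feat * K_label) / (np.linalg.norm(K_feat) * np.linalg.norm(K_label))

# Usage sketch: score every model in the zoo and fine-tune the top-ranked ones.
# scores = {name: label_feature_correlation(extract_features(m, x), y)
#           for name, m in model_zoo.items()}
```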
Related papers
- Stabilizing black-box model selection with the inflated argmax [8.52745154080651]
This paper presents a new approach to stabilizing model selection that leverages a combination of bagging and an "inflated" argmax operation.
Our method selects a small collection of models that all fit the data, and it is stable in that, with high probability, the removal of any training point will result in a collection of selected models that overlaps with the original collection.
In both settings, the proposed method yields stable and compact collections of selected models, outperforming a variety of benchmarks.
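The summary does not define the inflated argmax precisely; a minimal sketch, assuming it keeps every candidate whose bagged score is within a tolerance of the best, might look as follows. `score_fn`, `n_bags`, and `eps` are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def bagged_inflated_argmax(score_fn, candidates, data, n_bags=50, eps=0.05, seed=0):
    """Select a small, stable set of models rather than a single winner.

    Illustrative sketch: average each candidate's score over bootstrap
    resamples of the data (bagging), then keep every candidate whose
    average score is within `eps` of the best ("inflated" argmax).
    `score_fn(model, subset)` is an assumed, user-supplied evaluation.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    avg = np.zeros(len(candidates))
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)                        # bootstrap resample
        subset = [data[i] for i in idx]
        avg += np.array([score_fn(m, subset) for m in candidates])
    avg /= n_bags
    best = avg.max()
    return [m for m, s in zip(candidates, avg) if s >= best - eps]
```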
arXiv Detail & Related papers (2024-10-23T20:39:07Z)
- All models are wrong, some are useful: Model Selection with Limited Labels [49.62984196182567]
We introduce MODEL SELECTOR, a framework for label-efficient selection of pretrained classifiers.
We show that MODEL SELECTOR drastically reduces the need for labeled data while consistently picking the best or near-best performing model.
Our results further highlight the robustness of MODEL SELECTOR in model selection, as it reduces the labeling cost by up to 72.41% when selecting a near-best model.
arXiv Detail & Related papers (2024-10-17T14:45:56Z)
- Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild [84.57103623507082]
This paper introduces Model-GLUE, a holistic scaling guideline for Large Language Models.
Our work starts with a benchmarking of existing LLM scaling techniques, especially selective merging and variants of mixture.
Our methodology involves clustering mergeable models, selecting an optimal merging strategy, and integrating the clusters through a model mixture.
arXiv Detail & Related papers (2024-10-07T15:55:55Z)
- Enabling Small Models for Zero-Shot Classification through Model Label Learning [50.68074833512999]
We introduce a novel paradigm, Model Label Learning (MLL), which bridges the gap between models and their functionalities.
Experiments on seven real-world datasets validate the effectiveness and efficiency of MLL.
arXiv Detail & Related papers (2024-08-21T09:08:26Z)
- Towards Fundamentally Scalable Model Selection: Asymptotically Fast Update and Selection [40.85209520973634]
An ideal model selection scheme should support two operations efficiently over a large pool of candidate models.
Previous solutions to model selection require high computational complexity for at least one of these two operations.
We present Standardized Embedder, an empirical realization of isolated model embedding.
arXiv Detail & Related papers (2024-06-11T17:57:49Z)
- Model Selection with Model Zoo via Graph Learning [45.30615308692713]
We introduce TransferGraph, a novel framework that reformulates model selection as a graph learning problem.
We demonstrate TransferGraph's effectiveness in capturing essential model-dataset relationships, yielding up to a 32% improvement in correlation between predicted performance and the actual fine-tuning results compared to the state-of-the-art methods.
arXiv Detail & Related papers (2024-04-05T09:50:00Z)
- A Two-Phase Recall-and-Select Framework for Fast Model Selection [13.385915962994806]
We propose a two-phase (coarse-recall and fine-selection) model selection framework.
It aims to enhance the efficiency of selecting a robust model by leveraging the models' training performances on benchmark datasets.
The proposed methodology is demonstrated to select a high-performing model about 3x faster than conventional baseline methods.
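As a rough illustration of the coarse-recall / fine-selection split described above, a minimal sketch could look like the following, with `zoo_benchmarks`, `quick_finetune`, and `target_eval` as assumed helpers rather than the paper's actual components.

```python
def two_phase_select(zoo_benchmarks, quick_finetune, target_eval, k=5):
    """Two-phase model selection: coarse recall, then fine selection.

    Illustrative sketch based on the summary above:
      1) coarse recall  -- rank the zoo by its recorded benchmark/training
         performance (`zoo_benchmarks`: model name -> score) and cheaply
         keep the top-k candidates;
      2) fine selection -- run a short fine-tuning / evaluation pass
         (`quick_finetune`, `target_eval` are assumed helpers) only on
         those k candidates and return the best one.
    """
    # Phase 1: recall the k most promising models from cached scores.
    recalled = sorted(zoo_benchmarks, key=zoo_benchmarks.get, reverse=True)[:k]
    # Phase 2: pay the expensive evaluation cost only for the recalled models.
    return max(recalled, key=lambda name: target_eval(quick_finetune(name)))
```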
arXiv Detail & Related papers (2024-03-28T14:44:44Z)
- Budgeted Online Model Selection and Fine-Tuning via Federated Learning [26.823435733330705]
Online model selection involves selecting a model from a set of candidate models 'on the fly' to perform prediction on a stream of data.
The choice of candidate models therefore has a crucial impact on performance.
The present paper proposes an online federated model selection framework where a group of learners (clients) interacts with a server with sufficient memory.
Using the proposed algorithm, clients and the server collaborate to fine-tune models to adapt them to a non-stationary environment.
arXiv Detail & Related papers (2024-01-19T04:02:49Z)
- Anchor Points: Benchmarking Models with Much Fewer Examples [88.02417913161356]
In six popular language classification benchmarks, model confidence in the correct class on many pairs of points is strongly correlated across models.
We propose Anchor Point Selection, a technique to select small subsets of datasets that capture model behavior across the entire dataset.
Just several anchor points can be used to estimate model per-class predictions on all other points in a dataset with low mean absolute error.
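The exact Anchor Point Selection procedure is not given in this summary; the sketch below is a simple greedy stand-in that picks points whose confidence profiles are least correlated with the anchors chosen so far. The greedy rule and parameter names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def pick_anchor_points(conf, k=10, seed=0):
    """Pick k anchor examples that summarize model behaviour on a dataset.

    `conf` is an (n_models, n_points) array holding each model's confidence
    in the correct class. Points whose confidence profiles are strongly
    correlated across models are redundant, so this greedy pass (an assumed
    stand-in) keeps a small set of mutually dissimilar points as anchors.
    """
    rng = np.random.default_rng(seed)
    corr = np.corrcoef(conf.T)                    # (n_points, n_points) correlations
    anchors = [int(rng.integers(conf.shape[1]))]  # start from a random point
    for _ in range(k - 1):
        # Add the point least correlated with the anchors chosen so far.
        closest = np.abs(corr[:, anchors]).max(axis=1)
        anchors.append(int(np.argmin(closest)))
    return anchors

# A held-out model's per-class predictions on any other point can then be
# approximated from its predictions on the most correlated anchor point.
```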
arXiv Detail & Related papers (2023-09-14T17:45:51Z)
- Knowledge is a Region in Weight Space for Fine-tuned Language Models [48.589822853418404]
We study how the weight space and the underlying loss landscape of different models are interconnected.
We show that language models that have been finetuned on the same dataset form a tight cluster in the weight space, while models finetuned on different datasets from the same underlying task form a looser cluster.
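As a rough illustration of this weight-space clustering observation, a minimal sketch (not the paper's code) that contrasts within-dataset and across-dataset distances between flattened fine-tuned weights might look as follows.

```python
import numpy as np

def weight_space_distances(models_by_dataset):
    """Contrast within-dataset and across-dataset distances in weight space.

    `models_by_dataset` maps a dataset name to a list of flattened weight
    vectors, one per fine-tuning run. A tight within-dataset cluster shows
    up as a much smaller average within-group distance than across-group
    distance.
    """
    flat = [(name, np.asarray(w)) for name, runs in models_by_dataset.items() for w in runs]
    within, across = [], []
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            d = np.linalg.norm(flat[i][1] - flat[j][1])
            (within if flat[i][0] == flat[j][0] else across).append(d)
    return float(np.mean(within)), float(np.mean(across))
```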
arXiv Detail & Related papers (2023-02-09T18:59:18Z)