Comparing Foundation Models using Data Kernels
- URL: http://arxiv.org/abs/2305.05126v3
- Date: Sun, 7 Jan 2024 15:28:56 GMT
- Title: Comparing Foundation Models using Data Kernels
- Authors: Brandon Duderstadt and Hayden S. Helm and Carey E. Priebe
- Abstract summary: We present a methodology for directly comparing the embedding space geometry of foundation models.
Our methodology is grounded in random graph theory and enables valid hypothesis testing of embedding similarity.
We show how our framework can induce a manifold of models equipped with a distance function that correlates strongly with several downstream metrics.
- Score: 13.099029073152257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in self-supervised learning and neural network scaling have
enabled the creation of large models, known as foundation models, which can be
easily adapted to a wide range of downstream tasks. The current paradigm for
comparing foundation models involves evaluating them with aggregate metrics on
various benchmark datasets. This method of model comparison is heavily
dependent on the chosen evaluation metric, which makes it unsuitable for
situations where the ideal metric is either not obvious or unavailable. In this
work, we present a methodology for directly comparing the embedding space
geometry of foundation models, which facilitates model comparison without the
need for an explicit evaluation metric. Our methodology is grounded in random
graph theory and enables valid hypothesis testing of embedding similarity on a
per-datum basis. Further, we demonstrate how our methodology can be extended to
facilitate population level model comparison. In particular, we show how our
framework can induce a manifold of models equipped with a distance function
that correlates strongly with several downstream metrics. We remark on the
utility of this population level model comparison as a first step towards a
taxonomic science of foundation models.
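To make the methodology concrete, the sketch below illustrates one way a data-kernel comparison could be realized in Python. It is a hedged simplification under stated assumptions, not the authors' implementation: the data kernel is taken to be a k-nearest-neighbor graph over each model's embeddings of a shared dataset, per-datum similarity is measured as neighbor-set overlap, and the population-level "manifold of models" is obtained by classical multidimensional scaling of pairwise disagreements.

```python
# A minimal sketch of data-kernel comparison, NOT the authors' implementation.
# Assumptions: each model embeds the SAME n data points; the data kernel is
# realized as a k-NN graph; per-datum similarity is neighbor-set overlap; the
# paper's random-graph hypothesis tests are omitted here.
import numpy as np
from sklearn.manifold import MDS
from sklearn.neighbors import NearestNeighbors

def data_kernel(X: np.ndarray, k: int = 10) -> np.ndarray:
    """Binary k-NN adjacency matrix over one model's embeddings."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    A = nn.kneighbors_graph(X, mode="connectivity").toarray()
    np.fill_diagonal(A, 0.0)  # each point is its own nearest neighbor; drop it
    return A

def per_datum_disagreement(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Per-datum fraction of nearest neighbors the two models disagree on."""
    degree = A.sum(axis=1)
    shared = (A * B).sum(axis=1)
    return 1.0 - shared / np.maximum(degree, 1.0)

def model_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Population-level distance between two models' embedding geometries."""
    return float(per_datum_disagreement(A, B).mean())

def model_manifold(embeddings: list, k: int = 10) -> np.ndarray:
    """Embed a collection of models as points via their pairwise distances.

    embeddings[m] is an (n, d_m) array; dimensions d_m may differ per model,
    since only neighborhood structure, not coordinates, is compared.
    """
    kernels = [data_kernel(X, k) for X in embeddings]
    m = len(kernels)
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            D[i, j] = D[j, i] = model_distance(kernels[i], kernels[j])
    # Classical MDS turns pairwise distances into a low-dimensional layout,
    # a stand-in for the paper's "manifold of models".
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(D)
```

Positions on the resulting layout could then be correlated with downstream metrics, echoing the population-level comparison described in the abstract; a formal per-datum similarity test would additionally need a null distribution over random graphs, which this sketch does not attempt.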
Related papers
- Exploring Model Kinship for Merging Large Language Models [52.01652098827454]
We introduce model kinship, the degree of similarity or relatedness between Large Language Models.
We find that model kinship is related to the performance gains achieved after model merging.
We propose a new model merging strategy: Top-k Greedy Merging with Model Kinship, which can yield better performance on benchmark datasets.
arXiv Detail & Related papers (2024-10-16T14:29:29Z)
- Embedding-based statistical inference on generative models [10.948308354932639]
We extend results related to embedding-based representations of generative models to classical statistical inference settings.
We demonstrate that using the perspective space as the basis of a notion of "similar" is effective for multiple model-level inference tasks.
arXiv Detail & Related papers (2024-10-01T22:28:39Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We conduct empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
Fine-tuned models are often available when their training data is not, which creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space (see the weight-averaging sketch after this list).
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate that no single model works best in all cases.
By choosing an appropriate bias model, we can obtain better robustness than baselines with more sophisticated model designs.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- Model Comparison in Approximate Bayesian Computation [0.456877715768796]
A common problem in natural sciences is the comparison of competing models in the light of observed data.
The standard Bayesian framework relies on the calculation of likelihood functions, which are intractable for most models used in practice.
I propose a new, efficient method to perform Bayesian model comparison in approximate Bayesian computation (ABC).
arXiv Detail & Related papers (2022-03-15T10:24:16Z)
- A moment-matching metric for latent variable generative models [0.0]
Per Goodhart's law, when a metric becomes a target it ceases to be a good metric.
We propose a new metric for model comparison or regularization that relies on moments.
It is common to draw samples from the fitted distribution when evaluating latent variable models.
arXiv Detail & Related papers (2021-10-04T17:51:08Z)
- Distributional Depth-Based Estimation of Object Articulation Models [21.046351215949525]
We propose a method that efficiently learns distributions over articulation model parameters directly from depth images.
Our core contributions include a novel representation for distributions over rigid body transformations.
We introduce a novel deep learning based approach, DUST-net, that performs category-independent articulation model estimation.
arXiv Detail & Related papers (2021-08-12T17:44:51Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
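As referenced in the "Dataless Knowledge Fusion by Merging Weights of Language Models" entry above, the simplest instance of merging models in parameter space is uniform weight averaging. The sketch below shows only that baseline, under the assumption of identical architectures; the cited paper's fusion method is more sophisticated and is not reproduced here.

```python
# Hedged baseline sketch of parameter-space merging: uniform weight averaging
# of same-architecture checkpoints. This is NOT the cited paper's method.
import torch

def average_state_dicts(state_dicts: list) -> dict:
    """Uniformly average parameters across models with identical keys/shapes."""
    merged = {}
    for name, param in state_dicts[0].items():
        stacked = torch.stack([sd[name].float() for sd in state_dicts])
        # Cast back so integer buffers (e.g., BatchNorm counters) keep dtype.
        merged[name] = stacked.mean(dim=0).to(param.dtype)
    return merged

# Hypothetical usage with two checkpoints of the same architecture:
# merged = average_state_dicts([torch.load("a.pt"), torch.load("b.pt")])
# model.load_state_dict(merged)
```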
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.