Likelihood Variance as Text Importance for Resampling Texts to Map Language Models
- URL: http://arxiv.org/abs/2505.15428v1
- Date: Wed, 21 May 2025 12:10:40 GMT
- Title: Likelihood Variance as Text Importance for Resampling Texts to Map Language Models
- Authors: Momose Oyama, Ryo Kishino, Hiroaki Yamagiwa, Hidetoshi Shimodaira
- Abstract summary: We propose a resampling method that selects important texts with weights proportional to the variance of log-likelihoods across models for each text. Our method significantly reduces the number of required texts while preserving the accuracy of KL divergence estimates.
- Score: 2.5999037208435705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the computational cost of constructing a model map, which embeds diverse language models into a common space for comparison via KL divergence. The map relies on log-likelihoods over a large text set, making the cost proportional to the number of texts. To reduce this cost, we propose a resampling method that selects important texts with weights proportional to the variance of log-likelihoods across models for each text. Our method significantly reduces the number of required texts while preserving the accuracy of KL divergence estimates. Experiments show that it achieves comparable performance to uniform sampling with about half as many texts, and also facilitates efficient incorporation of new models into an existing map. These results enable scalable and efficient construction of language model maps.
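The resampling scheme described in the abstract can be sketched as follows. This is a minimal illustrative reconstruction from the abstract alone, not the authors' released code: it assumes a `(n_models, n_texts)` array of per-text log-likelihoods, draws texts with probability proportional to their across-model variance, and returns importance weights so that weighted sample averages approximate averages over the full text set.

```python
import numpy as np

def resample_texts(loglik, n_sample, seed=0):
    """Resample text indices with probability proportional to the variance
    of log-likelihoods across models (a sketch of the abstract's method).

    loglik: (n_models, n_texts) array of per-text log-likelihoods.
    Returns sampled text indices and importance weights for unbiased
    estimation of corpus-level averages (e.g., KL divergence estimates).
    """
    rng = np.random.default_rng(seed)
    var = loglik.var(axis=0)            # per-text variance across models
    p = var / var.sum()                 # sampling probabilities
    idx = rng.choice(loglik.shape[1], size=n_sample, replace=True, p=p)
    # Weight w_i = 1 / (N * p_i): the mean of w_i * f(t_i) over the sample
    # estimates the mean of f(t) over all N texts.
    w = 1.0 / (loglik.shape[1] * p[idx])
    return idx, w
```

Texts whose log-likelihood is identical across all models receive zero weight and are never sampled, which is consistent with the intuition that such texts contribute nothing to distinguishing the models.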
Related papers
- Fast and Accurate Contextual Knowledge Extraction Using Cascading Language Model Chains and Candidate Answers [0.0]
We propose, implement, and apply the Language Model Chain (LMC) algorithm.
In this algorithm, a language model's response to a given prompt is considered correct only if it exists in the collection of possible answers.
We used the LMC algorithm to extract patient dates of birth from medical documents.
arXiv Detail & Related papers (2025-07-21T14:31:16Z)
- Mapping 1,000+ Language Models via the Log-Likelihood Vector [2.5999037208435705]
We use log-likelihood vectors computed on a predefined text set as model features to compare autoregressive language models at scale.
Our method is highly scalable, with computational cost growing linearly in both the number of models and text samples.
Applying this method to over 1,000 language models, we constructed a "model map," providing a new perspective on large-scale model analysis.
arXiv Detail & Related papers (2025-02-22T10:23:36Z)
- Model-diff: A Tool for Comparative Study of Language Models in the Input Space [34.680890752084004]
We propose a new model comparative analysis setting that considers a large input space where brute-force enumeration would be infeasible.
Experiments reveal for the first time the quantitative prediction differences between LMs in a large input space, potentially facilitating model analysis for applications such as detecting model plagiarism.
arXiv Detail & Related papers (2024-12-13T00:06:25Z)
- Split and Rephrase with Large Language Models [2.499907423888049]
The Split and Rephrase (SPRP) task consists of splitting complex sentences into a sequence of shorter grammatical sentences.
We evaluate large language models on the task, showing that they can provide large improvements over the state of the art on the main metrics.
arXiv Detail & Related papers (2023-12-18T10:16:37Z)
- Concept-Guided Chain-of-Thought Prompting for Pairwise Comparison Scoring of Texts with Large Language Models [3.656114607436271]
Existing text scoring methods require a large corpus, struggle with short texts, or require hand-labeled data.
We develop a text scoring framework that leverages generative large language models (LLMs).
We apply this approach to better understand speech reflecting aversion to specific political parties on Twitter.
arXiv Detail & Related papers (2023-10-18T15:34:37Z)
- Contextual Distortion Reveals Constituency: Masked Language Models are Implicit Parsers [7.558415495951758]
We propose a novel method for extracting parse trees from masked language models (LMs).
Our method computes a score for each span based on the distortion of contextual representations resulting from linguistic perturbations.
Our method consistently outperforms previous state-of-the-art methods on English with masked LMs, and also demonstrates superior performance in a multilingual setting.
arXiv Detail & Related papers (2023-06-01T13:10:48Z)
- Tailoring Language Generation Models under Total Variation Distance [55.89964205594829]
The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimizing method.
We develop practical bounds to apply it to language generation.
We introduce the TaiLr objective that balances the tradeoff of estimating TVD.
arXiv Detail & Related papers (2023-02-26T16:32:52Z)
- Ensemble Transfer Learning for Multilingual Coreference Resolution [60.409789753164944]
A problem that frequently occurs when working with a non-English language is the scarcity of annotated training data.
We design a simple but effective ensemble-based framework that combines various transfer learning techniques.
We also propose a low-cost TL method that bootstraps coreference resolution models by utilizing Wikipedia anchor texts.
arXiv Detail & Related papers (2023-01-22T18:22:55Z)
- Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval [109.62363167257664]
We propose a generative model for learning multilingual text embeddings.
Our model operates on parallel data in $N$ languages.
We evaluate this method on a suite of tasks including semantic similarity, bitext mining, and cross-lingual question retrieval.
arXiv Detail & Related papers (2022-12-21T02:41:40Z)
- Classifiers are Better Experts for Controllable Text Generation [63.17266060165098]
We show that the proposed method significantly outperforms recent PPLM, GeDi, and DExperts on PPL and sentiment accuracy based on the external classifier of generated texts.
At the same time, it is also easier to implement and tune, and has significantly fewer restrictions and requirements.
arXiv Detail & Related papers (2022-05-15T12:58:35Z)
- Fine-tuning GPT-3 for Russian Text Summarization [77.34726150561087]
This paper showcases ruGPT3's ability to summarize texts, fine-tuning it on a corpus of Russian news articles with their corresponding human-written summaries.
We evaluate the resulting texts with a set of metrics, showing that our solution can surpass the state-of-the-art model's performance without additional changes in architecture or loss function.
arXiv Detail & Related papers (2021-08-07T19:01:40Z)
- Parameter Space Factorization for Zero-Shot Learning across Tasks and Languages [112.65994041398481]
We propose a Bayesian generative model for the space of neural parameters.
We infer the posteriors over such latent variables based on data from seen task-language combinations.
Our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods.
arXiv Detail & Related papers (2020-01-30T16:58:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.