Model Whisper: Steering Vectors Unlock Large Language Models' Potential in Test-time
- URL: http://arxiv.org/abs/2512.04748v1
- Date: Thu, 04 Dec 2025 12:36:16 GMT
- Title: Model Whisper: Steering Vectors Unlock Large Language Models' Potential in Test-time
- Authors: Xinyue Kang, Diwei Shi, Li Chen
- Abstract summary: We introduce a lightweight component, Test-Time Steering Vectors (TTSV), which is prepended to the input while keeping the model's parameters entirely frozen. TTSV is both lightweight and highly efficient to optimize, making it a true plug-and-play enhancement. Our approach exhibits robust generalization, with its steering vectors proving highly transferable across diverse tasks.
- Score: 6.741914038966904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is a critical challenge to efficiently unlock the powerful reasoning potential of Large Language Models (LLMs) for specific tasks or new distributions. Existing test-time adaptation methods often require tuning model parameters, which is not only computationally expensive but also risks degrading the model's pre-existing abilities. To address this, we introduce a lightweight component, Test-Time Steering Vectors (TTSV), which is prepended to the input while keeping the LLM's parameters entirely frozen. By optimizing the TTSV on test data to minimize the model's output entropy, we steer the model towards an internal state of higher confidence, activating its inherent abilities most relevant to the current task. TTSV is both lightweight and highly efficient to optimize, making it a true plug-and-play enhancement. Extensive experiments validate our approach's effectiveness on both base models and reasoning-enhanced models. For instance, on the MATH500 task, TTSV achieves a 45.88% relative performance gain on the Qwen2.5-Math-7B model and a 16.22% relative gain on the Qwen3-4B model. Furthermore, our approach exhibits robust generalization, with its steering vectors proving highly transferable across diverse tasks.
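As a rough illustration of the mechanism described in the abstract, the toy sketch below prepends trainable vectors to the input embeddings of a frozen stand-in LM and optimizes them to minimize output entropy on unlabeled test inputs. All module shapes, hyperparameters, and names (`ToyLM`, `steer`) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of Test-Time Steering Vectors (TTSV): trainable vectors
# prepended to the input embeddings of a frozen LM, optimized at test time to
# minimize the entropy of the model's output distribution. All names and
# hyperparameters are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, N_STEER = 100, 32, 4

class ToyLM(nn.Module):
    """Stand-in for a pretrained LLM (e.g. Qwen2.5-Math-7B in the paper)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward_from_embeddings(self, emb):
        return self.head(self.encoder(emb))  # (batch, seq, vocab)

model = ToyLM()
model.requires_grad_(False)  # the LLM's parameters stay entirely frozen

# The only trainable component: steering vectors prepended to the input.
steer = nn.Parameter(torch.randn(1, N_STEER, DIM) * 0.02)
opt = torch.optim.Adam([steer], lr=1e-2)

test_batch = torch.randint(0, VOCAB, (8, 16))  # unlabeled test inputs

for step in range(50):
    emb = model.embed(test_batch)
    emb = torch.cat([steer.expand(emb.size(0), -1, -1), emb], dim=1)
    logits = model.forward_from_embeddings(emb)[:, -1]  # next-token logits
    probs = torch.softmax(logits, dim=-1)
    # Entropy-minimization objective: steer the model toward confident outputs.
    entropy = -(probs * torch.log(probs + 1e-9)).sum(-1).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()

print(f"output entropy after test-time optimization: {entropy.item():.3f}")
```

Per the abstract, the optimized vectors transfer across tasks, so in this sketch `steer` could in principle be reused on a different test batch without re-optimization.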
Related papers
- Confidence-Driven Multi-Scale Model Selection for Cost-Efficient Inference [10.009730627424629]
Large Language Models (LLMs) have revolutionized inference across diverse natural language tasks. We propose a confidence-driven strategy that dynamically selects the most suitable model based on confidence estimates.
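One plausible reading of such a strategy is a confidence-gated cascade, sketched below: answer with a cheap model when its confidence estimate clears a threshold, otherwise escalate. The confidence measure, threshold, and function names are invented for illustration, not the paper's exact recipe.

```python
# Hedged sketch of confidence-driven model selection (a cascade).
import math

def answer_with_confidence(model_fn, prompt):
    """model_fn returns (answer, token_logprobs); confidence = mean token prob."""
    answer, logprobs = model_fn(prompt)
    confidence = math.exp(sum(logprobs) / max(len(logprobs), 1))
    return answer, confidence

def cascade(prompt, small_model, large_model, threshold=0.8):
    answer, conf = answer_with_confidence(small_model, prompt)
    if conf >= threshold:
        return answer, "small"          # cheap path: confident enough
    answer, _ = answer_with_confidence(large_model, prompt)
    return answer, "large"              # fallback: pay for the big model

# Toy stand-ins for real LLM calls.
small = lambda p: ("42", [-0.05, -0.1])
large = lambda p: ("42, verified", [-0.2])
print(cascade("What is 6*7?", small, large))
```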
arXiv Detail & Related papers (2026-02-25T16:38:03Z) - Efficient Test-Time Scaling for Small Vision-Language Models [14.654047034885288]
Small Vision-Language Models (VLMs) provide a computationally efficient alternative to larger models. Existing methods are typically computationally demanding, contradicting the resource-efficient design goals of small models. We propose two novel and efficient test-time scaling strategies that leverage the model-internal features rather than external supervision.
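Since the summary does not name the two strategies, the snippet below shows just one plausible instance of scaling with internal signals: best-of-N selection ranked by the model's own average token log-probability, with no external verifier. Everything here is an assumption for illustration.

```python
# One plausible instance of test-time scaling from model-internal features.
import random

def sample_candidate(prompt, rng):
    """Stand-in for sampling one answer plus its per-token logprobs."""
    answer = f"candidate-{rng.randint(0, 999)}"
    logprobs = [rng.uniform(-2.0, -0.01) for _ in range(10)]
    return answer, logprobs

def best_of_n(prompt, n=8, seed=0):
    rng = random.Random(seed)
    candidates = [sample_candidate(prompt, rng) for _ in range(n)]
    # Select the candidate the model itself finds most likely (internal signal).
    answer, _ = max(candidates, key=lambda c: sum(c[1]) / len(c[1]))
    return answer

print(best_of_n("Describe the image."))
```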
arXiv Detail & Related papers (2025-10-03T23:49:06Z) - Accelerated Test-Time Scaling with Model-Free Speculative Sampling [58.69141724095398]
We introduce STAND (STochastic Adaptive N-gram Drafting), a novel model-free speculative decoding approach. We show that STAND reduces inference latency by 60-65% compared to standard autoregressive decoding. As a model-free approach, STAND can be applied to any existing language model without additional training.
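A toy sketch in the spirit of model-free n-gram drafting follows: continuations are drafted from an n-gram table built over previously seen text, and a stand-in verifier accepts a prefix of them. The table construction and acceptance rule are simplifications, not STAND's actual algorithm.

```python
# Hedged sketch of model-free speculative decoding with an n-gram drafter.
from collections import defaultdict
import random

def build_bigram_table(tokens):
    table = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        table[a].append(b)  # remember observed continuations
    return table

def draft(table, last_token, k, rng):
    out = []
    for _ in range(k):
        options = table.get(last_token)
        if not options:
            break
        last_token = rng.choice(options)  # stochastic draft
        out.append(last_token)
    return out

def verify(drafted, rng, accept_prob=0.7):
    """Toy stand-in for target-model verification of drafted tokens."""
    accepted = []
    for tok in drafted:
        if rng.random() > accept_prob:
            break  # first rejection ends the speculative run
        accepted.append(tok)
    return accepted

rng = random.Random(0)
history = "the cat sat on the mat and the cat ran".split()
table = build_bigram_table(history)
drafted = draft(table, "the", k=4, rng=rng)
print("drafted:", drafted, "accepted:", verify(drafted, rng))
```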
arXiv Detail & Related papers (2025-06-05T07:31:18Z) - EfficientLLaVA:Generalizable Auto-Pruning for Large Vision-language Models [64.18350535770357]
We propose an automatic pruning method for large vision-language models to enhance the efficiency of multimodal reasoning. Our approach leverages only a small number of samples to search for the desired pruning policy. We conduct extensive experiments on the ScienceQA, VizWiz, MM-Vet, and LLaVA-Bench datasets for the task of visual question answering.
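The blurb suggests a search over pruning policies scored on only a few samples; the toy loop below illustrates that shape with an invented proxy objective. It is not the paper's actual search procedure.

```python
# Illustrative few-sample search over per-layer pruning ratios.
import random

rng = random.Random(0)

def proxy_accuracy(policy, samples):
    """Stand-in for evaluating a pruned model on a handful of proxy samples."""
    sparsity = sum(policy) / len(policy)
    return 0.9 - 0.5 * sparsity + rng.uniform(-0.02, 0.02)  # toy trade-off

samples = list(range(16))  # the "small number of samples" used for the search

def score(policy, target_sparsity=0.4):
    # Reward proxy accuracy, penalize missing the target overall sparsity.
    return proxy_accuracy(policy, samples) - abs(sum(policy) / len(policy) - target_sparsity)

candidates = [[rng.uniform(0.0, 0.8) for _ in range(8)] for _ in range(50)]
best = max(candidates, key=score)
print("per-layer pruning ratios:", [round(p, 2) for p in best])
```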
arXiv Detail & Related papers (2025-03-19T16:07:04Z) - A Training Rate and Survival Heuristic for Inference and Robustness Evaluation (TRASHFIRE) [1.622320874892682]
This work addresses the problem of understanding and predicting how particular model hyperparameters influence the performance of a model in the presence of an adversary.
The proposed approach uses survival models, worst-case examples, and a cost-aware analysis to precisely and accurately reject a particular model change.
Using the proposed methodology, we show that ResNet is hopeless against even the simplest of white-box attacks.
arXiv Detail & Related papers (2024-01-24T19:12:37Z) - Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models [21.17021844323919]
We introduce Astraios, a suite of 28 instruction-tuned OctoCoder models using 7 tuning methods and 4 model sizes up to 16 billion parameters.
We find that FFT leads to the best downstream performance across all scales, and PEFT methods differ significantly in their efficacy based on the model scale.
arXiv Detail & Related papers (2024-01-01T15:30:19Z) - Pruning Large Language Models via Accuracy Predictor [0.0]
Large language models (LLMs) containing tens of billions of parameters (or even more) have demonstrated impressive capabilities in various NLP tasks.
We propose a novel pruning approach: first, a training set of architecture-accuracy pairs is built; then, a non-neural model is trained on it as an accuracy predictor.
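A hedged sketch of that two-step recipe: fit a non-neural regressor (here scikit-learn's gradient boosting, an assumed choice) on architecture-accuracy pairs, then rank candidate pruned architectures by predicted accuracy instead of evaluating each one. Features and sizes are toy assumptions.

```python
# Pruning guided by a non-neural accuracy predictor (illustrative).
import random
from sklearn.ensemble import GradientBoostingRegressor

rng = random.Random(0)

def toy_true_accuracy(keep_ratios):
    # Stand-in for actually evaluating a pruned model.
    return 0.6 + 0.3 * sum(keep_ratios) / len(keep_ratios) + rng.uniform(-0.01, 0.01)

# 1) Build a small training set of architecture -> accuracy pairs.
train_archs = [[rng.uniform(0.3, 1.0) for _ in range(6)] for _ in range(40)]
train_accs = [toy_true_accuracy(a) for a in train_archs]

# 2) Train the (non-neural) accuracy predictor.
predictor = GradientBoostingRegressor().fit(train_archs, train_accs)

# 3) Score many candidate pruning policies cheaply with the predictor.
candidates = [[rng.uniform(0.3, 1.0) for _ in range(6)] for _ in range(200)]
budget_ok = [c for c in candidates if sum(c) / len(c) <= 0.6]  # sparsity budget
best = max(budget_ok, key=lambda c: predictor.predict([c])[0])
print("chosen keep-ratios:", [round(x, 2) for x in best])
```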
arXiv Detail & Related papers (2023-09-18T06:38:24Z) - eP-ALM: Efficient Perceptual Augmentation of Language Models [70.47962271121389]
We propose to direct effort toward efficient adaptation of existing models, augmenting Language Models with perception.
Existing approaches for adapting pretrained models for vision-language tasks still rely on several key components that hinder their efficiency.
We show that by freezing more than 99% of total parameters, training only one linear projection layer, and prepending only one trainable token, our approach (dubbed eP-ALM) significantly outperforms other baselines on VQA and Captioning.
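The summary pins down the trainable budget precisely, so a toy version is easy to sketch: freeze everything, train one linear projection plus one prepended token. The stand-in modules and shapes below are assumptions, not eP-ALM's architecture.

```python
# Minimal sketch of the eP-ALM recipe as summarized above.
import torch
import torch.nn as nn

torch.manual_seed(0)
VIS_DIM, LM_DIM, VOCAB = 64, 32, 100

vision_encoder = nn.Linear(16, VIS_DIM).requires_grad_(False)   # frozen stand-in
lm_embed = nn.Embedding(VOCAB, LM_DIM).requires_grad_(False)    # frozen
lm_body = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(LM_DIM, nhead=4, batch_first=True), 2
).requires_grad_(False)                                         # frozen
lm_head = nn.Linear(LM_DIM, VOCAB).requires_grad_(False)        # frozen

# The only trainable pieces (well under 1% of a real model's parameters):
proj = nn.Linear(VIS_DIM, LM_DIM)                       # one linear projection
soft_token = nn.Parameter(torch.randn(1, 1, LM_DIM) * 0.02)  # one trainable token
opt = torch.optim.Adam(list(proj.parameters()) + [soft_token], lr=1e-3)

image = torch.randn(4, 16)                              # toy image features
text = torch.randint(0, VOCAB, (4, 10))
targets = torch.randint(0, VOCAB, (4,))

for _ in range(20):
    vis = proj(vision_encoder(image)).unsqueeze(1)      # (B, 1, LM_DIM)
    tok = soft_token.expand(4, -1, -1)
    seq = torch.cat([tok, vis, lm_embed(text)], dim=1)
    logits = lm_head(lm_body(seq))[:, -1]               # predict next token
    loss = nn.functional.cross_entropy(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"loss: {loss.item():.3f}")
```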
arXiv Detail & Related papers (2023-03-20T19:20:34Z) - METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals [151.3601429216877]
We present an efficient method of pretraining large-scale autoencoding language models using training signals generated by an auxiliary model.
We propose a recipe, namely the "Model generated dEnoising TRaining Objective" (METRO).
The resultant models, METRO-LM, consisting of up to 5.4 billion parameters, achieve new state-of-the-art on the GLUE, SuperGLUE, and SQuAD benchmarks.
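A toy sketch of the general shape of training with model-generated signals (an ELECTRA-style setup, which METRO builds on): an auxiliary generator corrupts tokens, and the main model learns to flag the replacements. This is a simplified illustration, not METRO's full recipe.

```python
# Toy model-generated denoising pretraining (ELECTRA-style shape).
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM = 50, 32
aux_generator = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))
main_model = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, 1))
opt = torch.optim.Adam(main_model.parameters(), lr=1e-3)

for _ in range(30):
    tokens = torch.randint(0, VOCAB, (8, 12))
    mask = torch.rand(tokens.shape) < 0.15          # corrupt 15% of positions
    with torch.no_grad():                           # generator only supplies signals
        proposals = aux_generator(tokens).argmax(-1)
    corrupted = torch.where(mask, proposals, tokens)
    is_replaced = (corrupted != tokens).float()     # training signal from generator
    logits = main_model(corrupted).squeeze(-1)      # per-token replaced/original score
    loss = nn.functional.binary_cross_entropy_with_logits(logits, is_replaced)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"discriminator loss: {loss.item():.3f}")
```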
arXiv Detail & Related papers (2022-04-13T21:39:15Z) - Sparse MoEs meet Efficient Ensembles [49.313497379189315]
We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E$^3$), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble.
arXiv Detail & Related papers (2021-10-07T11:58:35Z) - MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but also lead to huge computation cost.
We explore accelerating large-model inference via conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
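A minimal sketch of the core idea: partition a dense FFN's hidden neurons into equal-size expert groups and keep only the top-k groups per input, leaving the parameter count unchanged. The grouping and routing below are crude assumptions (the paper clusters neurons and trains a router), and the code masks rather than actually skips the unused experts.

```python
# Crude MoEfication sketch: a dense FFN split into expert groups.
import torch
import torch.nn as nn

torch.manual_seed(0)
D_MODEL, D_FF, N_EXPERTS, TOP_K = 16, 64, 8, 2
w_in = nn.Linear(D_MODEL, D_FF)
w_out = nn.Linear(D_FF, D_MODEL)
chunk = D_FF // N_EXPERTS  # neurons per expert, same parameters overall

def moefied_ffn(x):
    h = torch.relu(w_in(x))                 # (B, D_FF) sparse activations
    groups = h.view(-1, N_EXPERTS, chunk)
    scores = groups.sum(-1)                 # proxy router: group activation mass
    topk = scores.topk(TOP_K, dim=-1).indices
    keep = torch.zeros_like(scores).scatter_(1, topk, 1.0)
    # Zero out non-selected experts; a real implementation skips their FLOPs.
    h_sparse = (groups * keep.unsqueeze(-1)).view(-1, D_FF)
    return w_out(h_sparse)

x = torch.randn(4, D_MODEL)
dense = w_out(torch.relu(w_in(x)))
print("max deviation from dense FFN:", (moefied_ffn(x) - dense).abs().max().item())
```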
arXiv Detail & Related papers (2021-10-05T02:14:38Z)