Test-Time Personalization with Meta Prompt for Gaze Estimation
- URL: http://arxiv.org/abs/2401.01577v3
- Date: Tue, 12 Mar 2024 19:06:06 GMT
- Title: Test-Time Personalization with Meta Prompt for Gaze Estimation
- Authors: Huan Liu, Julia Qi, Zhenhao Li, Mohammad Hassanpour, Yang Wang,
Konstantinos Plataniotis, Yuanhao Yu
- Abstract summary: We take inspiration from recent advances in Natural Language Processing (NLP) by updating a negligible number of parameters, "prompts", at test time.
We propose to meta-learn the prompt to ensure that its updates align with the goal.
Our experiments show that the meta-learned prompt can be effectively adapted even with a simple symmetry loss.
- Score: 23.01057994927244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent remarkable achievements in gaze estimation, efficient
and accurate personalization of gaze estimation without labels is a practical
problem that is rarely touched on in the literature. To achieve efficient
personalization, we take inspiration from recent advances in Natural Language
Processing (NLP) and update only a negligible number of parameters, the
"prompts", at test time. Specifically, the prompt is attached without
perturbing the original network and can contain less than 1% of a ResNet-18's
parameters. Our experiments show the high efficiency of this prompt tuning
approach: the proposed method adapts 10 times faster than the compared
methods. However, updating the prompt for personalized gaze estimation without
labels is non-trivial. At test time, it is essential to ensure that minimizing
a particular unsupervised loss also minimizes the gaze estimation error. To
address this difficulty, we propose to meta-learn the prompt so that its
updates align with this goal. Our experiments show that the meta-learned
prompt can be effectively adapted even with a simple symmetry loss. In
addition, we conduct four cross-dataset validations that show the remarkable
advantages of the proposed method. Code is available at
https://github.com/hmarkamcan/TPGaze.
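The mechanics described in the abstract lend themselves to a compact sketch. Below is a minimal PyTorch illustration of test-time prompt tuning with a symmetry loss, assuming a ResNet-18-style gaze backbone; the `stem`/`head` split, the prompt placement, and all hyperparameters are illustrative assumptions, and the meta-learning stage that produces a good prompt initialization is omitted.

```python
# Minimal sketch of test-time prompt tuning for gaze estimation (PyTorch).
# `backbone.stem`/`backbone.head` and the prompt shape are assumptions,
# not the authors' exact design.
import torch
import torch.nn as nn

class PromptedGazeNet(nn.Module):
    def __init__(self, backbone: nn.Module, prompt_shape=(1, 64, 14, 14)):
        super().__init__()
        self.backbone = backbone              # pre-trained, kept frozen
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        # The prompt is a small learnable tensor added to an intermediate
        # feature map, so the original weights are never perturbed.
        self.prompt = nn.Parameter(torch.zeros(prompt_shape))

    def forward(self, x):
        feat = self.backbone.stem(x)              # assumed API
        feat = feat + self.prompt                 # attach the prompt
        return self.backbone.head(feat)           # -> (pitch, yaw)

def symmetry_loss(model, image):
    """Unsupervised loss: a mirrored face should yield a mirrored gaze,
    i.e. the same pitch and a negated yaw."""
    g = model(image)                              # shape (B, 2)
    g_flip = model(torch.flip(image, dims=[-1]))  # horizontal flip
    target = torch.stack([g_flip[:, 0], -g_flip[:, 1]], dim=1)
    return nn.functional.l1_loss(g, target)

def personalize(model, unlabeled_images, steps=10, lr=1e-3):
    # Only the prompt (<1% of the parameters) is updated at test time.
    opt = torch.optim.Adam([model.prompt], lr=lr)
    for _ in range(steps):
        loss = symmetry_loss(model, unlabeled_images)
        opt.zero_grad()
        loss.backward()
        opt.step()
```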
Related papers
- ELF-UA: Efficient Label-Free User Adaptation in Gaze Estimation [14.265464822002924]
Our goal is to provide a personalized gaze estimation model specifically adapted to a target user.
Previous work requires some labeled images of the target person to fine-tune the model at test time.
Our proposed method uses a meta-learning approach to learn how to adapt to a new user with only a few unlabeled images.
arXiv Detail & Related papers (2024-06-13T13:00:33Z)
- GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Model [66.86722460851968]
We propose a gold label-agnostic prompt evaluation (GLaPE) to alleviate the dependence on gold labels.
We show that GLaPE provides evaluations consistent with accuracy, even in the absence of gold labels.
On six popular reasoning tasks, our GLaPE-based prompt optimization yields effective prompts comparable to accuracy-based ones.
arXiv Detail & Related papers (2024-02-04T08:57:54Z)
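A hedged sketch of the idea in the GLaPE entry above: evaluate a prompt by the self-consistency of sampled answers rather than by accuracy against gold labels. The `sample_answer` callable stands in for an arbitrary LLM call and is hypothetical.

```python
# Label-free prompt scoring via self-consistency (illustrative only).
from collections import Counter

def self_consistency_score(prompt, questions, sample_answer, n_samples=8):
    """Average agreement rate of the majority answer, in [0, 1]."""
    scores = []
    for q in questions:
        answers = [sample_answer(prompt, q) for _ in range(n_samples)]
        majority_count = Counter(answers).most_common(1)[0][1]
        scores.append(majority_count / n_samples)
    return sum(scores) / len(scores)

# Prompts whose sampled answers agree more often get higher scores,
# which can then drive prompt optimization without any gold labels.
```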
- Revisiting the Power of Prompt for Visual Tuning [50.11465784194896]
This study explores how the correlation between prompts and patch tokens evolves over the course of training.
Based on the observation that prompt tokens tend to share high mutual information with patch tokens, we propose initializing prompts with downstream token prototypes.
Our method significantly advances adaptation for self-supervised pretraining, achieving task performance gains of at least 10% to 30%.
arXiv Detail & Related papers (2024-02-04T07:49:02Z)
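The prototype initialization described in the entry above can be sketched as follows, assuming a ViT-style backbone; `vit.patch_tokens` is a hypothetical helper returning (B, N, D) patch embeddings, and plain k-means is one plausible way to form prototypes.

```python
# Initialize learnable prompts from downstream patch-token prototypes.
import torch

@torch.no_grad()
def init_prompts_from_prototypes(vit, loader, num_prompts=10, iters=20):
    # Collect patch-token embeddings from downstream data.
    tokens = torch.cat([vit.patch_tokens(x).flatten(0, 1) for x, _ in loader])
    # Simple k-means to obtain `num_prompts` token prototypes.
    centers = tokens[torch.randperm(len(tokens))[:num_prompts]].clone()
    for _ in range(iters):
        assign = torch.cdist(tokens, centers).argmin(dim=1)
        for k in range(num_prompts):
            members = tokens[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(dim=0)
    return centers  # use as the initial value of the learnable prompts
```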
- SLPT: Selective Labeling Meets Prompt Tuning on Label-Limited Lesion Segmentation [57.37875162629063]
We propose a framework that combines selective labeling with prompt tuning to boost performance when labels are limited.
We evaluate our method on liver tumor segmentation and achieve state-of-the-art performance, outperforming traditional fine-tuning while using only 6% of the tunable parameters.
arXiv Detail & Related papers (2023-08-09T12:22:49Z)
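As a rough illustration of the selective-labeling half of the SLPT entry above: spend a small annotation budget on the samples the current model is least certain about, then tune only the prompt parameters. The entropy-based uncertainty measure and classification-style output here are assumptions, not the paper's stated design.

```python
# Pick the most uncertain samples for labeling (illustrative criterion).
import torch

@torch.no_grad()
def select_for_labeling(model, unlabeled_loader, budget=20):
    scores, items = [], []
    for x in unlabeled_loader:
        probs = model(x).softmax(dim=1)                       # (B, C)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
        scores.append(entropy)
        items.append(x)
    scores = torch.cat(scores)
    x_all = torch.cat(items)
    top = scores.topk(budget).indices
    return x_all[top]   # send these to the annotator, then prompt-tune
```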
- Parameter-Efficient Fine-Tuning without Introducing New Latency [7.631596468553607]
We introduce a novel adapter technique that applies the adapter directly to the pre-trained parameters instead of to the hidden representations.
Our proposed method attains a new state of the art in both performance and storage efficiency, storing only 0.03% of the parameters of full fine-tuning.
arXiv Detail & Related papers (2023-05-26T08:44:42Z)
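A minimal sketch of the parameter-space adapter idea in the entry above: the learned update acts on the pre-trained weights themselves rather than on hidden representations, so it can be folded into the weights and inference incurs no extra latency. The low-rank form used here is an assumption for illustration.

```python
# Adapter applied to the weight matrix, mergeable after training.
import torch
import torch.nn as nn

class ParamAdapterLinear(nn.Module):
    def __init__(self, linear: nn.Linear, rank=4):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)       # freeze pre-trained weights
        out_f, in_f = linear.weight.shape
        self.a = nn.Parameter(torch.zeros(out_f, rank))
        self.b = nn.Parameter(torch.randn(rank, in_f) * 0.01)

    def forward(self, x):
        # The delta modifies the weight matrix, not the hidden states.
        w = self.linear.weight + self.a @ self.b
        return nn.functional.linear(x, w, self.linear.bias)

    @torch.no_grad()
    def merge(self):
        # Fold the delta into the weights: zero inference-time overhead.
        self.linear.weight += self.a @ self.b
```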
- Prompt Consistency for Zero-Shot Task Generalization [118.81196556175797]
In this paper, we explore methods to utilize unlabeled data to improve zero-shot performance.
Specifically, we take advantage of the fact that multiple prompts can be used to specify a single task, and propose to regularize prompt consistency.
Our approach outperforms the state-of-the-art zero-shot learner, T0, on 9 out of 11 datasets across 4 NLP tasks by up to 10.6 absolute points in terms of accuracy.
arXiv Detail & Related papers (2022-04-29T19:18:37Z)
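The regularizer in the entry above can be sketched as follows: on unlabeled inputs, predictions produced under two paraphrased prompts for the same task are pushed to agree. The symmetric-KL formulation and the `logits_fn` interface are assumptions.

```python
# Prompt-consistency regularization on unlabeled data (illustrative).
import torch
import torch.nn.functional as F

def prompt_consistency_loss(logits_fn, prompt_a, prompt_b, batch):
    """logits_fn(prompt, batch) -> (B, C) answer logits (hypothetical)."""
    log_p = F.log_softmax(logits_fn(prompt_a, batch), dim=-1)
    log_q = F.log_softmax(logits_fn(prompt_b, batch), dim=-1)
    # Symmetric KL between the two predictive distributions.
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)
```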
- Overfitting in Bayesian Optimization: an empirical study and early-stopping solution [41.782410830989136]
We propose the first problem-adaptive and interpretable criterion for early stopping Bayesian optimization (BO).
We show that our approach can substantially reduce compute time with little to no loss of test accuracy.
arXiv Detail & Related papers (2021-04-16T15:26:23Z)
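The entry above proposes a problem-adaptive early-stopping criterion for BO; its exact rule is not given in the summary. As a generic stand-in, the sketch below stops once no candidate's expected improvement exceeds the estimated observation noise.

```python
# Generic BO early-stopping check (not the paper's exact criterion).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    # EI for minimization, given surrogate means and std deviations.
    z = (best - mu) / np.maximum(sigma, 1e-12)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def should_stop(mu, sigma, best, noise_std):
    """Stop when no candidate's EI exceeds the noise level."""
    return np.max(expected_improvement(mu, sigma, best)) < noise_std
```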
- How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers [86.36020260204302]
We propose a new benchmarking protocol to evaluate both end-to-end efficiency and data-addition training efficiency.
A human study is conducted to show that our evaluation protocol matches human tuning behavior better than random search does.
We then apply the proposed benchmarking framework to 7 optimizers and various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining.
arXiv Detail & Related papers (2020-10-19T21:46:39Z)
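A hedged sketch of the kind of measurement such a protocol implies: the expected best performance after k random hyperparameter trials, which rewards optimizers that are both strong and easy to tune. The bootstrap estimator is an assumption, not the paper's exact procedure.

```python
# Expected best-of-k tuning curve for comparing optimizers.
import numpy as np

def expected_best_curve(trial_scores, max_k=None, n_boot=1000, seed=0):
    """trial_scores: final accuracies from independent random-search
    trials of one optimizer. Returns E[best of k] for k = 1..max_k."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(trial_scores)
    max_k = max_k or len(scores)
    curve = []
    for k in range(1, max_k + 1):
        samples = rng.choice(scores, size=(n_boot, k), replace=True)
        curve.append(samples.max(axis=1).mean())
    return np.array(curve)   # compare optimizers by this tuning curve
```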
- Information-Theoretic Probing with Minimum Description Length [74.29846942213445]
We propose an alternative to the standard probes: information-theoretic probing with minimum description length (MDL).
With MDL probing, training a probe to predict labels is recast as teaching it to effectively transmit the data.
We show that these methods agree in results and are more informative and stable than the standard probes.
arXiv Detail & Related papers (2020-03-27T09:35:38Z)
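Prequential (online) MDL probing, as described in the entry above, admits a short sketch: the probe is trained on growing prefixes of the data, and the description length is the summed code length, in bits, of each next block under the probe fitted so far. The block schedule and the two helper callables are hypothetical.

```python
# Prequential MDL: description length of labels given representations.
import numpy as np

def prequential_mdl(train_probe, eval_nll_bits, X, y, fractions):
    """train_probe(X, y) -> probe; eval_nll_bits(probe, X, y) -> total
    code length of y given X in bits (both hypothetical helpers)."""
    n = len(X)
    cuts = sorted({max(1, int(f * n)) for f in fractions} | {n})
    # Cost of sending the first block with a uniform code, then each
    # next block under a probe fit to everything seen so far.
    num_classes = len(set(y))
    total_bits = cuts[0] * np.log2(num_classes)
    for start, end in zip(cuts[:-1], cuts[1:]):
        probe = train_probe(X[:start], y[:start])
        total_bits += eval_nll_bits(probe, X[start:end], y[start:end])
    return total_bits   # shorter codes = more extractable structure
```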