HUB: Guiding Learned Optimizers with Continuous Prompt Tuning
- URL: http://arxiv.org/abs/2305.16823v2
- Date: Wed, 31 May 2023 16:33:35 GMT
- Title: HUB: Guiding Learned Optimizers with Continuous Prompt Tuning
- Authors: Gaole Dai, Wei Wu, Ziyu Wang, Jie Fu, Shanghang Zhang, Tiejun Huang
- Abstract summary: Learned optimizers are a crucial component of meta-learning.
Recent advancements in scalable learned optimizers have demonstrated their superior performance over hand-designed optimizers in various tasks.
We propose a hybrid-update-based (HUB) optimization strategy to tackle the issue of generalization in scalable learned optimizers.
- Score: 45.662334160254176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learned optimizers are a crucial component of meta-learning. Recent
advancements in scalable learned optimizers have demonstrated their superior
performance over hand-designed optimizers in various tasks. However, certain
characteristics of these models, such as an unstable learning curve, limited
ability to handle unseen tasks and network architectures, difficult-to-control
behaviours, and poor performance in fine-tuning tasks impede their widespread
adoption. To tackle the issue of generalization in scalable learned optimizers,
we propose a hybrid-update-based (HUB) optimization strategy inspired by recent
advancements in hard prompt tuning and result selection techniques used in
large language and vision models. This approach can be easily applied to any
task that involves a hand-designed or learned optimizer. By incorporating
hand-designed optimizers as the second component in our hybrid approach, we are
able to retain the benefits of learned optimizers while stabilizing the
training process and, more importantly, improving testing performance. We
validate our design through a total of 17 tasks, consisting of thirteen
training-from-scratch and four fine-tuning settings. These tasks vary in model
sizes, architectures, or dataset sizes, and the competing optimizers are
hyperparameter-tuned. We outperform all competitors in 94% of the tasks with
better testing performance. Furthermore, we conduct a theoretical analysis to
examine the potential impact of our hybrid strategy on the behaviours and
inherited traits of learned optimizers.
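The abstract describes the hybrid update only at a high level. As a minimal sketch of the general idea, the Python snippet below pairs a (stubbed) learned optimizer with a hand-designed one and keeps whichever candidate update achieves the lower loss, a simple form of result selection. The functions `learned_update`, `hand_designed_update`, and `hub_step`, and the toy quadratic objective, are illustrative assumptions, not the paper's actual rule.

```python
import numpy as np

def loss(params):
    """Toy quadratic objective standing in for a real training loss."""
    return float(np.sum(params ** 2))

def grad(params):
    """Analytic gradient of the toy objective."""
    return 2.0 * params

def learned_update(params, g, rng):
    """Stub for a learned optimizer's proposed step: aggressive and noisy,
    mimicking the sometimes unstable behaviour of meta-trained updates."""
    return params - 0.5 * g + 0.01 * rng.standard_normal(params.shape)

def hand_designed_update(params, g, lr=0.1):
    """Plain SGD as the stabilizing hand-designed component."""
    return params - lr * g

def hub_step(params, rng):
    """One hybrid step: propose both candidate updates, keep the better one."""
    g = grad(params)
    candidates = [learned_update(params, g, rng),
                  hand_designed_update(params, g)]
    return min(candidates, key=loss)

rng = np.random.default_rng(0)
params = np.array([3.0, -2.0])
for _ in range(20):
    params = hub_step(params, rng)
print("final loss:", loss(params))
```

Selecting the lower-loss candidate at each step is only one plausible selection criterion; a validation-based or blended rule would fit the same skeleton.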
Related papers
- CoRe Optimizer: An All-in-One Solution for Machine Learning [0.0]
The continuously resilient convergence (CoRe) optimizer has shown superior performance compared to other state-of-the-art first-order gradient-based optimization algorithms.
CoRe yields best or competitive performance in every investigated application.
arXiv Detail & Related papers (2023-07-28T16:48:42Z)
- Learning to Optimize for Reinforcement Learning [58.01132862590378]
Reinforcement learning (RL) is essentially different from supervised learning, and in practice these learned optimizers do not work well even in simple RL tasks.
The agent-gradient distribution is not independent and identically distributed, leading to inefficient meta-training.
We show that, although only trained on toy tasks, our learned optimizer can generalize to unseen complex tasks in Brax.
arXiv Detail & Related papers (2023-02-03T00:11:02Z)
- VeLO: Training Versatile Learned Optimizers by Scaling Up [67.90237498659397]
We leverage the same scaling approach behind the success of deep learning to learn versatile optimizers.
We train an optimizer for deep learning which is itself a small neural network that ingests gradients and outputs parameter updates (see the sketch after this list).
We open source our learned optimizer, meta-training code, the associated train and test data, and an extensive benchmark suite with baselines at velo-code.io.
arXiv Detail & Related papers (2022-11-17T18:39:07Z)
- A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases [44.01339030872185]
Blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set.
We investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox learned optimizers.
We apply our learned optimizer to a variety of neural network training tasks, where it outperforms the current state-of-the-art learned optimizer.
arXiv Detail & Related papers (2022-09-22T17:47:21Z)
- Training Learned Optimizers with Randomly Initialized Learned Optimizers [49.67678615506608]
We show that a population of randomly initialized learned optimizers can be used to train themselves from scratch in an online fashion.
A form of population based training is used to orchestrate this self-training.
We believe feedback loops of this type will be important and powerful in the future of machine learning.
arXiv Detail & Related papers (2021-01-14T19:07:17Z)
- Reverse engineering learned optimizers reveals known and novel mechanisms [50.50540910474342]
Learned optimizers are algorithms that can themselves be trained to solve optimization problems.
Our results help elucidate the previously murky understanding of how learned optimizers work, and establish tools for interpreting future learned optimizers.
arXiv Detail & Related papers (2020-11-04T07:12:43Z)
- Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves [53.37905268850274]
We introduce a new, neural-network-parameterized, hierarchical optimizer with access to additional features such as validation loss to enable automatic regularization.
Most learned optimizers have been trained on only a single task, or a small number of tasks.
We train ours on thousands of tasks, making use of orders of magnitude more compute, resulting in learned optimizers that generalize better to unseen tasks.
arXiv Detail & Related papers (2020-09-23T16:35:09Z)
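As a companion to the VeLO entry above, here is a minimal sketch of the general pattern of a neural-network optimizer: a small MLP that ingests per-parameter features (here, gradient and momentum) and emits parameter updates. The two-layer architecture, the feature set, and the `mlp_update` helper are illustrative assumptions; VeLO's real architecture and features differ, and meta-training is omitted entirely, so the weights below are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer MLP: 2 input features per parameter -> 1 update.
W1 = rng.normal(scale=0.1, size=(2, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def mlp_update(g, momentum):
    """Map per-parameter features to proposed parameter updates."""
    feats = np.stack([g, momentum], axis=-1)   # (n_params, 2)
    hidden = np.tanh(feats @ W1)               # (n_params, 16)
    return (hidden @ W2).squeeze(-1)           # (n_params,)

# A few steps on a toy quadratic; in practice W1/W2 would be meta-trained.
params = np.array([3.0, -2.0])
momentum = np.zeros_like(params)
for _ in range(5):
    g = 2.0 * params                 # gradient of sum(params**2)
    momentum = 0.9 * momentum + g
    params = params + mlp_update(g, momentum)
print("params:", params)
```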