Tree Prompting: Efficient Task Adaptation without Fine-Tuning
- URL: http://arxiv.org/abs/2310.14034v1
- Date: Sat, 21 Oct 2023 15:18:22 GMT
- Title: Tree Prompting: Efficient Task Adaptation without Fine-Tuning
- Authors: John X. Morris, Chandan Singh, Alexander M. Rush, Jianfeng Gao,
Yuntian Deng
- Abstract summary: Tree Prompting builds a decision tree of prompts, linking multiple LM calls together to solve a task.
Experiments on classification datasets show that Tree Prompting improves accuracy over competing methods and is competitive with fine-tuning.
- Score: 112.71020326388029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prompting language models (LMs) is the main interface for applying them to
new tasks. However, for smaller LMs, prompting provides low accuracy compared
to gradient-based finetuning. Tree Prompting is an approach to prompting which
builds a decision tree of prompts, linking multiple LM calls together to solve
a task. At inference time, each call to the LM is determined by efficiently
routing the outcome of the previous call using the tree. Experiments on
classification datasets show that Tree Prompting improves accuracy over
competing methods and is competitive with fine-tuning. We also show that
variants of Tree Prompting allow inspection of a model's decision-making
process.
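As a rough illustration of the inference procedure, the sketch below routes an input through a tree whose internal nodes each hold a prompt. The `Node` class, the yes/no prompt phrasing, and the `lm` callable are illustrative assumptions, not the paper's implementation (the paper learns the tree from data; only the routing step is shown here).

```python
from dataclasses import dataclass
from typing import Callable, Union

# Hypothetical LM interface: answers a yes/no prompt with True/False.
LM = Callable[[str], bool]

@dataclass
class Node:
    prompt: str                 # prompt evaluated when routing reaches this node
    yes: Union["Node", str]     # next node (or class label) if the LM answers yes
    no: Union["Node", str]      # next node (or class label) if the LM answers no

def tree_prompt_classify(lm: LM, node: Union[Node, str], text: str) -> str:
    """Walk the prompt tree: each LM call's outcome selects the next prompt,
    until a leaf holding a class label is reached."""
    while isinstance(node, Node):
        answer = lm(node.prompt.format(input=text))
        node = node.yes if answer else node.no
    return node

# Illustrative tree for sentiment classification.
tree = Node(
    prompt="Does the following review mention anything positive? {input}",
    yes=Node(
        prompt="Does the review also complain about something? {input}",
        yes="mixed",
        no="positive",
    ),
    no="negative",
)
```

Because each decision is a single named prompt, tracing the root-to-leaf path gives the kind of inspectable decision process the abstract mentions.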
Related papers
- Parse Trees Guided LLM Prompt Compression [20.61121589698341]
We propose a novel selective compression method called PartPrompt.
It first obtains a parse tree for each sentence based on linguistic rules, and calculates local information entropy for each node in the parse tree.
The experiments show that PartPrompt achieves state-of-the-art performance across various datasets.
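As a rough sketch of the node-entropy idea (not PartPrompt's actual algorithm): assume a hypothetical `TokenProb` scorer giving each token's probability under a small LM, attach a surprisal-based entropy to every parse-tree span, and greedily keep the most informative leaf spans under a token budget.

```python
import math
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical scorer: p(sentence_tokens, i) -> probability of token i given its left context.
TokenProb = Callable[[List[str], int], float]

@dataclass
class ParseNode:
    start: int                                  # token span [start, end) in the sentence
    end: int
    children: List["ParseNode"] = field(default_factory=list)

def local_entropy(node: ParseNode, sent: List[str], p: TokenProb) -> float:
    """Local information content of a node: summed surprisal of its tokens."""
    return sum(-math.log2(max(p(sent, i), 1e-9)) for i in range(node.start, node.end))

def compress(root: ParseNode, sent: List[str], p: TokenProb, budget: int) -> List[str]:
    """Keep the most informative leaf spans of the parse tree within a token budget."""
    leaves, stack = [], [root]
    while stack:
        n = stack.pop()
        if n.children:
            stack.extend(n.children)
        else:
            leaves.append(n)
    leaves.sort(key=lambda n: local_entropy(n, sent, p), reverse=True)
    kept, used = [], 0
    for n in leaves:                            # greedy selection by entropy
        if used + (n.end - n.start) <= budget:
            kept.append(n)
            used += n.end - n.start
    kept.sort(key=lambda n: n.start)            # restore original word order
    return [t for n in kept for t in sent[n.start:n.end]]
```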
arXiv Detail & Related papers (2024-09-23T06:21:40Z)
- LiteSearch: Efficacious Tree Search for LLM [70.29796112457662]
This study introduces a novel guided tree search algorithm with dynamic node selection and a node-level exploration budget.
Experiments conducted on the GSM8K and TabMWP datasets demonstrate that our approach enjoys significantly lower computational costs compared to baseline methods.
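A minimal best-first sketch of the two ingredients named above, dynamic node selection and a node-level exploration budget; the `value` and `expand` callables are hypothetical stand-ins for the paper's value model and LLM generator, and the budget formula is an assumption.

```python
import heapq
from typing import Callable, List, Sequence, Tuple

Value = Callable[[str], float]                # hypothetical value model, scores in [0, 1]
Expand = Callable[[str, int], Sequence[str]]  # hypothetical LLM: n continuations of a state

def lite_search(root: str, value: Value, expand: Expand,
                max_nodes: int = 50, max_children: int = 5) -> str:
    """Best-first search: always expand the highest-value frontier node, and
    give each node an exploration budget that grows with its value estimate."""
    frontier: List[Tuple[float, str]] = [(-value(root), root)]
    best, best_v, visited = root, value(root), 0
    while frontier and visited < max_nodes:
        neg_v, state = heapq.heappop(frontier)          # dynamic node selection
        if -neg_v > best_v:
            best, best_v = state, -neg_v
        budget = max(1, round(-neg_v * max_children))   # node-level exploration budget
        for child in expand(state, budget):
            heapq.heappush(frontier, (-value(child), child))
        visited += 1
    return best
```

Capping the expansions per node, rather than expanding every child everywhere, is what keeps the total LM-call count low.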
arXiv Detail & Related papers (2024-06-29T05:14:04Z)
- Prompt Exploration with Prompt Regression [38.847668543140315]
We propose a framework, Prompt Exploration with Prompt Regression (PEPR), to predict the effect of prompt combinations given results for individual prompt elements.
We evaluate our approach with open-source LLMs of different sizes on several different tasks.
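A hedged sketch of the regression idea under a strong simplifying assumption: a combination's score is approximately additive in per-element effects. The function names and toy numbers are illustrative, not from the paper.

```python
import numpy as np

def fit_prompt_regression(masks: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """masks: (N, K) 0/1 matrix of which of K prompt elements each observed
    prompt used. scores: (N,) observed task scores. Returns effect estimates."""
    X = np.hstack([np.ones((masks.shape[0], 1)), masks])  # intercept + element indicators
    w, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return w

def predict_combination(w: np.ndarray, mask: np.ndarray) -> float:
    """Predicted score for a new prompt combination, without running the LM."""
    return float(w[0] + mask @ w[1:])

# Usage: observe a handful of prompts, then rank unseen combinations offline.
masks = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
scores = np.array([0.61, 0.55, 0.58, 0.66])
w = fit_prompt_regression(masks, scores)
print(predict_combination(w, np.array([1, 0, 1])))
```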
arXiv Detail & Related papers (2024-05-17T20:30:49Z)
- Autonomous Tree-search Ability of Large Language Models [58.68735916408101]
Large Language Models have demonstrated remarkable reasoning capabilities when combined with advanced prompting techniques.
Recent works propose to utilize external programs to define search logic, such that LLMs can perform passive tree search to solve more challenging reasoning tasks.
We propose a new concept, the autonomous tree-search ability of LLMs, in which the model automatically generates a response containing the search trajectory that leads to the correct answer.
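To make the contrast with program-driven search concrete, here is an illustrative single-call prompt in the spirit of that idea; the trajectory markers and template are assumptions, not the paper's exact format.

```python
# Illustrative template: the model itself emits an explicit search trajectory
# (explore / backtrack / answer) in one generation, with no external driver.
AUTONOMOUS_SEARCH_PROMPT = """\
Solve the problem by searching over partial solutions.
Write your reasoning as an explicit trajectory:
- "Try:" to open a branch with a candidate step,
- "Backtrack:" when a branch leads to a contradiction,
- "Answer:" when a branch reaches a valid solution.

Problem: {problem}
Trajectory:
"""

def solve(llm, problem: str) -> str:
    trajectory = llm(AUTONOMOUS_SEARCH_PROMPT.format(problem=problem))
    # The final answer is whatever follows the last "Answer:" marker.
    return trajectory.rsplit("Answer:", 1)[-1].strip()
```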
arXiv Detail & Related papers (2023-10-14T14:14:38Z)
- Tree-Planner: Efficient Close-loop Task Planning with Large Language Models [63.06270302774049]
Tree-Planner reframes task planning with Large Language Models into three distinct phases.
Tree-Planner achieves state-of-the-art performance while maintaining high efficiency.
arXiv Detail & Related papers (2023-10-12T17:59:50Z)
- TreeDQN: Learning to minimize Branch-and-Bound tree [78.52895577861327]
Branch-and-Bound is a convenient approach to solving optimization tasks in the form of Mixed Integer Linear Programs.
The efficiency of the solver depends on the branching heuristic used to select a variable for splitting.
We propose a reinforcement learning method that can efficiently learn the branching policy.
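A minimal sketch of learned branching, assuming a hypothetical Q-network `q(state, var)` trained (e.g., with a reward tied to the size of the resulting subtree) to score candidate variables; only the epsilon-greedy selection step is shown.

```python
import random
from typing import Callable, Sequence

QValue = Callable[[dict, int], float]   # hypothetical Q-network: Q(node_state, variable)

def select_branching_variable(q: QValue, state: dict,
                              candidates: Sequence[int], eps: float = 0.1) -> int:
    """Epsilon-greedy branching: usually take the variable the learned Q-function
    scores highest (i.e., expected to leave the smallest remaining B&B tree)."""
    if random.random() < eps:           # occasional exploration during training
        return random.choice(list(candidates))
    return max(candidates, key=lambda v: q(state, v))
```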
arXiv Detail & Related papers (2023-06-09T14:01:26Z)
- TreePrompt: Learning to Compose Tree Prompts for Explainable Visual Grounding [17.9785504685384]
We propose a new prompt construction paradigm with explicit explainable ability, named TreePrompt.
Specifically, we first deconstruct a complex sentence into a tree that is consistent with human reasoning.
Thanks to this step-by-step prompt construction process, each intermediate prompt (i.e., tree node) permits us to understand the reasoning process.
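A toy sketch of bottom-up prompt composition over a phrase tree; the `PhraseNode` structure and prompt wording are illustrative assumptions, but they show how every tree node yields an inspectable intermediate prompt.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhraseNode:
    text: str                                   # phrase covered by this node
    children: List["PhraseNode"] = field(default_factory=list)

def compose_prompt(node: PhraseNode, trace: List[str]) -> str:
    """Build the prompt bottom-up along the tree; every intermediate prompt
    (one per tree node) is recorded in `trace` for inspection."""
    if not node.children:
        prompt = f'locate "{node.text}"'
    else:
        parts = [compose_prompt(c, trace) for c in node.children]
        prompt = f'use ({" and ".join(parts)}) to ground "{node.text}"'
    trace.append(prompt)
    return prompt

# "the dog left of the red car" decomposed into sub-phrases (illustrative).
tree = PhraseNode("the dog left of the red car", [
    PhraseNode("the dog"),
    PhraseNode("the red car"),
])
trace: List[str] = []
final_prompt = compose_prompt(tree, trace)
for step in trace:                              # inspect the reasoning process
    print(step)
```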
arXiv Detail & Related papers (2023-05-19T07:52:22Z)
- METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation [59.33241627273023]
We propose METGEN, a Module-based Entailment tree GENeration framework that has multiple modules and a reasoning controller.
Given a question, METGEN can iteratively generate the entailment tree by conducting single-step entailment with separate modules and selecting the reasoning flow with the controller.
Experiment results show that METGEN can outperform previous state-of-the-art models with only 9% of the parameters.
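A schematic of the controller loop as described above; `Module` and `Controller` are hypothetical callables standing in for the paper's trained components, and the stopping test is simplified.

```python
from typing import Callable, Dict, List, Tuple

# Each module performs one entailment step over chosen premises; the controller
# picks which module to run on which premises at each step.
Module = Callable[[List[str]], str]                       # premises -> conclusion
Controller = Callable[[List[str], str], Tuple[str, List[str]]]

def generate_entailment_tree(modules: Dict[str, Module], controller: Controller,
                             premises: List[str], hypothesis: str,
                             max_steps: int = 10) -> List[Tuple[List[str], str]]:
    """Iteratively apply single-step entailment modules, with the controller
    selecting the reasoning flow, until the hypothesis is derived."""
    steps, pool = [], list(premises)
    for _ in range(max_steps):
        name, chosen = controller(pool, hypothesis)   # controller picks the flow
        conclusion = modules[name](chosen)            # one single-step entailment
        steps.append((chosen, conclusion))
        pool.append(conclusion)                       # conclusion becomes a new premise
        if conclusion == hypothesis:                  # simplified stopping test
            break
    return steps
```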
arXiv Detail & Related papers (2022-05-05T12:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.