TreePrompt: Learning to Compose Tree Prompts for Explainable Visual Grounding
- URL: http://arxiv.org/abs/2305.11497v1
- Date: Fri, 19 May 2023 07:52:22 GMT
- Title: TreePrompt: Learning to Compose Tree Prompts for Explainable Visual Grounding
- Authors: Chenchi Zhang, Jun Xiao, Lei Chen, Jian Shao, Long Chen
- Abstract summary: We propose a new prompt construction paradigm with explicit explainability, named TreePrompt.
Specifically, we first deconstruct a complex sentence into a syntax tree that is consistent with human reasoning.
Thanks to this step-by-step prompt construction process, each intermediate prompt (i.e., tree node) permits us to understand the reasoning process.
- Score: 17.9785504685384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prompt tuning has achieved great success in transferring the knowledge from
large pretrained vision-language models into downstream tasks, and has
dominated performance on visual grounding (VG). However, almost all
existing prompt tuning paradigms suffer from poor interpretability. In this
paper, we argue that this poor interpretability stems from their holistic
prompt generation and inference process. By "holistic", we mean that they
usually directly learn a set of vectors as the prompt (i.e., prompt
generation), and use the learned global prompt to augment the textual input for
the VG model (i.e., prompt inference). To address this, we propose a new prompt
construction paradigm with explicit explainability, named TreePrompt.
Specifically, we first deconstruct a complex sentence into a syntax tree that is
consistent with human reasoning. Then, following the syntax tree, we compose a
structured prompt in a bottom-up manner. Thanks to this step-by-step prompt
construction process, each intermediate prompt (i.e., tree node) permits us to
understand the reasoning process. Extensive ablations on various backbones and
benchmarks consistently demonstrate the effectiveness and interpretability of
our TreePrompt.
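To make the bottom-up composition concrete, here is a minimal Python sketch. It is not the authors' implementation: the binarized parse tree, the hash-based `embed` stand-in, and the single merge matrix `W` are all illustrative assumptions. The point is that every internal node produces an inspectable intermediate prompt.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding size

# Hypothetical merge parameters; in TreePrompt these would be learned modules.
W = rng.normal(size=(2 * DIM, DIM)) / np.sqrt(2 * DIM)

def embed(word: str) -> np.ndarray:
    """Stand-in word embedding (seeded per word, illustration only)."""
    g = np.random.default_rng(abs(hash(word)) % (2**32))
    return g.normal(size=DIM)

def compose(node) -> np.ndarray:
    """Bottom-up composition: every tree node yields an inspectable
    intermediate prompt vector."""
    if isinstance(node, str):                     # leaf = a single word
        return embed(node)
    left, right = compose(node[0]), compose(node[1])
    merged = np.tanh(np.concatenate([left, right]) @ W)
    print(f"intermediate prompt for {node}: {merged[:3].round(2)} ...")
    return merged

# "the man holding a red umbrella" as a toy binarized parse tree
tree = (("the", "man"), ("holding", (("a", "red"), "umbrella")))
prompt_vector = compose(tree)                     # final structured prompt
```

Because each call to `compose` prints its node's vector, the reasoning trail from leaves to the final prompt is visible step by step, which is the interpretability claim in miniature.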
Related papers
- Parse Trees Guided LLM Prompt Compression [20.61121589698341]
We propose a novel selective compression method called PartPrompt.
It first obtains a parse tree for each sentence based on linguistic rules, then calculates the local information entropy of each node in the tree.
The experiments show that PartPrompt receives the state-of-the-art performance across various datasets.
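A heavily reduced, token-level sketch of the selective-compression idea (PartPrompt itself scores parse-tree nodes with LM-based local entropy; the frequency-based `info` score and flat token list below are stand-ins):

```python
import math
from collections import Counter

sentence = "the quick brown fox jumps over the lazy dog".split()

# Stand-in for LM-based local entropy: rarer words carry more self-information.
counts = Counter(sentence)
total = sum(counts.values())

def info(word: str) -> float:
    return -math.log(counts[word] / total)  # self-information in nats

budget = 5  # keep at most 5 tokens
ranked = sorted(range(len(sentence)), key=lambda i: info(sentence[i]), reverse=True)
keep = sorted(ranked[:budget])              # restore original word order
print(" ".join(sentence[i] for i in keep))
```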
arXiv Detail & Related papers (2024-09-23T06:21:40Z)
- Tree Prompting: Efficient Task Adaptation without Fine-Tuning [112.71020326388029]
Tree Prompting builds a decision tree of prompts, linking multiple LM calls together to solve a task.
Experiments on classification datasets show that Tree Prompting improves accuracy over competing methods and is competitive with fine-tuning.
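A minimal sketch of the routing mechanism, with a hypothetical keyword stub (`toy_lm`) standing in for the language-model call:

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical stub: a real system would query a language model here.
def toy_lm(prompt: str) -> str:
    return "yes" if "love" in prompt or "great" in prompt else "no"

@dataclass
class Node:
    prompt: str                  # question posed to the LM at this node
    yes: Union["Node", str]      # subtree, or a leaf label (str)
    no: Union["Node", str]

def classify(node: Union[Node, str], text: str) -> str:
    if isinstance(node, str):    # leaf = final label
        return node
    answer = toy_lm(f"{node.prompt}\nText: {text}")
    return classify(node.yes if answer == "yes" else node.no, text)

tree = Node("Does the text express positive sentiment?",
            yes="positive",
            no=Node("Does the text mention a complaint?",
                    yes="negative", no="neutral"))
print(classify(tree, "I love this phone, the battery is great"))  # -> positive
```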
arXiv Detail & Related papers (2023-10-21T15:18:22Z)
- On the Role of Attention in Prompt-tuning [90.97555030446563]
We study prompt-tuning for one-layer attention architectures, focusing on contextual mixture models.
We show that softmax-prompt-attention is provably more expressive than softmax-self-attention and linear-prompt-attention.
We also provide experiments that verify our theoretical insights on real datasets and demonstrate how prompt-tuning enables the model to attend to context-relevant information.
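As a toy numerical illustration of prompt-attention (random vectors stand in for learned prompt and token representations), prepending prompt vectors to the keys and values lets the query attend directly to the prompt:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_ctx, n_prompt = 4, 3, 2

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

context = rng.normal(size=(n_ctx, d))     # token representations
prompt = rng.normal(size=(n_prompt, d))   # "learned" prompt vectors (random here)
query = rng.normal(size=d)

# Prompt-attention: the prompt tokens are prepended to the keys and values,
# so the query can mix in prompt information alongside the context.
kv = np.vstack([prompt, context])
weights = softmax(kv @ query / np.sqrt(d))
output = weights @ kv
print("attention mass on prompt tokens:", weights[:n_prompt].round(3))
```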
arXiv Detail & Related papers (2023-06-06T06:23:38Z)
- Demystifying Prompts in Language Models via Perplexity Estimation [109.59105230163041]
Performance of a prompt is coupled with the extent to which the model is familiar with the language it contains.
We show that the lower a prompt's perplexity, the better it performs the task.
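A hedged sketch of perplexity-based prompt selection; `token_logprob` is a toy stand-in for querying an actual LM for token log-likelihoods:

```python
import math

# Toy stand-in for an LM's token log-likelihoods: shorter tokens score higher.
def token_logprob(token: str) -> float:
    return -0.5 * len(token)

def perplexity(prompt: str) -> float:
    tokens = prompt.split()
    avg_nll = -sum(token_logprob(t) for t in tokens) / len(tokens)
    return math.exp(avg_nll)

candidates = [
    "Translate English to French:",
    "Kindly perform translation of the subsequent English passage into French:",
]
print({p: round(perplexity(p), 2) for p in candidates})
print("selected prompt:", min(candidates, key=perplexity))
```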
arXiv Detail & Related papers (2022-12-08T02:21:47Z)
- Explaining Patterns in Data with Language Models via Interpretable Autoprompting [143.4162028260874]
We introduce interpretable autoprompting (iPrompt), an algorithm that generates a natural-language string explaining the data.
iPrompt can yield meaningful insights by accurately recovering ground-truth dataset descriptions.
Experiments with an fMRI dataset show the potential for iPrompt to aid in scientific discovery.
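The core loop is propose-then-score. The sketch below caricatures it on symbolic data; the candidate strings and the executable `rules` table are illustrative assumptions, whereas real iPrompt proposes candidates with an LM and scores each by the LM likelihood of the data given that prompt:

```python
# Toy data whose hidden rule is "output = input + 2".
data = [(1, 3), (4, 6), (10, 12)]

# Hypothetical candidate explanations and an executable interpretation of each.
rules = {
    "add one": lambda x: x + 1,
    "add two": lambda x: x + 2,
    "double it": lambda x: 2 * x,
}

def score(description: str) -> float:
    """Fraction of the data reproduced by a candidate explanation."""
    return sum(rules[description](x) == y for x, y in data) / len(data)

best = max(rules, key=score)
print("best natural-language explanation:", best)  # -> "add two"
```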
arXiv Detail & Related papers (2022-10-04T18:32:14Z)
- Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations [71.2950434944196]
We develop Maieutic Prompting, which infers a correct answer to a question even from the noisy and inconsistent generations of language models.
Maieutic Prompting achieves up to 20% better accuracy than state-of-the-art prompting methods.
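A drastically reduced caricature of the inference step: choose the truth assignment that satisfies the largest total confidence mass of the noisy judgments. The real method also models logical relations among recursively generated explanations and solves the resulting weighted MAX-SAT problem.

```python
from itertools import product

judgments = [                 # (proposition, claimed value, LM confidence)
    ("Q", True, 0.6),
    ("Q", False, 0.4),        # an inconsistent generation
    ("E", True, 0.9),         # E: an explanation whose truth supports Q
    ("Q", True, 0.3),
]
props = sorted({p for p, _, _ in judgments})

def total_weight(assign):
    """Summed confidence of all judgments this assignment satisfies."""
    return sum(w for p, v, w in judgments if assign[p] == v)

best = max((dict(zip(props, vals))
            for vals in product([True, False], repeat=len(props))),
           key=total_weight)
print(best)                   # -> {'E': True, 'Q': True}
```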
arXiv Detail & Related papers (2022-05-24T06:36:42Z)
- Instance-aware Prompt Learning for Language Understanding and Generation [49.22899822734549]
We propose an instance-aware prompt learning method that learns a different prompt for each instance.
Our method achieves the state-of-the-art on the SuperGLUE few-shot learning benchmark.
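A minimal sketch of the idea, assuming a hypothetical (here untrained, random) prompt-generator network that maps each input's features to that instance's own prompt vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_prompt = 4, 2

# Hypothetical prompt generator: a small linear map from input features
# to per-instance prompt vectors (random weights stand in for trained ones).
W = rng.normal(size=(d, n_prompt * d))

def instance_prompt(x: np.ndarray) -> np.ndarray:
    return np.tanh(x @ W).reshape(n_prompt, d)

x1, x2 = rng.normal(size=d), rng.normal(size=d)
p1, p2 = instance_prompt(x1), instance_prompt(x2)
print("prompts differ across instances:", not np.allclose(p1, p2))  # True
```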
arXiv Detail & Related papers (2022-01-18T17:03:25Z)
- Do Prompt-Based Models Really Understand the Meaning of their Prompts? [12.857580576554865]
We find that models learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading.
We find little evidence that suggests existing prompt-based models truly understand the meaning of their given prompts.
arXiv Detail & Related papers (2021-09-02T23:46:36Z)
- Explaining Answers with Entailment Trees [16.555369850015055]
We aim to explain answers by showing how evidence leads to the answer in a systematic way.
Our approach is to generate explanations in the form of entailment trees, namely a tree of entailment steps from facts that are known, through intermediate conclusions, to the final answer.
To train a model with this skill, we created ENTAILMENTBANK, the first dataset to contain multistep entailment trees.
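For illustration, a hand-written two-step entailment tree in the spirit of ENTAILMENTBANK (the content is invented, not taken from the dataset):

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One entailment step: the premises jointly entail the conclusion."""
    premises: list
    conclusion: str

s1 = Step(["metals conduct electricity", "copper is a metal"],
          "copper conducts electricity")
s2 = Step([s1.conclusion, "the wire is made of copper"],
          "the wire conducts electricity")

for i, step in enumerate([s1, s2], start=1):
    print(f"step {i}: {' & '.join(step.premises)} -> {step.conclusion}")
```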
arXiv Detail & Related papers (2021-04-17T23:13:56Z)
- Unsupervised Learning of Discourse Structures using a Tree Autoencoder [8.005512864082126]
We propose a new strategy to generate tree structures in a task-agnostic, unsupervised fashion by extending a latent tree induction framework with an auto-encoding objective.
The proposed approach can be applied to any tree objective, such as syntactic parsing, discourse parsing and others.
In this paper, we infer general tree structures of natural text across multiple domains, showing promising results on a diverse set of tasks.
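A skeletal sketch of the encode/decode mechanism with untrained random weights; the paper's model additionally induces the tree structure itself and trains both maps with the auto-encoding reconstruction objective:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4
W_enc = rng.normal(size=(2 * D, D))  # merge two children into one code
W_dec = rng.normal(size=(D, 2 * D))  # split one code back into two children

def encode(node):
    if isinstance(node, np.ndarray):             # leaf vector
        return node
    left, right = encode(node[0]), encode(node[1])
    return np.tanh(np.concatenate([left, right]) @ W_enc)

def decode(vec, depth):
    if depth == 0:                               # reconstructed leaf
        return vec
    out = np.tanh(vec @ W_dec)
    return decode(out[:D], depth - 1), decode(out[D:], depth - 1)

leaves = [rng.normal(size=D) for _ in range(4)]
tree = ((leaves[0], leaves[1]), (leaves[2], leaves[3]))
code = encode(tree)                  # whole tree compressed into one vector
recon = decode(code, depth=2)        # training would push recon toward leaves
print("code shape:", code.shape, "| one reconstructed leaf:", recon[0][0].round(2))
```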
arXiv Detail & Related papers (2020-12-17T08:40:34Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.