When Emotional Stimuli meet Prompt Designing: An Auto-Prompt Graphical Paradigm
- URL: http://arxiv.org/abs/2404.10500v1
- Date: Tue, 16 Apr 2024 12:19:08 GMT
- Title: When Emotional Stimuli meet Prompt Designing: An Auto-Prompt Graphical Paradigm
- Authors: Chenggian Ma, Xiangyu Zhao, Chunhui Zhang, Yanzhao Qin, Wentao Zhang,
- Abstract summary: This paper summarizes prompt words for large language models (LLMs), categorizing them into stimulating and framework types.
It then proposes an Auto-Prompt Graphical Paradigm (APGP) that combines both stimulating and framework prompts.
The framework involves automated prompt generation and consideration of emotion-stimulus factors.
- Score: 43.2625101868969
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of Large Language Models (LLMs), numerous prompts have been proposed, each with a rich set of features and its own merits. This paper summarizes the prompt words for large language models (LLMs), categorizing them into stimulating and framework types, and proposes an Auto-Prompt Graphical Paradigm (APGP) that combines both stimulating and framework prompts to enhance the problem-solving capabilities of LLMs across multiple domains; it then exemplifies the paradigm with a framework that adheres to it. The framework involves automated prompt generation and consideration of emotion-stimulus factors, guiding LLMs in problem abstraction, diversified solution generation, comprehensive optimization, and self-verification after providing answers, ensuring solution accuracy. Compared to traditional stimulating and framework prompts, this framework integrates the advantages of both by adopting automated approaches inspired by APE work, overcoming the limitations of manually designed prompts. Test results on the ruozhiba and BBH datasets demonstrate that this framework can effectively improve the efficiency and accuracy of LLMs in problem-solving, paving the way for new applications of LLMs.
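To make the pipeline described in the abstract concrete, here is a minimal, hypothetical Python sketch of an APGP-style workflow: problem abstraction, emotion-stimulus framing, diversified solution generation, answer selection, and self-verification. The `call_llm` callable, the stimulus phrases, and the majority-vote selection step are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an APGP-style pipeline: abstract the problem, add an
# emotion-stimulus cue, generate diverse candidate solutions, select an answer,
# then self-verify it. `call_llm` stands in for any chat-completion API; the
# prompt wording and majority-vote selection are illustrative assumptions only.
from collections import Counter
from typing import Callable, List

# Example emotion-stimulus phrases in the spirit of EmotionPrompt-style cues.
EMOTION_STIMULI = [
    "This is very important to my career.",
    "Take a deep breath and work on this problem step by step.",
]


def solve_apgp_style(problem: str,
                     call_llm: Callable[[str], str],
                     n_candidates: int = 3) -> str:
    # 1. Problem abstraction: restate the task in a structured form.
    abstraction = call_llm(
        f"Restate the core question and constraints of: {problem}")

    # 2. Diversified solution generation under emotion-stimulus framing.
    candidates: List[str] = []
    for i in range(n_candidates):
        stimulus = EMOTION_STIMULI[i % len(EMOTION_STIMULI)]
        candidates.append(call_llm(
            f"{stimulus}\nProblem abstraction:\n{abstraction}\n"
            "Solve the problem and end with 'ANSWER: <final answer>'."))

    # 3. Optimization: here, a simple majority vote over the final answers.
    finals = [c.rsplit("ANSWER:", 1)[-1].strip() for c in candidates]
    best, _ = Counter(finals).most_common(1)[0]

    # 4. Self-verification: ask the model to check the chosen answer.
    verdict = call_llm(
        f"Question: {problem}\nProposed answer: {best}\n"
        "Reply 'OK' if correct, otherwise give a corrected answer.")
    return best if verdict.strip().upper().startswith("OK") else verdict


if __name__ == "__main__":
    # Toy stand-in LLM so the sketch runs without an API key.
    fake_llm = lambda p: "ANSWER: 42" if "Solve" in p else "OK"
    print(solve_apgp_style("What is 6 * 7?", fake_llm))
```

The same skeleton extends naturally to benchmark evaluation (for example, iterating over ruozhiba or BBH items) by swapping the toy `fake_llm` for a real API client.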
Related papers
- Self-Instructed Derived Prompt Generation Meets In-Context Learning: Unlocking New Potential of Black-Box LLMs [30.333277284839053]
Large language models (LLMs) have shown success in generating high-quality responses.
Existing methods to enhance response quality often involve a prompt refinement model.
We introduce a self-instructed in-context learning framework that empowers LLMs to deliver more effective responses.
arXiv Detail & Related papers (2024-09-03T02:42:39Z) - GANPrompt: Enhancing Robustness in LLM-Based Recommendations with GAN-Enhanced Diversity Prompts [15.920623515602038]
This paper proposes GANPrompt, a multi-dimensional large language model prompt diversity framework based on Generative Adversarial Networks (GANs).
GANPrompt first trains a generator capable of producing diverse prompts by analysing multidimensional user behavioural data.
These diverse prompts are then used to train the LLM to improve its performance in the face of unseen prompts.
arXiv Detail & Related papers (2024-08-19T03:13:20Z) - Optimal Decision Making Through Scenario Simulations Using Large Language Models [0.0]
Large Language Models (LLMs) have transformed how complex problems are approached and solved.
This paper proposes an innovative approach to bridge this capability gap.
By enabling LLMs to request multiple potential options and their respective parameters from users, our system introduces a dynamic framework.
This function is designed to analyze the provided options, simulate potential outcomes, and determine the most advantageous solution.
arXiv Detail & Related papers (2024-07-09T01:23:09Z) - Meta Reasoning for Large Language Models [58.87183757029041]
We introduce Meta-Reasoning Prompting (MRP), a novel and efficient system prompting method for large language models (LLMs).
MRP guides LLMs to dynamically select and apply different reasoning methods based on the specific requirements of each task.
We evaluate the effectiveness of MRP through comprehensive benchmarks.
arXiv Detail & Related papers (2024-06-17T16:14:11Z) - Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning [79.38140606606126]
We propose an algorithmic framework that fine-tunes vision-language models (VLMs) with reinforcement learning (RL).
Our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning.
We demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks.
arXiv Detail & Related papers (2024-05-16T17:50:19Z) - Adapting LLMs for Efficient Context Processing through Soft Prompt Compression [1.1550486371582305]
This article presents an innovative framework that strategically tailors Large Language Models for streamlined context processing.
Our methodology, dubbed SoftPromptComp, amalgamates natural language prompts with dynamically generated soft prompts to forge a concise yet semantically robust depiction of protracted contexts.
We substantiate that our framework markedly diminishes computational overhead and enhances LLMs' efficacy across various benchmarks.
arXiv Detail & Related papers (2024-04-07T15:44:20Z) - Solution-oriented Agent-based Models Generation with Verifier-assisted Iterative In-context Learning [10.67134969207797]
Agent-based models (ABMs) stand as an essential paradigm for proposing and validating hypothetical solutions or policies.
Large language models (LLMs) encapsulating cross-domain knowledge and programming proficiency could potentially alleviate the difficulty of this process.
We present SAGE, a general solution-oriented ABM generation framework designed for automatic modeling and generating solutions for targeted problems.
arXiv Detail & Related papers (2024-02-04T07:59:06Z) - Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers [70.18534453485849]
EvoPrompt is a framework for discrete prompt optimization.
It borrows the idea of evolutionary algorithms (EAs) as they exhibit good performance and fast convergence.
It significantly outperforms human-engineered prompts and existing methods for automatic prompt generation.
arXiv Detail & Related papers (2023-09-15T16:50:09Z) - Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
A promising approach to rectify these flaws is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
arXiv Detail & Related papers (2023-08-06T18:38:52Z) - Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes an LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
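Several of the entries above (notably LLM-Augmenter and the self-correction survey) revolve around the same pattern APGP uses for its self-verification step: generate, check, and revise. Below is a minimal, hypothetical Python sketch of that generic retrieve-generate-verify loop; the function names, prompts, and retry policy are assumptions for illustration and do not reproduce any cited paper's implementation.

```python
# Generic retrieve-generate-verify loop in the spirit of the LLM-Augmenter and
# self-correction entries above. The retriever interface, check prompt, and
# retry policy are illustrative assumptions, not the cited papers' systems.
from typing import Callable, List


def answer_with_feedback(question: str,
                         call_llm: Callable[[str], str],
                         retrieve: Callable[[str], List[str]],
                         max_rounds: int = 3) -> str:
    feedback, answer = "", ""
    for _ in range(max_rounds):
        # Ground the prompt in externally retrieved evidence.
        evidence = "\n".join(retrieve(question))
        prompt = (f"Evidence:\n{evidence}\n\nQuestion: {question}\n"
                  + (f"Previous feedback: {feedback}\n" if feedback else "")
                  + "Answer using only the evidence.")
        answer = call_llm(prompt)

        # Automated feedback: check the answer against the evidence and either
        # accept it ('PASS') or feed a critique into the next round.
        feedback = call_llm(
            f"Evidence:\n{evidence}\nAnswer: {answer}\n"
            "Reply 'PASS' if the answer is supported, otherwise explain the error.")
        if feedback.strip().upper().startswith("PASS"):
            break
    return answer
```

In practice, `retrieve` could wrap any search or vector-store lookup and `call_llm` any chat API; the loop exits as soon as the checker accepts the answer or the retry budget is exhausted.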
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.