Structure Guided Prompt: Instructing Large Language Model in Multi-Step
Reasoning by Exploring Graph Structure of the Text
- URL: http://arxiv.org/abs/2402.13415v1
- Date: Tue, 20 Feb 2024 22:56:23 GMT
- Title: Structure Guided Prompt: Instructing Large Language Model in Multi-Step
Reasoning by Exploring Graph Structure of the Text
- Authors: Kewei Cheng, Nesreen K. Ahmed, Theodore Willke, Yizhou Sun
- Abstract summary: This paper introduces Structure Guided Prompt, a framework designed to improve the multi-step reasoning capabilities of Large Language Models (LLMs).
Our experiments show that this framework significantly enhances the reasoning capabilities of LLMs, enabling them to excel in a broader spectrum of natural language scenarios.
- Score: 44.81698187939784
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although Large Language Models (LLMs) excel at straightforward
reasoning tasks, they frequently struggle when confronted with more complex
multi-step reasoning, due to a range of factors. Firstly, natural
language often encompasses complex relationships among entities, making it
challenging to maintain a clear reasoning chain over longer spans. Secondly,
the abundance of linguistic diversity means that the same entities and
relationships can be expressed using different terminologies and structures,
complicating the task of identifying and establishing connections between
multiple pieces of information. Graphs provide an effective solution to
represent data rich in relational information and capture long-term
dependencies among entities. To harness the potential of graphs, our paper
introduces Structure Guided Prompt, an innovative three-stage task-agnostic
prompting framework designed to improve the multi-step reasoning capabilities
of LLMs in a zero-shot setting. This framework explicitly converts unstructured
text into a graph via LLMs and instructs them to navigate this graph using
task-specific strategies to formulate responses. By effectively organizing
information and guiding navigation, it enables LLMs to provide more accurate
and context-aware responses. Our experiments show that this framework
significantly enhances the reasoning capabilities of LLMs, enabling them to
excel in a broader spectrum of natural language scenarios.
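For concreteness, the sketch below shows one way such a three-stage pipeline could be wired up. The prompt wording and the `complete` helper (a stand-in for any chat-completion API) are illustrative assumptions, not the paper's actual prompts or implementation.

```python
from typing import Callable

def structure_guided_answer(text: str, question: str,
                            complete: Callable[[str], str]) -> str:
    """Answer `question` about `text` via a graph-then-navigate prompt chain."""
    # Stage 1: explicitly convert the unstructured text into a graph of
    # (head, relation, tail) triplets, using the LLM itself as the extractor.
    graph = complete(
        "Extract all entities and relationships from the passage below as "
        "(head, relation, tail) triplets, one per line.\n\nPassage: " + text
    )
    # Stage 2: instruct the LLM to navigate the graph with a task-specific
    # strategy; here, tracing the chain of edges relevant to the question.
    path = complete(
        "Graph:\n" + graph +
        "\n\nList, step by step, the chain of triplets needed to answer the "
        "question: " + question
    )
    # Stage 3: formulate the final answer from the traversed path, keeping
    # the reasoning grounded in the extracted structure.
    return complete(
        "Graph:\n" + graph + "\n\nReasoning path:\n" + path +
        "\n\nUsing only the reasoning path above, answer the question: " + question
    )
```

In practice, the stage 2 prompt would be tailored to the task family at hand, which is where the framework's task-specific navigation strategies come in.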
Related papers
- NT-LLM: A Novel Node Tokenizer for Integrating Graph Structure into Large Language Models [26.739650151993928]
Graphs are a fundamental data structure for representing relationships in real-world scenarios.
Applying Large Language Models (LLMs) to graph-related tasks poses significant challenges.
We introduce Node Tokenizer for Large Language Models (NT-LLM), a novel framework that efficiently encodes graph structures.
arXiv Detail & Related papers (2024-10-14T17:21:57Z)
- Scalable Representation Learning for Multimodal Tabular Transactions [14.18267117657451]
We present an innovative and scalable solution to these challenges.
We propose a parameter efficient decoder that interleaves transaction and text modalities.
We validate the efficacy of our solution on a large-scale dataset of synthetic payment transactions.
arXiv Detail & Related papers (2024-10-10T12:18:42Z)
- Large Language Models are Good Multi-lingual Learners: When LLMs Meet Cross-lingual Prompts [5.520335305387487]
We propose a novel prompting strategy, Multi-Lingual Prompt (MLPrompt).
MLPrompt translates the error-prone rule that an LLM struggles to follow into another language, thus drawing greater attention to it.
We introduce a framework integrating MLPrompt with an auto-checking mechanism for structured data generation, with a specific case study in text-to-MIP instances.
arXiv Detail & Related papers (2024-09-17T10:33:27Z)
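As a rough illustration of the MLPrompt idea, the sketch below restates a single fragile rule in a second language before prompting. The helper names, the prompt text, and the choice of German are assumptions for illustration, not MLPrompt's actual implementation.

```python
from typing import Callable

def ml_prompt(task: str, fragile_rule: str,
              complete: Callable[[str], str],
              translate: Callable[[str, str], str]) -> str:
    """Prompt with the error-prone rule restated in a second language."""
    # Translate only the rule the model tends to violate, leaving the rest of
    # the prompt in English so the restated rule stands out.
    restated = translate(fragile_rule, "German")
    prompt = (
        f"{task}\n\n"
        f"Rule (English): {fragile_rule}\n"
        f"Regel (Deutsch): {restated}\n"
        "Follow the rule stated above in both languages."
    )
    return complete(prompt)
```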
- Struct-X: Enhancing Large Language Models Reasoning with Structured Data [38.558614152006975]
Struct-X operates through five key phases: "read-model-fill-reflect-reason".
It encodes structured data into a topological space using graph embeddings.
It fills in missing entity information with knowledge retrieval modules.
The final phase involves constructing a topological network with selected tokens.
arXiv Detail & Related papers (2024-07-17T13:06:25Z)
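Only the phase names above come from the abstract; the hypothetical skeleton below merely illustrates how such a read-fill-reason flow could be chained, with every mechanism (the retrieval hook, the prompt) a placeholder assumption rather than Struct-X's actual design.

```python
from typing import Callable, List, Set, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def struct_x_like_answer(triples: List[Triple], question: str,
                         retrieve: Callable[[str], List[Triple]],
                         complete: Callable[[str], str]) -> str:
    """Hypothetical read-fill-reason flow over an input knowledge graph."""
    # "read": ingest the structured input as triples.
    known: Set[Triple] = set(triples)
    # "fill": add triples for entities mentioned in the input, via a
    # knowledge-retrieval hook (a placeholder for retrieval modules).
    for head, _, tail in triples:
        for entity in (head, tail):
            known.update(retrieve(entity))
    # "model" / "reflect" / "reason": collapsed here into one prompt that
    # asks the LLM to select relevant triples and reason over them.
    graph = "\n".join(f"({h}, {r}, {t})" for h, r, t in sorted(known))
    return complete(
        "Knowledge graph:\n" + graph +
        "\n\nSelect only the triples relevant to the question, then answer "
        "it step by step.\nQuestion: " + question
    )
```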
- RelationVLM: Making Large Vision-Language Models Understand Visual Relations [66.70252936043688]
We present RelationVLM, a large vision-language model capable of comprehending various levels and types of relations, whether across multiple images or within a video.
Specifically, we devise a multi-stage relation-aware training scheme and a series of corresponding data configuration strategies to endow RelationVLM with the capability of understanding semantic relations.
arXiv Detail & Related papers (2024-03-19T15:01:19Z)
- Integrating Large Language Models with Graphical Session-Based Recommendation [8.086277931395212]
We introduce LLMGR, a framework that brings large language models to graphical session-based recommendation.
This framework bridges the gap by harmoniously integrating LLMs with Graph Neural Networks (GNNs) for SBR tasks.
This integration seeks to leverage the complementary strengths of LLMs in natural language understanding and GNNs in relational data processing.
arXiv Detail & Related papers (2024-02-26T12:55:51Z)
- kNN-ICL: Compositional Task-Oriented Parsing Generalization with Nearest Neighbor In-Context Learning [50.40636157214161]
Task-Oriented Parsing (TOP) enables conversational assistants to interpret user commands expressed in natural language.
LLMs have achieved impressive performance in generating computer programs from natural language prompts.
This paper focuses on harnessing the capabilities of LLMs for semantic parsing tasks.
arXiv Detail & Related papers (2023-12-17T17:26:50Z)
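The general recipe behind nearest-neighbor in-context learning is easy to sketch: embed the query, retrieve the k most similar demonstrations, and prepend them to the prompt. The `embed` and `complete` helpers below are assumed stand-ins for any embedding model and chat-completion API, not the paper's exact retrieval setup.

```python
from typing import Callable, List, Tuple
import numpy as np

def knn_icl(query: str,
            pool: List[Tuple[str, str]],  # (utterance, parse) pairs
            k: int,
            embed: Callable[[str], np.ndarray],
            complete: Callable[[str], str]) -> str:
    """Parse `query` using its k nearest demonstrations as in-context examples."""
    # Normalize the query embedding once so cosine similarity is a dot product.
    q = embed(query)
    q = q / np.linalg.norm(q)

    def similarity(example: Tuple[str, str]) -> float:
        e = embed(example[0])
        return float(np.dot(q, e / np.linalg.norm(e)))

    # Keep the k demonstrations whose utterances lie nearest to the query.
    demos = sorted(pool, key=similarity, reverse=True)[:k]
    shots = "\n".join(f"Input: {u}\nParse: {p}" for u, p in demos)
    return complete(shots + f"\nInput: {query}\nParse:")
```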
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs [57.052160123387104]
We present the Disentangled Graph-Text Learner (DGTL) model, which enhances the reasoning and prediction capabilities of LLMs for text-attributed graphs (TAGs).
Our proposed DGTL model incorporates graph structure information through tailored disentangled graph neural network (GNN) layers.
Experimental evaluations demonstrate the effectiveness of the proposed DGTL model, which achieves superior or comparable performance relative to state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-27T14:00:04Z)
- Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty [52.72790059506241]
The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text.
Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks.
arXiv Detail & Related papers (2023-09-07T01:35:24Z)