StructGPT: A General Framework for Large Language Model to Reason over
Structured Data
- URL: http://arxiv.org/abs/2305.09645v2
- Date: Mon, 23 Oct 2023 07:51:23 GMT
- Title: StructGPT: A General Framework for Large Language Model to Reason over
Structured Data
- Authors: Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao and
Ji-Rong Wen
- Abstract summary: We develop an Iterative Reading-then-Reasoning (IRR) approach for solving question answering tasks based on structured data.
Our approach can significantly boost the performance of ChatGPT and achieve performance comparable to full-data supervised-tuning baselines.
- Score: 117.13986738340027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study how to improve the zero-shot reasoning ability of
large language models (LLMs) over structured data in a unified way. Inspired by
the study on tool augmentation for LLMs, we develop an Iterative
Reading-then-Reasoning (IRR) approach, called StructGPT, for solving question
answering tasks based on structured data. In our approach, we construct
specialized functions to collect relevant evidence from structured data
(i.e., reading) and let LLMs concentrate on the reasoning task based on the
collected information (i.e., reasoning). Specifically, we propose an
invoking-linearization-generation procedure to support LLMs in reasoning over
structured data with the help of external interfaces. By iterating this
procedure with the provided interfaces, our approach gradually approaches the
target answer to a given query. Extensive experiments conducted on three types
of structured data demonstrate the effectiveness of our approach, which can
significantly boost the performance of ChatGPT and achieve performance
comparable to full-data supervised-tuning baselines. Our code and data are
publicly available at https://github.com/RUCAIBox/StructGPT.
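To make the invoking-linearization-generation procedure more concrete, below is a minimal sketch of one possible IRR loop. This is an illustration only, not the authors' released implementation: the helper names (invoke_interface, linearize, ask_llm), the state handling, and the termination check are hypothetical placeholders for the specialized reading interfaces and the LLM call described in the abstract.

```python
# Minimal sketch of an Iterative Reading-then-Reasoning (IRR) loop.
# All helpers here are hypothetical placeholders, not the StructGPT API.

def invoke_interface(structured_data, query, state):
    """Reading step (invoking): collect evidence relevant to the query.
    Placeholder: returns a slice of the data keyed by the current state."""
    return structured_data.get(state, [])

def linearize(evidence):
    """Linearization step: turn structured evidence into flat text an LLM can read."""
    return "; ".join(str(item) for item in evidence)

def ask_llm(prompt):
    """Generation step: call an LLM (e.g., ChatGPT). Stubbed out here."""
    # In practice this would call a chat-completion API and parse the reply.
    return {"answer": None, "next_state": None}

def iterative_reading_then_reasoning(structured_data, query, max_steps=5):
    """Iterate reading (evidence collection) and reasoning until an answer is found."""
    state = "start"
    for _ in range(max_steps):
        evidence = invoke_interface(structured_data, query, state)  # invoking
        context = linearize(evidence)                               # linearization
        prompt = (f"Question: {query}\nEvidence: {context}\n"
                  "Answer the question or choose the next piece of evidence to read.")
        result = ask_llm(prompt)                                    # generation
        if result["answer"] is not None:
            return result["answer"]
        state = result["next_state"]                                # iterate
    return None
```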
Related papers
- On The Role of Prompt Construction In Enhancing Efficacy and Efficiency of LLM-Based Tabular Data Generation [16.79923685316516]
We explore three prompt construction protocols: Expert-guided, LLM-guided, and Novel-Mapping.
We find that context-enriched prompts lead to significantly improved data generation quality and training efficiency.
arXiv Detail & Related papers (2024-09-06T00:02:09Z)
- Enhancing LLM's Cognition via Structurization [41.13997892843677]
Large language models (LLMs) process input contexts through a causal and sequential perspective.
This paper presents a novel concept of context structurization.
Specifically, we transform the plain, unordered contextual sentences into well-ordered and hierarchically structurized elements.
arXiv Detail & Related papers (2024-07-23T12:33:58Z)
- Struct-X: Enhancing Large Language Models Reasoning with Structured Data [38.558614152006975]
Struct-X operates through five key phases: "read-model-fill-reflect-reason".
It encodes structured data into a topological space using graph embeddings.
It fills in missing entity information with knowledge retrieval modules.
The final phase involves constructing a topological network with selected tokens.
arXiv Detail & Related papers (2024-07-17T13:06:25Z)
- StructLM: Towards Building Generalist Models for Structured Knowledge Grounding [49.10029030628653]
Large language models' (LLMs) ability to process structured data lags behind state-of-the-art (SoTA) models by an average of 35%.
We train a series of models, referred to as StructLM, based on the Mistral and CodeLlama model families, ranging from 7B to 34B parameters.
Our StructLM series surpasses task-specific models on 16 out of 18 evaluated datasets and establishes new SoTA performance on 8 SKG tasks.
arXiv Detail & Related papers (2024-02-26T15:47:01Z)
- Leveraging Large Language Models for Structure Learning in Prompted Weak Supervision [24.866270447991752]
We show that our Structure Refining Module improves the PromptedWS pipeline by up to 12.7 points on the benchmark tasks.
We also explore the trade-offs between efficiency and performance with comprehensive ablation experiments and analysis.
arXiv Detail & Related papers (2024-02-02T19:45:39Z)
- Guiding Language Model Reasoning with Planning Tokens [122.43639723387516]
Large language models (LLMs) have recently attracted considerable interest for their ability to perform complex reasoning tasks.
We propose a hierarchical generation scheme to encourage more structured generation of chain-of-thought steps.
Our approach requires a negligible increase in trainable parameters (0.001%) and can be applied through either full fine-tuning or a more parameter-efficient scheme.
arXiv Detail & Related papers (2023-10-09T13:29:37Z)
- Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data? [49.688233418425995]
Struc-Bench is a comprehensive benchmark featuring prominent Large Language Models (LLMs).
We propose two innovative metrics, P-Score (Prompting Score) and H-Score (Heuristical Score).
Our experiments show that applying our structure-aware fine-tuning to LLaMA-7B leads to substantial performance gains.
arXiv Detail & Related papers (2023-09-16T11:31:58Z)
- Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty [52.72790059506241]
The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text.
Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks.
arXiv Detail & Related papers (2023-09-07T01:35:24Z)
- Instruction Tuning for Large Language Models: A Survey [52.86322823501338]
We present a systematic review of the literature, covering the general methodology of instruction tuning (IT), the construction of IT datasets, the training of IT models, and applications to different modalities and domains.
We also review the potential pitfalls of IT and criticism against it, point out current deficiencies of existing strategies, and suggest avenues for fruitful research.
arXiv Detail & Related papers (2023-08-21T15:35:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.