A Survey of Table Reasoning with Large Language Models
- URL: http://arxiv.org/abs/2402.08259v1
- Date: Tue, 13 Feb 2024 07:17:52 GMT
- Title: A Survey of Table Reasoning with Large Language Models
- Authors: Xuanliang Zhang, Dingzirui Wang, Longxu Dou, Qingfu Zhu, Wanxiang Che
- Abstract summary: Using Large Language Models (LLMs) has become the mainstream method for table reasoning.
We analyze the mainstream techniques used to improve table reasoning performance in the LLM era.
We provide research directions from both the improvement of existing methods and the expansion of practical applications.
- Score: 55.2326738851157
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Table reasoning aims to generate the answer to a question, following the
user requirement, according to the provided table and, optionally, a text
description of the table; it effectively improves the efficiency of obtaining
information. Recently, using Large Language Models
(LLMs) has become the mainstream method for table reasoning, because it not
only significantly reduces the annotation cost but also exceeds the performance
of previous methods. However, existing research still lacks a summary of
LLM-based table reasoning work. Because of this gap, questions about which
techniques can improve table reasoning performance in the era of LLMs, why
LLMs excel at table reasoning, and how to enhance table reasoning abilities
in the future remain largely unexplored, which significantly limits research
progress. To answer the above questions and
advance table reasoning research with LLMs, we present this survey to analyze
existing research, inspiring future work. In this paper, we analyze the
mainstream techniques used to improve table reasoning performance in the LLM
era, and the advantages of LLMs over pre-LLM methods for solving table
reasoning. We provide research directions from both the improvement of existing
methods and the expansion of practical applications to inspire future research.
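To make the surveyed task concrete: table reasoning serializes a table (plus an optional description) into text, appends the user question, and asks an LLM for the answer. The following is a minimal sketch of that pipeline, not a method from the survey; `call_llm` is a hypothetical stand-in for any chat or completion API.

```python
# Minimal sketch of LLM-based table reasoning: serialize the table into text,
# append the question, and send the prompt to a model.

def serialize_table(header, rows):
    """Render a table as a pipe-separated, markdown-like string."""
    lines = [" | ".join(header)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return "\n".join(lines)

def build_prompt(header, rows, question, description=None):
    parts = []
    if description:
        parts.append(f"Table description: {description}")
    parts.append("Table:\n" + serialize_table(header, rows))
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

def call_llm(prompt):
    """Hypothetical placeholder for any chat/completion API."""
    raise NotImplementedError

header = ["Country", "Gold", "Silver", "Bronze"]
rows = [["Norway", 16, 8, 13], ["Germany", 12, 10, 5]]
prompt = build_prompt(header, rows, "Which country won the most gold medals?")
print(prompt)                # inspect the serialized input
# answer = call_llm(prompt)  # plug in a real model here
```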
Related papers
- TableRAG: Million-Token Table Understanding with Language Models [53.039560091592215] (arXiv 2024-10-07)
TableRAG is a Retrieval-Augmented Generation (RAG) framework specifically designed for LM-based table understanding.
TableRAG leverages query expansion combined with schema and cell retrieval to pinpoint crucial information before providing it to the LMs.
Our results demonstrate that TableRAG achieves the highest retrieval quality, leading to new state-of-the-art performance on large-scale table understanding.
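The summary above only names the pipeline stages (query expansion, schema retrieval, cell retrieval). The snippet below is a rough sketch of that idea under simplifying assumptions: keyword overlap stands in for the paper's actual retrievers, and `expand_query` is a hypothetical stand-in for LLM-based query expansion.

```python
# Rough sketch of schema + cell retrieval before prompting an LM.

def expand_query(question):
    """Hypothetical query expansion: a real system would ask an LLM for
    likely column names and cell values; here we just tokenize the question."""
    return set(question.lower().replace("?", "").split())

def retrieve_schema(columns, query_terms, k=3):
    """Rank column names by word overlap with the expanded query."""
    scored = [(len(query_terms & set(c.lower().split("_"))), c) for c in columns]
    return [c for score, c in sorted(scored, reverse=True)[:k] if score > 0]

def retrieve_cells(columns, rows, query_terms, k=5):
    """Return (column, value) pairs whose value matches a query term."""
    hits = [(col, val)
            for row in rows
            for col, val in zip(columns, row)
            if str(val).lower() in query_terms]
    return hits[:k]

columns = ["player_name", "team", "goals", "season"]
rows = [["Messi", "PSG", 16, "2022"], ["Haaland", "City", 36, "2023"]]
terms = expand_query("How many goals did Haaland score in 2023?")
print(retrieve_schema(columns, terms))       # -> ['goals']
print(retrieve_cells(columns, rows, terms))  # -> [('player_name', 'Haaland'), ('season', '2023')]
```

Only the retrieved columns and cells (rather than the full table) would then be packed into the LM prompt, which is what keeps million-token tables tractable.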
- Enhancing Temporal Understanding in LLMs for Semi-structured Tables [50.59009084277447] (arXiv 2024-07-22)
We conduct a comprehensive analysis of temporal datasets to pinpoint the specific limitations of large language models (LLMs).
Our investigation leads to enhancements in TempTabQA, a dataset specifically designed for temporal question answering.
We introduce a novel approach, C.L.E.A.R., to strengthen LLM capabilities in this domain.
- ALTER: Augmentation for Large-Table-Based Reasoning [5.164923314261229] (arXiv 2024-07-03)
ALTER (Augmentation for Large-Table-Based Reasoning) is a framework designed to harness the latent augmentation potential of free-form natural language (NL) questions.
By utilizing only a small subset of relevant data from the table, ALTER achieves outstanding performance on table-based reasoning benchmarks.
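The summary states only that ALTER works from a small, relevant subset of the table. As a loose illustration of that sub-table idea (not the paper's actual augmentors), one might score rows against the question and keep the top-k before building the prompt:

```python
# Loose illustration of sub-table selection: keep only the rows most relevant
# to the question before prompting the model.

def select_rows(rows, question, k=3):
    terms = set(question.lower().replace("?", "").split())
    def score(row):
        return sum(str(cell).lower() in terms for cell in row)
    return sorted(rows, key=score, reverse=True)[:k]

rows = [["2019", "Liverpool", 97], ["2020", "Liverpool", 99], ["2021", "City", 86]]
print(select_rows(rows, "How many points did Liverpool get in 2020?", k=1))
# -> [['2020', 'Liverpool', 99]]
```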
- Look Before You Decide: Prompting Active Deduction of MLLMs for Assumptive Reasoning [68.83624133567213] (arXiv 2024-04-19)
We show that most prevalent MLLMs can be easily fooled by the introduction of a presupposition into the question.
We also propose a simple yet effective method, Active Deduction (AD), to encourage the model to actively perform composite deduction.
- Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding [79.9461269253121] (arXiv 2024-01-09)
We propose the Chain-of-Table framework, where tabular data is explicitly used in the reasoning chain as a proxy for intermediate thoughts.
Chain-of-Table achieves new state-of-the-art performance on WikiTQ, FeTaQA, and TabFact benchmarks.
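The core idea in the summary is that the table itself, transformed step by step, plays the role of the chain of thought. The sketch below mimics that loop with two table operations and a hypothetical `plan_next_operation` standing in for the LLM planner; it is an illustration of the idea, not the paper's implementation.

```python
# Sketch of a Chain-of-Table-style loop: each reasoning step applies an
# operation that produces a new, smaller table instead of a free-text thought.

def select_columns(table, cols):
    header, rows = table
    idx = [header.index(c) for c in cols]
    return cols, [[r[i] for i in idx] for r in rows]

def filter_rows(table, column, value):
    header, rows = table
    i = header.index(column)
    return header, [r for r in rows if str(r[i]) == str(value)]

OPS = {"select_columns": select_columns, "filter_rows": filter_rows}

def plan_next_operation(table, question, step):
    """Hypothetical planner. A real system would ask the LLM to choose the
    next operation given the current table; here the plan is hard-coded."""
    plan = [("filter_rows", {"column": "Team", "value": "Liverpool"}),
            ("select_columns", {"cols": ["Season", "Points"]}),
            None]  # None terminates the chain
    return plan[step]

table = (["Season", "Team", "Points"],
         [["2020", "Liverpool", 99], ["2021", "City", 86]])
question = "How many points did Liverpool get?"
step = 0
while (op := plan_next_operation(table, question, step)) is not None:
    name, kwargs = op
    table = OPS[name](table, **kwargs)
    step += 1
print(table)  # (['Season', 'Points'], [['2020', 99]]); the final answer is 99
```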
- TAP4LLM: Table Provider on Sampling, Augmenting, and Packing Semi-structured Data for Large Language Model Reasoning [55.33939289989238] (arXiv 2023-12-14)
We propose TAP4LLM as a versatile pre-processor suite for leveraging large language models (LLMs) in table-based tasks effectively.
It covers three distinct components: (1) table sampling to decompose large tables into manageable sub-tables based on query semantics, (2) table augmentation to enhance tables with additional knowledge from external sources or models, and (3) table packing & serialization to convert tables into various formats suitable for LLMs' understanding.
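A minimal sketch of that three-stage pre-processor is given below. All helpers are illustrative stand-ins under simplifying assumptions, not the paper's actual sampling, augmentation, or serialization components.

```python
# Sketch of a sample -> augment -> pack pipeline for table pre-processing.

def sample_subtable(header, rows, query, max_rows=2):
    """(1) Keep only the rows most relevant to the query."""
    terms = set(query.lower().replace("?", "").split())
    scored = sorted(rows, key=lambda r: sum(str(c).lower() in terms for c in r),
                    reverse=True)
    return header, scored[:max_rows]

def augment(header, rows, extra_knowledge):
    """(2) Attach external knowledge (e.g., retrieved facts or column metadata)."""
    return {"header": header, "rows": rows, "knowledge": extra_knowledge}

def pack(table):
    """(3) Serialize the augmented table into a prompt-ready string."""
    head = " | ".join(table["header"])
    body = "\n".join(" | ".join(map(str, r)) for r in table["rows"])
    notes = "\n".join(f"Note: {k}" for k in table["knowledge"])
    return f"{head}\n{body}\n{notes}"

header = ["City", "Country", "Population"]
rows = [["Oslo", "Norway", 709000], ["Bergen", "Norway", 286000]]
sub = sample_subtable(header, rows, "What is the population of Oslo?", max_rows=1)
print(pack(augment(*sub, ["Population figures are 2023 estimates."])))
```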
- Effective Distillation of Table-based Reasoning Ability from LLMs [23.35522261002175] (arXiv 2023-09-22)
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of natural language processing tasks.
Their enormous parameter size and extremely high requirements for compute power pose challenges for their practical deployment.
Recent research has revealed that specific capabilities of LLMs, such as numerical reasoning, can be transferred to smaller models through distillation.
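As a rough sketch of the data-generation half of such distillation (the fine-tuning of the student model is out of scope here), a large teacher model can be asked for step-by-step table-reasoning traces, which are filtered against gold answers and saved as the student's training set. `teacher_generate` is a hypothetical LLM call, not an API from the paper.

```python
# Sketch of distillation data generation: collect teacher reasoning traces
# for table questions and keep those consistent with the gold answer.
import json

def teacher_generate(prompt: str) -> str:
    """Hypothetical call to a large teacher model returning a reasoning chain."""
    raise NotImplementedError

def build_distillation_set(examples, out_path="distill.jsonl"):
    with open(out_path, "w", encoding="utf-8") as f:
        for table_text, question, gold in examples:
            prompt = (f"Table:\n{table_text}\n\nQuestion: {question}\n"
                      "Explain your reasoning step by step, then give the answer.")
            rationale = teacher_generate(prompt)
            # Keep only traces whose final answer matches the gold label, a
            # common filtering step when distilling reasoning ability.
            if gold.lower() in rationale.lower():
                f.write(json.dumps({"input": prompt, "target": rationale}) + "\n")
```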
- Large Language Models are few(1)-shot Table Reasoners [31.036914270008978] (arXiv 2022-10-13)
Large language models (LLMs) are generally excellent few-shot reasoners for text reasoning tasks.
In this paper, we aim at understanding how well LLMs can perform on table tasks with few-shot in-context learning.
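Few-shot in-context learning here simply means prepending one or more worked table-question-answer demonstrations to the test instance. The sketch below builds a 1-shot prompt with a made-up demonstration; `call_llm` is the same hypothetical completion call as in the earlier sketch.

```python
# Sketch of 1-shot in-context table reasoning: one worked example is prepended
# to the test table and question, and the LLM completes the answer.

DEMO = """Table:
Year | Champion
2018 | France
2014 | Germany

Question: Who won in 2014?
Answer: Germany"""

def one_shot_prompt(table_text, question):
    return f"{DEMO}\n\nTable:\n{table_text}\n\nQuestion: {question}\nAnswer:"

test_table = "Player | Goals\nKane | 30\nSalah | 27"
print(one_shot_prompt(test_table, "Who scored more goals?"))
# answer = call_llm(one_shot_prompt(test_table, "Who scored more goals?"))
```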
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.