Reasoning over Hybrid Chain for Table-and-Text Open Domain QA
- URL: http://arxiv.org/abs/2201.05880v1
- Date: Sat, 15 Jan 2022 16:11:55 GMT
- Title: Reasoning over Hybrid Chain for Table-and-Text Open Domain QA
- Authors: Wanjun Zhong, Junjie Huang, Qian Liu, Ming Zhou, Jiahai Wang, Jian
Yin, and Nan Duan
- Abstract summary: We propose a ChAin-centric Reasoning and Pre-training framework (CARP).
CARP utilizes a hybrid chain to model the explicit intermediate reasoning process across table and text for question answering.
We also propose a novel chain-centric pre-training method to enhance the pre-trained model in identifying the cross-modality reasoning process.
- Score: 69.8436986668218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tabular and textual question answering requires systems to perform reasoning
over heterogeneous information, considering table structure and the connections
between table and text. In this paper, we propose a ChAin-centric Reasoning and
Pre-training framework (CARP). CARP utilizes a hybrid chain to model the
explicit intermediate reasoning process across table and text for question
answering. We also propose a novel chain-centric pre-training method to enhance
the pre-trained model in identifying the cross-modality reasoning process and
to alleviate the data sparsity problem. This method constructs a large-scale
reasoning corpus by synthesizing pseudo heterogeneous reasoning paths from
Wikipedia and generating corresponding questions. We evaluate our system on
OTT-QA, a large-scale table-and-text open-domain question answering benchmark,
and our system achieves state-of-the-art performance. Further analyses
illustrate that the explicit hybrid chain offers substantial performance
improvement and interpretability of the intermediate reasoning process, and
that the chain-centric pre-training boosts performance on chain extraction.
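As a rough, unofficial sketch of the hybrid-chain idea (the class names, fields, and toy data below are our own assumptions, not the paper's released code), a chain can be modeled as an ordered sequence of hops that alternate between table cells and linked passage sentences:

```python
from dataclasses import dataclass
from typing import List, Literal

@dataclass
class ChainStep:
    """One hop in a hybrid chain: evidence from either a table or a passage."""
    modality: Literal["table", "text"]  # which modality this hop draws on
    source: str                         # table name or passage title
    content: str                        # linearized cell (row | column | value) or sentence

@dataclass
class HybridChain:
    """An explicit intermediate reasoning path from question to answer."""
    question: str
    steps: List[ChainStep]
    answer: str

    def render(self) -> str:
        """Linearize the chain, e.g. as input to a reader or a chain-extraction model."""
        hops = " -> ".join(f"[{s.modality}] {s.source}: {s.content}" for s in self.steps)
        return f"Q: {self.question} | {hops} | A: {self.answer}"

# Toy two-hop example: a table cell links to a passage that contains the answer.
chain = HybridChain(
    question="Which country is the 2014 winner's club based in?",
    steps=[
        ChainStep("table", "Winners", "2014 | winner | FC Example"),
        ChainStep("text", "FC Example", "FC Example is a football club based in Spain."),
    ],
    answer="Spain",
)
print(chain.render())
```

Per the abstract, the pre-training corpus is built by synthesizing pseudo paths of exactly this table-to-text shape from Wikipedia (tables plus their hyperlinked passages) and generating questions over them, so the pre-trained model learns to identify such cross-modality paths.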
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address the limitations of either approach used alone.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z)
- A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution [29.34028569245905]
We formalize the decision-making process of the baseline ECR system using a Structural Causal Model (SCM).
We develop a rationale-centric counterfactual data augmentation method with LLM-in-the-loop.
Our approach achieves state-of-the-art performance on three popular cross-document ECR benchmarks and demonstrates robustness in out-of-domain scenarios.
arXiv Detail & Related papers (2024-04-02T13:15:07Z)
- ChainLM: Empowering Large Language Models with Improved Chain-of-Thought Prompting [124.69672273754144]
Chain-of-Thought (CoT) prompting can enhance the reasoning capabilities of large language models (LLMs).
Existing CoT approaches usually focus on simpler reasoning tasks and thus result in low-quality and inconsistent CoT prompts.
We introduce CoTGenius, a novel framework designed for the automatic generation of superior CoT prompts.
arXiv Detail & Related papers (2024-03-21T11:34:26Z)
- SEER: Facilitating Structured Reasoning and Explanation via Reinforcement Learning [29.514755268807868]
We propose SEER, a novel method that maximizes a structure-based return to facilitate structured reasoning and explanation.
Our proposed structure-based return precisely describes the hierarchical and branching structure inherent in structured reasoning.
Our experiments show that SEER significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-24T06:10:51Z)
- Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding [79.9461269253121]
We propose the Chain-of-Table framework, where tabular data is explicitly used in the reasoning chain as a proxy for intermediate thoughts.
Chain-of-Table achieves new state-of-the-art performance on WikiTQ, FeTaQA, and TabFact benchmarks.
arXiv Detail & Related papers (2024-01-09T07:46:26Z)
- Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation [19.327008532572645]
Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question.
We propose applying Chain-of-Thought prompting, an in-context learning strategy for reasoning, to this task.
We conduct extensive experiments over three public KBQG datasets.
arXiv Detail & Related papers (2023-10-12T15:08:14Z)
- Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models [68.05046964022844]
Large language models (LLMs) have unveiled remarkable reasoning capabilities by exploiting chain-of-thought (CoT) prompting.
We propose GeM-CoT, a Generalizable CoT prompting mechanism in Mixed-task scenarios where the type of input questions is unknown.
With this technical design, GeM-CoT simultaneously enjoys superior generalization capabilities and remarkable performance on 10 public reasoning tasks and 23 BBH tasks.
arXiv Detail & Related papers (2023-10-10T15:10:03Z)
- Open-domain Question Answering via Chain of Reasoning over Heterogeneous Knowledge [82.5582220249183]
We propose a novel open-domain question answering (ODQA) framework for answering single/multi-hop questions across heterogeneous knowledge sources.
Unlike previous methods that rely solely on the retriever to gather all evidence in isolation, our intermediary performs a chain of reasoning over the retrieved set, as sketched below.
Our system achieves competitive performance on two ODQA datasets, OTT-QA and NQ, over tables and passages from Wikipedia.
arXiv Detail & Related papers (2022-10-22T03:21:32Z)
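A minimal sketch of the chain-of-reasoning idea from this last entry, assuming a pairwise link scorer (the function names and the word-overlap stand-in scorer are illustrative assumptions, not the authors' implementation): rather than treating each retrieved item in isolation, an intermediary pairs table segments with passages by joint relevance before a reader consumes the chains.

```python
from typing import Callable, List, Tuple

def chain_over_retrieved(
    question: str,
    tables: List[str],
    passages: List[str],
    link_score: Callable[[str, str], float],  # e.g. a cross-encoder; assumed interface
    top_k: int = 2,
) -> List[Tuple[str, str]]:
    """Form table -> text evidence chains by scoring every (table, passage) pair
    jointly against the question and keeping the top-k chains for the reader."""
    scored = [
        (link_score(question + " " + table, passage), table, passage)
        for table in tables
        for passage in passages
    ]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [(table, passage) for _, table, passage in scored[:top_k]]

# Trivial word-overlap scorer, only to keep the sketch self-contained and runnable.
def overlap(a: str, b: str) -> float:
    return float(len(set(a.lower().split()) & set(b.lower().split())))

chains = chain_over_retrieved(
    "Where was the 2014 winner's club founded?",
    tables=["Winners: 2014 | winner | FC Example"],
    passages=["FC Example was founded in Madrid.", "An unrelated passage."],
    link_score=overlap,
)
print(chains)  # the linked (table, passage) pair ranks above the unrelated one
```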
This list is automatically generated from the titles and abstracts of the papers on this site.