NAPG: Non-Autoregressive Program Generation for Hybrid Tabular-Textual
Question Answering
- URL: http://arxiv.org/abs/2211.03462v2
- Date: Fri, 13 Oct 2023 13:20:51 GMT
- Title: NAPG: Non-Autoregressive Program Generation for Hybrid Tabular-Textual
Question Answering
- Authors: Tengxun Zhang, Hongfei Xu, Josef van Genabith, Deyi Xiong, Hongying
Zan
- Abstract summary: Current numerical reasoning methods autoregressively decode program sequences.
The accuracy of program generation drops sharply as the decoding steps unfold due to error propagation.
In this paper, we propose a non-autoregressive program generation framework.
- Score: 52.10214317661547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hybrid tabular-textual question answering (QA) requires reasoning from
heterogeneous information, and the types of reasoning are mainly divided into
numerical reasoning and span extraction. Current numerical reasoning methods
autoregressively decode program sequences, and each decoding step produces
either an operator or an operand. However, the step-by-step decoding suffers
from exposure bias, and the accuracy of program generation drops sharply as the
decoding steps unfold due to error propagation. In this paper, we propose a
non-autoregressive program generation framework, which independently generates
complete program tuples containing both operators and operands, and can address the
error propagation issue while significantly boosting the speed of program
generation. Experiments on the ConvFinQA and MultiHiertt datasets show that our
non-autoregressive program generation method can bring about substantial
improvements over the strong FinQANet (+5.06 Exe Acc and +4.80 Prog Acc points)
and MT2Net (+7.97 EM and +6.38 F1 points) baselines, establishing the new
state-of-the-art performance, while being much faster (21x) in program
generation. Finally, with increasing numbers of numerical reasoning steps the
performance drop of our method is significantly smaller than that of the
baselines. Our code will be publicly available soon.
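The program tuples described in the abstract can be made concrete with a small executor. The sketch below assumes a FinQANet-style program convention, in which each step is an (operator, operand, operand) tuple and the token "#i" refers to the result of the i-th earlier step; the operator set and helper names here are illustrative, not taken from the paper's code.

```python
# Minimal executor for operator/operand program tuples, the kind of
# complete tuples a non-autoregressive generator would emit in one shot.
# Assumes a FinQANet-style DSL: each step is (operator, operand, operand),
# and "#i" references the result of step i.

OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def resolve(token, results):
    """Turn an operand token into a number, following '#i' references."""
    if token.startswith("#"):
        return results[int(token[1:])]
    return float(token)

def execute(program):
    """Evaluate a list of (operator, operand, operand) tuples in order."""
    results = []
    for op, a, b in program:
        results.append(OPS[op](resolve(a, results), resolve(b, results)))
    return results[-1]

# Two-step program: add(3, 4) = 7, then divide(#0, 2) = 3.5
print(execute([("add", "3", "4"), ("divide", "#0", "2")]))  # 3.5
```

In an autoregressive decoder, the tokens of these tuples are emitted one at a time, so a wrong operator early on corrupts every later step; generating whole tuples independently sidesteps that chain of dependence, which is the error-propagation argument the abstract makes.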
Related papers
- Learning to Reason via Program Generation, Emulation, and Search [33.11955431589091]
Program synthesis with language models (LMs) has unlocked a large set of reasoning abilities.
Not all reasoning tasks are easily expressible as code, e.g. tasks involving commonsense reasoning, moral decision-making, and sarcasm understanding.
We propose Code Generation and Emulated EXecution (CoGEX) to extend an LM's program synthesis skills to such tasks.
arXiv Detail & Related papers (2024-05-25T19:40:50Z) - Towards Neural Synthesis for SMT-Assisted Proof-Oriented Programming [8.34623776815378]
We curate a dataset of 600K lines of open-source F* programs and proofs.
This dataset includes software used in production systems ranging from Windows and Linux to Python and Firefox.
We investigate the use of AI to synthesize programs and their proofs in F*, with promising results.
arXiv Detail & Related papers (2024-05-03T00:14:33Z) - GEC-DePenD: Non-Autoregressive Grammatical Error Correction with
Decoupled Permutation and Decoding [52.14832976759585]
Grammatical error correction (GEC) is an important NLP task that is usually solved with autoregressive sequence-to-sequence models.
We propose a novel non-autoregressive approach to GEC that decouples the architecture into a permutation network and a decoding network.
We show that the resulting network improves over previously known non-autoregressive methods for GEC.
arXiv Detail & Related papers (2023-11-14T14:24:36Z) - Exploring Equation as a Better Intermediate Meaning Representation for
Numerical Reasoning [53.2491163874712]
We use equations as IMRs to solve the numerical reasoning task.
We present a method called Boosting Numerical Reasoning by Decomposing the Generation of Equations (Bridge).
Our method improves performance by 2.2%, 0.9%, and 1.7% on the GSM8K, SVAMP, and Algebra datasets.
arXiv Detail & Related papers (2023-08-21T09:35:33Z) - Fault-Aware Neural Code Rankers [64.41888054066861]
We propose fault-aware neural code rankers that can predict the correctness of a sampled program without executing it.
Our fault-aware rankers can significantly increase the pass@1 accuracy of various code generation models.
arXiv Detail & Related papers (2022-06-04T22:01:05Z) - Leveraging Causal Inference for Explainable Automatic Program Repair [24.146216081282798]
This paper presents an interpretable approach for program repair based on sequence-to-sequence models with causal inference.
Our method is called CPR, short for causal program repair.
Experiments on four programming languages show that CPR can generate causal graphs for reasonable interpretations.
arXiv Detail & Related papers (2022-05-26T13:25:33Z) - Lossless Acceleration for Seq2seq Generation with Aggressive Decoding [74.12096349944497]
Aggressive Decoding is a novel decoding algorithm for seq2seq generation.
Our approach aims to yield identical (or better) generation compared with autoregressive decoding.
We test Aggressive Decoding on the most popular 6-layer Transformer model on GPU in multiple seq2seq tasks.
arXiv Detail & Related papers (2022-05-20T17:59:00Z) - Natural Language to Code Translation with Execution [82.52142893010563]
The paper introduces execution result-based minimum Bayes risk decoding for program selection.
We show that it improves the few-shot performance of pretrained code models on natural-language-to-code tasks.
arXiv Detail & Related papers (2022-04-25T06:06:08Z) - AutoPhase: Juggling HLS Phase Orderings in Random Forests with Deep
Reinforcement Learning [17.584552398664737]
AutoPhase is a framework that takes a program and uses deep reinforcement learning to find a sequence of compilation passes that minimizes its execution time.
We show that AutoPhase improves circuit performance by 28% when compared to using the -O3 compiler flag.
Unlike existing state-of-the-art solutions, our deep reinforcement learning solution shows promising results in generalizing to real benchmarks.
arXiv Detail & Related papers (2020-03-02T05:35:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.