PPT: A Process-based Preference Learning Framework for Self Improving Table Question Answering Models
- URL: http://arxiv.org/abs/2505.17565v1
- Date: Fri, 23 May 2025 07:24:53 GMT
- Title: PPT: A Process-based Preference Learning Framework for Self Improving Table Question Answering Models
- Authors: Wei Zhou, Mohsen Mesgar, Heike Adel, Annemarie Friedrich
- Abstract summary: We propose PPT, a Process-based Preference learning framework for table question answering. It decomposes reasoning chains into discrete states, assigns scores to each state, and samples contrastive steps for preference learning.
- Score: 16.790216473975146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Improving large language models (LLMs) with self-generated data has demonstrated success in tasks such as mathematical reasoning and code generation. Yet, no exploration has been made on table question answering (TQA), where a system answers questions based on tabular data. Addressing this gap is crucial for TQA, as effective self-improvement can boost performance without requiring costly or manually annotated data. In this work, we propose PPT, a Process-based Preference learning framework for TQA. It decomposes reasoning chains into discrete states, assigns scores to each state, and samples contrastive steps for preference learning. Experimental results show that PPT effectively improves TQA models by up to 5% on in-domain datasets and 2.4% on out-of-domain datasets, with only 8,000 preference pairs. Furthermore, the resulting models achieve competitive results compared to more complex and larger state-of-the-art TQA systems, while being five times more efficient during inference.
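To make the pipeline concrete, here is a minimal sketch of the sampling idea the abstract describes: decompose a reasoning chain into states, score the candidate steps at each state, and pair high- and low-scoring steps as preference data. All names and the scoring scheme are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
import random

@dataclass
class Step:
    text: str      # a candidate next reasoning step at some state
    score: float   # e.g., fraction of rollouts from this step reaching the gold answer

def sample_preference_pairs(steps_per_state, margin=0.5, k=8):
    """Pair the best- and worst-scored candidate step at each state."""
    pairs = []
    for candidates in steps_per_state:
        best = max(candidates, key=lambda s: s.score)
        worst = min(candidates, key=lambda s: s.score)
        if best.score - worst.score >= margin:     # keep only contrastive pairs
            pairs.append((best.text, worst.text))  # (chosen, rejected)
    return random.sample(pairs, min(k, len(pairs)))

# Toy usage: two states of a table-QA reasoning chain, two candidate steps each.
states = [
    [Step("filter rows where year == 2020", 0.9), Step("keep all rows", 0.2)],
    [Step("sum the 'sales' column", 0.8), Step("count the rows", 0.1)],
]
print(sample_preference_pairs(states))
```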
Related papers
- On Finetuning Tabular Foundation Models [29.76586200178702]
TabPFNv2 claims superior performance over traditional GBDT-based methods on small-scale datasets. We evaluate various finetuning strategies for TabPFNv2 on diverse datasets. We reveal that the success of finetuning stems from the fact that, after gradient-based adaptation, the dot products of the query representations of test objects more accurately reflect their target similarity.
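As a hypothetical illustration of the quantity this finding refers to, one could measure how well dot products between query representations track similarity of the targets; the random stand-in data below only shows the measurement, not the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
queries = rng.normal(size=(5, 16))   # query representations of 5 test objects
targets = rng.normal(size=5)         # their regression targets

rep_sim = queries @ queries.T                           # pairwise dot products
tgt_sim = -np.abs(targets[:, None] - targets[None, :])  # closer targets => higher
iu = np.triu_indices(5, k=1)                            # unique pairs only
# After finetuning, this correlation should increase, per the paper's finding.
print(np.corrcoef(rep_sim[iu], tgt_sim[iu])[0, 1])
```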
arXiv Detail & Related papers (2025-06-10T16:52:31Z)
- Reasoning-Table: Exploring Reinforcement Learning for Table Reasoning [24.624844234355734]
Reasoning-Table is the first application of reinforcement learning (RL) to table reasoning, achieving state-of-the-art performance. Reasoning-Table emerges as a robust table-reasoning large language model, surpassing larger proprietary models such as Claude-3.7-Sonnet by 4.0%.
arXiv Detail & Related papers (2025-06-02T14:18:09Z)
- T-SHIRT: Token-Selective Hierarchical Data Selection for Instruction Tuning [5.963754140027611]
Token-Selective HIeRarchical Data Selection for Instruction Tuning (T-SHIRT) is a novel data selection framework. We demonstrate that models instruction-tuned on a curated dataset can outperform those trained on the entire large-scale dataset.
arXiv Detail & Related papers (2025-06-02T04:59:17Z)
- CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis [31.953858122298517]
We propose a novel inference scaling strategy, CoT-based Synthesizer. It synthesizes superior answers by analyzing complementary information from multiple candidate responses. We show that our method significantly enhances performance, with gains of 11.8% for Llama3-8B and 10.3% for GPT-4o.
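A rough sketch of the strategy, assuming an arbitrary LLM client behind `generate`; the prompt wording is invented for illustration, not taken from the paper.

```python
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def synthesize(question: str, candidates: list[str]) -> str:
    """Synthesize one answer from several candidate responses."""
    numbered = "\n".join(f"Candidate {i + 1}: {c}" for i, c in enumerate(candidates))
    prompt = (
        f"Question: {question}\n{numbered}\n"
        "Analyze the candidates, reconcile their differences, and write a "
        "single, better-supported final answer."
    )
    return generate(prompt)
```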
arXiv Detail & Related papers (2025-01-03T06:50:06Z)
- Question: How do Large Language Models perform on the Question Answering tasks? Answer: [0.0]
Large Language Models (LLMs) have shown promising results on various NLP tasks without needing to be explicitly trained for them, using few-shot or zero-shot prompting techniques. We propose a comprehensive performance comparison between smaller fine-tuned models and out-of-the-box instruction-following LLMs on the Stanford Question Answering Dataset 2.0 (SQuAD2). Our results show that smaller, fine-tuned models outperform current state-of-the-art (SOTA) LLMs on the fine-tuned task, but recent SOTA models are able to close this gap.
arXiv Detail & Related papers (2024-12-17T13:19:38Z) - MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale [66.73529246309033]
Multimodal large language models (MLLMs) have shown significant potential in a broad range of multimodal tasks. Existing instruction-tuning datasets only provide phrase-level answers without any intermediate rationales. We introduce a scalable and cost-effective method to construct a large-scale multimodal instruction-tuning dataset with rich intermediate rationales.
arXiv Detail & Related papers (2024-12-06T18:14:24Z) - Structured List-Grounded Question Answering [11.109829342410265]
Document-grounded dialogue systems aim to answer user queries by leveraging external information.
Previous studies have mainly focused on handling free-form documents, often overlooking structured data such as lists.
This paper aims to enhance question answering systems for better interpretation and use of structured lists.
arXiv Detail & Related papers (2024-10-04T22:21:43Z) - Efficient Grammatical Error Correction Via Multi-Task Training and
Optimized Training Schedule [55.08778142798106]
We propose auxiliary tasks that exploit the alignment between the original and corrected sentences.
We formulate each task as a sequence-to-sequence problem and perform multi-task training.
We find that the order of datasets used for training and even individual instances within a dataset may have important effects on the final performance.
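One way to picture this, under an assumed task-prefix convention and an invented tag scheme, is to emit the main correction pair alongside an auxiliary alignment-based pair, both as sequence-to-sequence examples:

```python
import difflib

def edit_tags(src: str, tgt: str) -> str:
    """Auxiliary target: coarse edit operations from the token alignment."""
    sm = difflib.SequenceMatcher(a=src.split(), b=tgt.split())
    return " ".join(op for op, *_ in sm.get_opcodes())

def make_examples(src: str, tgt: str):
    return [
        ("gec: " + src, tgt),                  # main correction task
        ("tag: " + src, edit_tags(src, tgt)),  # auxiliary alignment task
    ]

print(make_examples("He go to school yesterday .", "He went to school yesterday ."))
```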
arXiv Detail & Related papers (2023-11-20T14:50:12Z) - MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering [64.6741991162092]
We present MinPrompt, a minimal data augmentation framework for open-domain question answering.
We transform the raw text into a graph structure to build connections between different factual sentences.
We then apply graph algorithms to identify the minimal set of sentences needed to cover the most information in the raw text.
We generate QA pairs based on the identified sentence subset and train the model on the selected sentences to obtain the final model.
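The selection step reads like a classic greedy set cover; below is a small sketch under that assumption, with "information" reduced to a set of facts per sentence (the graph construction and QA-pair generation steps are omitted).

```python
def greedy_cover(sentence_facts: dict[str, set[str]]) -> list[str]:
    """Greedily pick sentences until all facts are covered."""
    uncovered = set().union(*sentence_facts.values())
    chosen = []
    while uncovered:
        best = max(sentence_facts, key=lambda s: len(sentence_facts[s] & uncovered))
        if not sentence_facts[best] & uncovered:
            break  # nothing left to gain
        chosen.append(best)
        uncovered -= sentence_facts[best]
    return chosen

facts = {
    "s1": {"Paris", "France"},
    "s2": {"France", "EU"},
    "s3": {"Paris", "France", "EU"},
}
print(greedy_cover(facts))  # ['s3'] already covers everything
```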
arXiv Detail & Related papers (2023-10-08T04:44:36Z) - Parameter-Efficient Abstractive Question Answering over Tables or Text [60.86457030988444]
A long-term ambition of information seeking QA systems is to reason over multi-modal contexts and generate natural answers to user queries.
Memory-intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning on QA data in a specific modality, such as unstructured text or structured tables.
To avoid training such memory-hungry models while retaining a uniform architecture across modalities, parameter-efficient adapters add and train small task-specific bottleneck layers between transformer layers.
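A minimal sketch of such a bottleneck adapter; the sizes and placement are illustrative, and in practice one block is inserted after each frozen transformer sublayer.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)  # down-project
        self.up = nn.Linear(bottleneck, hidden)    # up-project

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's behavior as the default.
        return x + self.up(torch.relu(self.down(x)))

# Only the adapter's small parameter set is trained per task.
h = torch.randn(2, 10, 768)
print(Adapter(768)(h).shape)  # torch.Size([2, 10, 768])
```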
arXiv Detail & Related papers (2022-04-07T10:56:29Z) - Learning to Perturb Word Embeddings for Out-of-distribution QA [55.103586220757464]
We propose a simple yet effective data augmentation (DA) method based on a noise generator, which learns to perturb the word embeddings of the input questions and context without changing their semantics.
We train QA models with our word embeddings on a single source dataset and validate them on five different target domains.
Notably, the model trained with ours outperforms the model trained with more than 240K artificially generated QA pairs.
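A minimal sketch of what such a noise generator might look like; the architecture and the bounded-noise choice are assumptions, and the semantics-preserving training signal (the QA task loss) is omitted.

```python
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    def __init__(self, dim: int, scale: float = 0.1):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.scale = scale

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        noise = torch.tanh(self.proj(emb)) * self.scale  # bounded perturbation
        return emb + noise

emb = torch.randn(4, 12, 300)          # a batch of question/context embeddings
print(NoiseGenerator(300)(emb).shape)  # torch.Size([4, 12, 300])
```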
arXiv Detail & Related papers (2021-05-06T14:12:26Z) - Logic-Guided Data Augmentation and Regularization for Consistent
Question Answering [55.05667583529711]
This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions.
Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model.
arXiv Detail & Related papers (2020-04-21T17:03:08Z)
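For comparison questions, one plausible form of such a consistency regularizer ties the model's answer probabilities on a question and its logically flipped augmentation; the exact loss below is illustrative, not the paper's.

```python
import torch

def consistency_loss(p_yes: torch.Tensor, p_yes_flipped: torch.Tensor) -> torch.Tensor:
    # For "Is A larger than B?" vs. "Is B larger than A?", the two "yes"
    # probabilities should sum to 1; penalize the squared deviation.
    return ((p_yes + p_yes_flipped - 1.0) ** 2).mean()

p = torch.tensor([0.9, 0.6])        # p(yes) on original questions
p_flip = torch.tensor([0.2, 0.5])   # p(yes) on logically flipped versions
print(consistency_loss(p, p_flip))  # tensor(0.0100)
```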