XATU: A Fine-grained Instruction-based Benchmark for Explainable Text Updates
- URL: http://arxiv.org/abs/2309.11063v2
- Date: Thu, 14 Mar 2024 22:23:14 GMT
- Title: XATU: A Fine-grained Instruction-based Benchmark for Explainable Text Updates
- Authors: Haopeng Zhang, Hayate Iso, Sairam Gurajada, Nikita Bhutani,
- Abstract summary: This paper introduces XATU, the first benchmark specifically designed for fine-grained instruction-based explainable text editing.
XATU considers finer-grained text editing tasks of varying difficulty, incorporating lexical, syntactic, semantic, and knowledge-intensive edit aspects.
We demonstrate the effectiveness of instruction tuning and the impact of underlying architecture across various editing tasks.
- Score: 7.660511135287692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text editing is a crucial task of modifying text to better align with user intents. However, existing text editing benchmark datasets contain only coarse-grained instructions and lack explainability, thus resulting in outputs that deviate from the intended changes outlined in the gold reference. To comprehensively investigate the text editing capabilities of large language models (LLMs), this paper introduces XATU, the first benchmark specifically designed for fine-grained instruction-based explainable text editing. XATU considers finer-grained text editing tasks of varying difficulty (simplification, grammar check, fact-check, etc.), incorporating lexical, syntactic, semantic, and knowledge-intensive edit aspects. To enhance interpretability, we combine LLM-based annotation and human annotation, resulting in a benchmark that includes fine-grained instructions and gold-standard edit explanations. By evaluating existing LLMs against our benchmark, we demonstrate the effectiveness of instruction tuning and the impact of underlying architecture across various editing tasks. Furthermore, extensive experimentation reveals the significant role of explanations in fine-tuning language models for text editing tasks. The benchmark will be open-sourced to support reproduction and facilitate future research at https://github.com/megagonlabs/xatu.
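To make the evaluation setup above concrete, the sketch below shows one way a model could be scored on fine-grained instruction-based editing. The file name, JSON fields, prompt template, and exact-match metric are assumptions for illustration only; the actual XATU data format and evaluation scripts live in the linked repository.

```python
"""Minimal sketch: scoring a model on fine-grained instruction-based editing.

The file name, field names (instruction, source, reference), and the metric
below are illustrative assumptions, not the actual XATU schema; see the
official repository for the real data format and evaluation code.
"""
import json


def load_benchmark(path):
    """Read benchmark examples from a JSON-lines file (assumed layout)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


def build_prompt(example, ask_for_explanation=True):
    """Turn a fine-grained editing instruction into a model prompt."""
    prompt = (
        f"Instruction: {example['instruction']}\n"
        f"Input text: {example['source']}\n"
    )
    if ask_for_explanation:
        prompt += "Briefly explain the required edit, then give only the revised text.\n"
    return prompt + "Revised text:"


def evaluate(examples, generate, metric):
    """Average a metric over model outputs versus gold references.

    `generate` is any text-in/text-out callable (an LLM API wrapper, a local
    model, or a trivial baseline); `metric` compares prediction and reference.
    """
    scores = [metric(generate(build_prompt(ex)), ex["reference"]) for ex in examples]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    examples = load_benchmark("xatu_test.jsonl")  # hypothetical file name
    # Copy baseline: return the input text unchanged (no edit performed).
    copy_baseline = lambda prompt: prompt.split("Input text: ")[1].split("\n")[0]
    exact_match = lambda pred, ref: float(pred.strip() == ref.strip())
    print("exact match:", evaluate(examples, copy_baseline, exact_match))
```

Exact match is used here only because it needs no dependencies; edit-level metrics such as SARI or edit-distance-based scores would be more informative for this kind of benchmark.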
Related papers
- InstructEdit: Instruction-based Knowledge Editing for Large Language Models [39.2147118489123]
We develop an instruction-based editing technique, termed InstructEdit, which enables the editor to adapt to various tasks simultaneously using simple instructions.
Experiments on held-out unseen tasks illustrate that InstructEdit consistently surpasses previous strong baselines.
arXiv Detail & Related papers (2024-02-25T15:46:33Z)
- WikiIns: A High-Quality Dataset for Controlled Text Editing by Natural Language Instruction [56.196512595940334]
We build and release WikiIns, a high-quality controlled text editing dataset with improved informativeness.
With the high-quality annotated dataset, we propose automatic approaches to generate a large-scale "silver" training set.
arXiv Detail & Related papers (2023-10-08T04:46:39Z)
- Beyond the Chat: Executable and Verifiable Text-Editing with LLMs [87.84199761550634]
Conversational interfaces powered by Large Language Models (LLMs) have recently become a popular way to obtain feedback during document editing.
We present InkSync, an editing interface that suggests executable edits directly within the document being edited.
arXiv Detail & Related papers (2023-09-27T00:56:17Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- CoEdIT: Text Editing by Task-Specific Instruction Tuning [18.824571167583432]
We present CoEdIT, a state-of-the-art text editing system for writing assistance: a large language model fine-tuned on a diverse collection of task-specific instructions for text editing.
It takes instructions from the user specifying the attributes of the desired text and outputs the edited text.
arXiv Detail & Related papers (2023-05-17T00:05:24Z)
- Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks [11.495407637511878]
Iterative text revision improves text quality by fixing grammatical errors, rephrasing for better readability or contextual appropriateness, or reorganizing sentence structures throughout a document.
Most recent research has focused on understanding and classifying different types of edits in the iterative revision process from human-written text.
We aim to build an end-to-end text revision system that can iteratively generate helpful edits by explicitly detecting editable spans with their corresponding edit intents.
arXiv Detail & Related papers (2022-12-02T18:10:43Z)
- EditEval: An Instruction-Based Benchmark for Text Improvements [73.5918084416016]
This work presents EditEval: an instruction-based benchmark and evaluation suite for automatic evaluation of editing capabilities.
Evaluating several pre-trained models shows that InstructGPT and PEER perform best, while most baselines fall below the supervised SOTA.
Our analysis shows that commonly used metrics for editing tasks do not always correlate well, and that the prompts yielding the highest performance do not necessarily transfer robustly across models.
arXiv Detail & Related papers (2022-09-27T12:26:05Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate text revision tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves performance competitive with, and sometimes better than, state-of-the-art supervised methods on text simplification.
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
- Text Editing by Command [82.50904226312451]
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.
We address this limitation with an interactive text generation setting in which the user interacts with the system by issuing commands to edit existing text.
We show that our Interactive Editor, a transformer-based model trained on this dataset, outperforms baselines and obtains positive results in both automatic and human evaluations.
arXiv Detail & Related papers (2020-10-24T08:00:30Z)