CoEdIT: Text Editing by Task-Specific Instruction Tuning
- URL: http://arxiv.org/abs/2305.09857v2
- Date: Mon, 23 Oct 2023 23:17:13 GMT
- Title: CoEdIT: Text Editing by Task-Specific Instruction Tuning
- Authors: Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
- Abstract summary: CoEdIT is a state-of-the-art text editing system for writing assistance.
It takes instructions from the user specifying the attributes of the desired text, and outputs the edited text.
We present a large language model fine-tuned on a diverse collection of task-specific instructions for text editing.
- Score: 18.824571167583432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce CoEdIT, a state-of-the-art text editing system for writing
assistance. CoEdIT takes instructions from the user specifying the attributes
of the desired text, such as "Make the sentence simpler" or "Write it in a more
neutral style," and outputs the edited text. We present a large language model
fine-tuned on a diverse collection of task-specific instructions for text
editing (a total of 82K instructions). Our model (1) achieves state-of-the-art
performance on various text editing benchmarks, (2) is competitive with the
largest publicly available instruction-tuned LLMs while being nearly 60x
smaller, (3) is capable of generalizing to unseen edit instructions,
and (4) exhibits abilities to generalize to composite instructions containing
different combinations of edit actions. Through extensive qualitative and
quantitative analysis, we show that writers prefer the edits suggested by
CoEdIT relative to other state-of-the-art text editing models. Our code, data,
and models are publicly available at https://github.com/vipulraheja/coedit.
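Since the model is a fine-tuned sequence-to-sequence LLM that simply prepends the edit instruction to the source text, a minimal inference sketch might look like the following. It assumes the checkpoint identifier grammarly/coedit-large from the project repository; verify the identifier and instruction phrasing against the repo before relying on them.

```python
# Minimal sketch: querying an instruction-tuned text editing model.
# Assumes the "grammarly/coedit-large" checkpoint from the project repo;
# adjust the identifier if the released weights live elsewhere.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-large")
model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-large")

# The edit instruction is prepended to the text to be edited.
source = "Fix grammatical errors in this sentence: She no went to the market."
input_ids = tokenizer(source, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected output along the lines of: "She didn't go to the market."
```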
Related papers
- DocEdit-v2: Document Structure Editing Via Multimodal LLM Grounding [128.92659116774374]
We introduce DocEdit-v2, a novel framework that performs end-to-end document editing by leveraging Large Multimodal Models (LMMs).
It consists of three novel components: (1) Doc2Command, which simultaneously localizes edit regions of interest (RoIs) and disambiguates user edit requests into edit commands; (2) LLM-based Command Reformulation prompting, which tailors edit commands originally intended for specialized software into edit instructions suitable for generalist LMMs; and (3) processing of these outputs by LMMs such as GPT-4V and Gemini to parse the document layout and execute the edits on the document.
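The summary above does not give DocEdit-v2's actual prompts, so the command format and wording in the following reformulation sketch are hypothetical illustrations of the idea, not the paper's implementation:

```python
# Illustrative sketch (not DocEdit-v2's actual prompts): reformulating a
# software-style edit command into an instruction a generalist LMM can follow.
EDIT_COMMAND = 'MODIFY(region=(120, 340, 480, 400), attribute="font-size", value="14pt")'

REFORMULATION_PROMPT = f"""You are given an edit command intended for a
document-editing tool. Rewrite it as a plain-language instruction for a
multimodal model that sees the rendered page.

Command: {EDIT_COMMAND}
Instruction:"""

# A generalist LMM (e.g., GPT-4V or Gemini, as named in the summary) would
# then receive the page image plus the reformulated instruction, e.g.:
# "Increase the font size of the text in the highlighted region to 14pt."
```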
arXiv Detail & Related papers (2024-10-21T19:59:04Z)
- mEdIT: Multilingual Text Editing via Instruction Tuning [8.354138611160117]
mEdIT is a state-of-the-art multilingual text editing model for writing assistance.
We build mEdIT by curating data from multiple publicly available human-annotated text editing datasets.
We show that mEdIT generalizes effectively to new languages, outperforming multilingual baselines.
arXiv Detail & Related papers (2024-02-26T10:33:36Z)
- InstructEdit: Instruction-based Knowledge Editing for Large Language Models [39.2147118489123]
We develop an instruction-based editing technique, termed InstructEdit, which enables the editor to adapt to various tasks simultaneously using simple instructions.
Experiments on held-out, unseen tasks show that InstructEdit consistently surpasses strong prior baselines.
arXiv Detail & Related papers (2024-02-25T15:46:33Z)
- SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models [91.22477798288003]
This paper introduces SmartEdit, a novel approach to instruction-based image editing.
It exploits Multimodal Large Language Models (MLLMs) to enhance its understanding and reasoning capabilities.
We show that a small amount of complex instruction editing data can effectively stimulate SmartEdit's editing capabilities for more complex instructions.
arXiv Detail & Related papers (2023-12-11T17:54:11Z)
- Optimisation-Based Multi-Modal Semantic Image Editing [58.496064583110694]
We propose an inference-time editing optimisation to accommodate multiple editing instruction types.
By allowing the influence of each loss function to be adjusted, we build a flexible editing solution that can be tuned to user preferences.
We evaluate our method using text, pose and scribble edit conditions, and highlight our ability to achieve complex edits.
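As a rough illustration of the pattern this summary describes (the latent parameterisation, loss callables, and optimiser settings below are placeholders, not the paper's implementation), inference-time editing under user-weighted losses might be sketched as:

```python
# Conceptual sketch (not the paper's code): inference-time editing as
# optimisation of a latent under a user-weighted sum of edit losses.
import torch

def edit_by_optimisation(latent, losses, weights, steps=100, lr=0.05):
    """losses: callables mapping a latent to a scalar loss (e.g., a text-match
    loss, a pose loss, a scribble loss); weights: per-loss floats that expose
    user preference over each edit condition."""
    latent = latent.clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Weighted combination: turning a weight up makes that edit
        # condition dominate the final result.
        total = sum(w * loss_fn(latent) for w, loss_fn in zip(weights, losses))
        total.backward()
        opt.step()
    return latent.detach()
```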
arXiv Detail & Related papers (2023-11-28T15:31:11Z)
- Emu Edit: Precise Image Editing via Recognition and Generation Tasks [62.95717180730946]
We present Emu Edit, a multi-task image editing model which sets state-of-the-art results in instruction-based image editing.
We train it to multi-task across an unprecedented range of tasks, such as region-based editing, free-form editing, and computer vision tasks.
We show that Emu Edit can generalize to new tasks, such as image inpainting, super-resolution, and compositions of editing tasks, with just a few labeled examples.
arXiv Detail & Related papers (2023-11-16T18:55:58Z)
- XATU: A Fine-grained Instruction-based Benchmark for Explainable Text Updates [7.660511135287692]
This paper introduces XATU, the first benchmark specifically designed for fine-grained instruction-based explainable text editing.
XATU considers finer-grained text editing tasks of varying difficulty, incorporating lexical, syntactic, semantic, and knowledge-intensive edit aspects.
We demonstrate the effectiveness of instruction tuning and the impact of underlying architecture across various editing tasks.
arXiv Detail & Related papers (2023-09-20T04:58:59Z)
- Visual Instruction Inversion: Image Editing via Visual Prompting [34.96778567507126]
We present a method for image editing via visual prompting.
We leverage the rich, pretrained editing capabilities of text-to-image diffusion models by inverting visual prompts into editing instructions.
arXiv Detail & Related papers (2023-07-26T17:50:10Z)
- SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages [87.08880616654258]
We introduce the SWiPE dataset, which reconstructs the document-level editing process from English Wikipedia (EW) articles to paired Simple Wikipedia (SEW) articles.
We work with Wikipedia editors to annotate 5,000 EW-SEW document pairs, labeling more than 40,000 edits with 19 proposed categories.
We find that SWiPE-trained models generate more complex edits while reducing unwanted edits.
arXiv Detail & Related papers (2023-05-30T16:52:42Z)
- Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks [11.495407637511878]
Iterative text revision improves text quality by fixing grammatical errors, rephrasing for better readability or contextual appropriateness, or reorganizing sentence structures throughout a document.
Most recent research has focused on understanding and classifying different types of edits in the iterative revision process from human-written text.
We aim to build an end-to-end text revision system that can iteratively generate helpful edits by explicitly detecting editable spans with their corresponding edit intents.
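As a hedged sketch of that detect-then-revise loop (detect_spans, revise_span, and the convergence rule below are hypothetical stand-ins for the paper's trained components):

```python
# Hypothetical sketch of the two-stage iterative revision loop described
# above; detect_spans and revise_span stand in for trained models.
def iterative_revision(document, detect_spans, revise_span, max_rounds=3):
    """detect_spans(text) -> list of (span, edit_intent) pairs, e.g.,
    ("for better readability", "fluency"); revise_span applies one edit."""
    for _ in range(max_rounds):
        edits = detect_spans(document)
        if not edits:  # converged: no editable spans remain
            break
        for span, intent in edits:
            document = revise_span(document, span, intent)
    return document
```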
arXiv Detail & Related papers (2022-12-02T18:10:43Z)
- EditEval: An Instruction-Based Benchmark for Text Improvements [73.5918084416016]
This work presents EditEval, an instruction-based benchmark and evaluation suite for the automatic evaluation of text editing capabilities.
We evaluate several pre-trained models and find that InstructGPT and PEER perform best, but that most baselines fall below the supervised SOTA.
Our analysis shows that commonly used metrics for editing tasks do not always correlate well, and that optimizing for the highest-performing prompts does not necessarily yield the strongest robustness across models.
arXiv Detail & Related papers (2022-09-27T12:26:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.