[Lions: 1] and [Tigers: 2] and [Bears: 3], Oh My! Literary Coreference
Annotation with LLMs
- URL: http://arxiv.org/abs/2401.17922v1
- Date: Wed, 31 Jan 2024 15:35:21 GMT
- Title: [Lions: 1] and [Tigers: 2] and [Bears: 3], Oh My! Literary Coreference
Annotation with LLMs
- Authors: Rebecca M. M. Hicke and David Mimno
- Abstract summary: We develop new language-model-based seq2seq systems for literary studies.
We create, evaluate, and release several trained models for coreference.
- Score: 4.2243058640527575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Coreference annotation and resolution is a vital component of computational
literary studies. However, it has previously been difficult to build high
quality systems for fiction. Coreference requires complicated structured
outputs, and literary text involves subtle inferences and highly varied
language. New language-model-based seq2seq systems present the opportunity to
solve both these problems by learning to directly generate a copy of an input
sentence with markdown-like annotations. We create, evaluate, and release
several trained models for coreference, as well as a workflow for training new
models.
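The title itself demonstrates the output format: the model copies its input and wraps each mention in a bracketed span carrying a coreference cluster id. A minimal sketch of running such a model with Hugging Face transformers, assuming a hypothetical checkpoint path (the trained models are released with the paper):

```python
# Sketch of the annotate-by-generation idea: a seq2seq model emits a
# copy of the input sentence with markdown-like coreference tags.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "path/to/literary-coref-seq2seq"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

sentence = "Lions and tigers and bears, oh my!"
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Expected style of output: a copy of the input in which each mention is
# wrapped in a bracketed span carrying its coreference cluster id, e.g.
# "[Lions: 1] and [Tigers: 2] and [Bears: 3], oh my!"
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Mention spans and cluster ids can then be recovered from the generated string with a simple pass over the bracketed annotations.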
Related papers
- Towards Chapter-to-Chapter Context-Aware Literary Translation via Large Language Models [16.96647110733261]
Discourse phenomena in existing document-level translation datasets are sparse.
Most existing document-level corpora and context-aware machine translation methods rely on an unrealistic assumption of sentence-level alignments.
We propose a more pragmatic and challenging setting for context-aware translation, termed chapter-to-chapter (Ch2Ch) translation.
arXiv Detail & Related papers (2024-07-12T04:18:22Z)
- Learning to Refine with Fine-Grained Natural Language Feedback [81.70313509881315]
We propose looking at refinement with feedback as a composition of three distinct LLM competencies.
A key property of this approach is that the step 2 critique model can give fine-grained feedback about errors.
We show that models of different capabilities benefit from refining with this approach on the task of improving factual consistency of document grounded summaries.
arXiv Detail & Related papers (2024-07-02T16:15:01Z)
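The three-competency decomposition above (detect, critique, refine) lends itself to a straightforward pipeline. A minimal sketch, where llm() is a hypothetical stand-in for any chat-completion call and the prompts are illustrative:

```python
# Sketch of the detect -> critique -> refine decomposition; llm() is a
# hypothetical stand-in for an actual model API call.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a chat-completion call here")

def refine_summary(document: str, summary: str) -> str:
    # Step 1: detection. Does the summary need refinement at all?
    verdict = llm(f"Document:\n{document}\n\nSummary:\n{summary}\n\n"
                  "Does the summary contain factual errors? Answer yes or no.")
    if verdict.strip().lower().startswith("no"):
        return summary
    # Step 2: critique. Localize each error with fine-grained feedback.
    feedback = llm(f"Document:\n{document}\n\nSummary:\n{summary}\n\n"
                   "Quote each factually inconsistent span and explain the error.")
    # Step 3: refinement. Rewrite the summary using the feedback.
    return llm(f"Document:\n{document}\n\nSummary:\n{summary}\n\n"
               f"Feedback:\n{feedback}\n\n"
               "Rewrite the summary to be fully consistent with the document.")
```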
- Shortcomings of LLMs for Low-Resource Translation: Retrieval and Understanding are Both the Problem [4.830018386227]
This work investigates the in-context learning abilities of pretrained large language models (LLMs) when instructed to translate text from a low-resource language into a high-resource language as part of an automated machine translation pipeline.
We conduct a set of experiments translating Southern Quechua to Spanish and examine the informativity of various types of information retrieved from a constrained database of digitized pedagogical materials (dictionaries and grammar lessons) and parallel corpora.
arXiv Detail & Related papers (2024-06-21T20:02:22Z)
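A minimal sketch of the retrieval-augmented prompting setup this entry describes, with a toy Quechua-Spanish gloss table standing in for the paper's database of pedagogical materials; the prompt wording is an illustrative assumption:

```python
# Sketch of retrieval-augmented prompting for low-resource MT: look up
# glosses for source words and prepend them to the translation prompt.
dictionary = {"wasi": "casa", "mikhuna": "comida"}  # toy Quechua-Spanish glosses

def build_prompt(sentence: str) -> str:
    # Retrieve entries for any source word found in the gloss table.
    glosses = [f"{word}: {dictionary[word]}"
               for word in sentence.split() if word in dictionary]
    return ("Dictionary entries:\n" + "\n".join(glosses) +
            f"\n\nTranslate from Southern Quechua to Spanish: {sentence}")

print(build_prompt("wasi mikhuna"))
```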
- Exploring Precision and Recall to assess the quality and diversity of LLMs [82.21278402856079]
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral.
This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora.
arXiv Detail & Related papers (2024-02-16T13:53:26Z)
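One common way to instantiate distribution-level precision and recall is with k-nearest-neighbor support estimates over sample embeddings. A minimal sketch, where random vectors stand in for real text embeddings and the k-NN estimator is an assumption, not necessarily the paper's exact formulation:

```python
# Sketch of distribution-level precision/recall over text embeddings:
# a point counts as covered if it falls inside some k-NN ball of the
# other sample.
import numpy as np

def knn_radii(X: np.ndarray, k: int) -> np.ndarray:
    # Distance from each point in X to its k-th nearest neighbor in X.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the point itself

def coverage(A: np.ndarray, B: np.ndarray, k: int = 3) -> float:
    # Fraction of points in A lying inside at least one k-NN ball of B.
    radii = knn_radii(B, k)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

real = np.random.randn(200, 16)        # embeddings of human-written text
generated = np.random.randn(200, 16)   # embeddings of model generations
precision = coverage(generated, real)  # quality: generations near real data
recall = coverage(real, generated)     # diversity: real data covered by model
print(f"precision={precision:.2f} recall={recall:.2f}")
```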
- Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution [48.86322922826514]
This paper defines a new task of Knowledge-aware Language Model Attribution (KaLMA).
First, we extend attribution source from unstructured texts to Knowledge Graph (KG), whose rich structures benefit both the attribution performance and working scenarios.
Second, we propose a new "Conscious Incompetence" setting that accounts for an incomplete knowledge repository.
Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text citation alignment.
arXiv Detail & Related papers (2023-10-09T11:45:59Z)
- Towards LLM-guided Causal Explainability for Black-box Text Classifiers [16.36602400590088]
We aim to leverage the instruction-following and textual understanding capabilities of recent Large Language Models to facilitate causal explainability.
We propose a three-step pipeline in which we use an off-the-shelf LLM to identify latent or unobserved features in the input text.
We experiment with our pipeline on multiple NLP text classification datasets, and present interesting and promising findings.
arXiv Detail & Related papers (2023-09-23T11:22:28Z)
- Improving Automatic Quotation Attribution in Literary Novels [21.164701493247794]
Current models for quotation attribution in literary novels assume varying levels of available information in their training and test data.
We benchmark state-of-the-art models on each of these sub-tasks independently, using a large dataset of annotated coreferences and quotations in literary novels.
We also train and evaluate models for the speaker attribution task in particular, showing that a simple sequential prediction model achieves accuracy scores on par with state-of-the-art models.
arXiv Detail & Related papers (2023-07-07T17:37:01Z)
- Dual-Alignment Pre-training for Cross-lingual Sentence Embedding [79.98111074307657]
We propose a dual-alignment pre-training (DAP) framework for cross-lingual sentence embedding.
We introduce a novel representation translation learning (RTL) task, where the model learns to use one-side contextualized token representation to reconstruct its translation counterpart.
Our approach can significantly improve sentence embedding quality.
arXiv Detail & Related papers (2023-05-16T03:53:30Z)
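A minimal sketch of the RTL objective as summarized above: a small decoder must reconstruct the target-side tokens from source-side contextual representations alone. The sizes and the decoder head are illustrative assumptions, not the paper's exact architecture:

```python
# Sketch of representation translation learning (RTL): reconstruct the
# translation counterpart from one-side token representations only.
import torch
import torch.nn as nn

hidden, vocab = 768, 1000
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True),
    num_layers=2,
)
to_vocab = nn.Linear(hidden, vocab)
tgt_embed = nn.Embedding(vocab, hidden)

src_repr = torch.randn(2, 12, hidden)       # encoder states of source sentences
tgt_ids = torch.randint(0, vocab, (2, 10))  # token ids of their translations

# The decoder sees only source-side representations as memory and must
# reconstruct the target tokens, aligning the two representation spaces.
logits = to_vocab(decoder(tgt=tgt_embed(tgt_ids), memory=src_repr))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab), tgt_ids.reshape(-1))
print(loss.item())
```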
- Annotation Curricula to Implicitly Train Non-Expert Annotators [56.67768938052715]
Voluntary studies often require annotators to familiarize themselves with the task, its annotation scheme, and the data domain.
This can be overwhelming in the beginning, mentally taxing, and induce errors into the resulting annotations.
We propose annotation curricula, a novel approach to implicitly train annotators.
arXiv Detail & Related papers (2021-06-04T09:48:28Z)
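A minimal sketch of the curriculum idea above: present instances from easy to hard so annotators internalize the scheme gradually. Sentence length is a stand-in difficulty heuristic; the paper's actual ordering criteria may differ:

```python
# Order annotation instances easy-to-hard; length is an assumed proxy
# for difficulty, standing in for the paper's ordering strategies.
instances = [
    "He said that she thought the plan might fail, though no one agreed.",
    "She left.",
    "The dog barked at the mail carrier.",
]
curriculum = sorted(instances, key=len)  # shortest (easiest) first
for text in curriculum:
    print(text)
```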
- A Simple Joint Model for Improved Contextual Neural Lemmatization [60.802451210656805]
We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages.
Our paper describes the model in addition to training and decoding procedures.
arXiv Detail & Related papers (2019-04-04T02:03:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.