CARE: Collaborative AI-Assisted Reading Environment
- URL: http://arxiv.org/abs/2302.12611v1
- Date: Fri, 24 Feb 2023 12:55:31 GMT
- Title: CARE: Collaborative AI-Assisted Reading Environment
- Authors: Dennis Zyska, Nils Dycke, Jan Buchmann, Ilia Kuznetsov, Iryna Gurevych
- Abstract summary: We present CARE: the first open integrated platform for the study of inline commentary and reading.
CARE facilitates data collection for inline commentaries in a commonplace collaborative reading environment.
CARE provides a framework for enhancing reading with NLP-based assistance, such as text classification, generation or question answering.
- Score: 47.824020656329246
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent years have seen impressive progress in AI-assisted writing, yet developments in AI-assisted reading lag behind. We propose inline commentary as a natural vehicle for AI-based reading assistance, and present CARE: the first open integrated platform for the study of inline commentary and reading. CARE facilitates data collection for inline commentaries in a commonplace collaborative reading environment, and provides a framework for enhancing reading with NLP-based assistance, such as text classification, generation, or question answering. Extensible behavioral logging allows unique insights into reading and commenting behavior, and flexible configuration makes the platform easy to deploy in new scenarios. To evaluate CARE in action, we apply the platform in a user study dedicated to scholarly peer review. CARE facilitates the data collection and study of inline commentary in NLP, extrinsic evaluation of NLP assistance, and application prototyping. We invite the community to explore and build upon the open-source implementation of CARE.
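As an illustration of the kind of NLP-based assistance the abstract mentions, the sketch below labels an inline commentary with a generic Hugging Face text-classification pipeline. This is a minimal sketch only: the InlineComment structure, the model choice, and the classify_comment helper are assumptions for illustration and do not reflect CARE's actual code or API.

```python
# Hypothetical sketch only -- not CARE's actual API. It shows NLP-based
# assistance (text classification) applied to an inline commentary.
# Requires: pip install transformers torch
from dataclasses import dataclass
from transformers import pipeline

@dataclass
class InlineComment:
    document_id: str   # paper or document the comment is anchored to
    anchor_text: str   # highlighted passage in the document
    body: str          # the reader's commentary text

# Generic sentiment model as a stand-in for whatever classifier a deployment plugs in.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def classify_comment(comment: InlineComment) -> dict:
    """Return the top label and score for a commentary body."""
    result = classifier(comment.body, truncation=True)[0]
    return {
        "document_id": comment.document_id,
        "label": result["label"],
        "score": round(result["score"], 3),
    }

if __name__ == "__main__":
    comment = InlineComment(
        document_id="2302.12611",
        anchor_text="the first open integrated platform",
        body="The evaluation section could use a stronger baseline.",
    )
    print(classify_comment(comment))
```

In an actual deployment, the classifier would be swapped for whichever model the study requires (e.g., a comment-type classifier), and predictions would be returned to the reading interface rather than printed.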
Related papers
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- GAIA Search: Hugging Face and Pyserini Interoperability for NLP Training Data Exploration [97.68234051078997]
We discuss how Pyserini can be integrated with the Hugging Face ecosystem of open-source AI libraries and artifacts.
We include a Jupyter Notebook-based walk through the core interoperability features, available on GitHub.
We present GAIA Search - a search engine built following previously laid out principles, giving access to four popular large-scale text collections.
arXiv Detail & Related papers (2023-06-02T12:09:59Z)
- NLPeer: A Unified Resource for the Computational Study of Peer Review [58.71736531356398]
We introduce NLPeer -- the first ethically sourced multidomain corpus of more than 5k papers and 11k review reports from five different venues.
We augment previous peer review datasets to include parsed and structured paper representations, rich metadata and versioning information.
Our work paves the path towards systematic, multi-faceted, evidence-based study of peer review in NLP and beyond.
arXiv Detail & Related papers (2022-11-12T12:29:38Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts, whereas editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- Hone as You Read: A Practical Type of Interactive Summarization [6.662800021628275]
We present HARE, a new task where reader feedback is used to optimize document summaries for personal interest.
This task is related to interactive summarization, where personalized summaries are produced following a long feedback stage.
We propose to gather minimally-invasive feedback during the reading process to adapt to user interests and augment the document in real-time.
arXiv Detail & Related papers (2021-05-06T19:36:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.