EASE: An Easily-Customized Annotation System Powered by Efficiency Enhancement Mechanisms
- URL: http://arxiv.org/abs/2305.14169v1
- Date: Tue, 23 May 2023 15:38:37 GMT
- Title: EASE: An Easily-Customized Annotation System Powered by Efficiency Enhancement Mechanisms
- Authors: Naihao Deng, Yikai Liu, Mingye Chen, Winston Wu, Siyang Liu, Yulong Chen, Yue Zhang, Rada Mihalcea
- Abstract summary: EASE is an easily-customized annotation system powered by efficiency enhancement mechanisms.
EASE provides modular annotation units for building customized interfaces.
Our results show that our system can meet the diverse needs of NLP researchers.
- Score: 30.01064463095968
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The performance of current supervised AI systems is tightly connected to the
availability of annotated datasets. Annotations are usually collected through
annotation tools, which are often designed for specific tasks and are difficult
to customize. Moreover, existing annotation tools with an active learning
mechanism often only support limited use cases. To address these limitations,
we present EASE, an Easily-Customized Annotation System Powered by Efficiency
Enhancement Mechanisms. EASE provides modular annotation units for building
customized annotation interfaces and also provides multiple back-end options
that suggest annotations using (1) multi-task active learning; (2) demographic
feature based active learning; (3) a prompt system that can query the API of
large language models. We conduct multiple experiments and user studies to
evaluate our system's flexibility and effectiveness. Our results show that our
system can meet the diverse needs of NLP researchers and significantly
accelerate the annotation process.
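The active-learning back-ends described above rest on a common idea: surface the unlabeled examples the model is least certain about to annotators first. A minimal sketch of entropy-based uncertainty sampling is shown below; the function names and the plain-list input format are illustrative assumptions, not EASE's actual API, and the real system additionally combines multi-task and demographic-feature signals.

```python
import math

def entropy(probs):
    """Predictive entropy of one example's class probabilities."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def suggest_for_annotation(prob_batch, k=2):
    """Rank unlabeled examples by model uncertainty and return the
    indices of the k most uncertain ones, i.e. those an active-learning
    back-end would suggest to annotators first."""
    scored = sorted(enumerate(prob_batch),
                    key=lambda pair: entropy(pair[1]),
                    reverse=True)
    return [idx for idx, _ in scored[:k]]

# Example: four unlabeled examples with model class probabilities.
batch = [
    [0.98, 0.01, 0.01],  # confident prediction -> low priority
    [0.34, 0.33, 0.33],  # near-uniform -> highest priority
    [0.70, 0.20, 0.10],
    [0.50, 0.50, 0.00],
]
print(suggest_for_annotation(batch, k=2))  # -> [1, 2]
```

The same ranking step applies regardless of where the probabilities come from, which is what lets such a back-end be swapped between a locally trained model and suggestions queried from a large language model API.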
Related papers
- DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning [75.68193159293425]
In-context learning (ICL) allows transformer-based language models to learn a specific task with a few "task demonstrations" without updating their parameters.
We propose an influence function-based attribution technique, DETAIL, that addresses the specific characteristics of ICL.
We experimentally prove the wide applicability of DETAIL by showing our attribution scores obtained on white-box models are transferable to black-box models in improving model performance.
arXiv Detail & Related papers (2024-05-22T15:52:52Z)
- Third-Party Language Model Performance Prediction from Instruction [59.574169249307054]
Language model-based instruction-following systems have recently shown improved performance on many benchmark tasks.
A user may easily prompt a model with an instruction without knowing whether the responses can be expected to be accurate.
We propose a third party performance prediction framework, where a separate model is trained to predict the metric resulting from evaluating an instruction-following system on a task.
arXiv Detail & Related papers (2024-03-19T03:53:47Z)
- Large Language User Interfaces: Voice Interactive User Interfaces powered by LLMs [5.06113628525842]
We present a framework that can serve as an intermediary between a user and their user interface (UI).
We employ a system that stands upon textual semantic mappings of UI components, in the form of annotations.
Our engine can classify the most appropriate application, extract relevant parameters, and subsequently execute precise predictions of the user's expected actions.
arXiv Detail & Related papers (2024-02-07T21:08:49Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is divided into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- The Weak Supervision Landscape [5.186945902380689]
We propose a framework for categorising weak supervision settings.
We identify the key elements that characterise weak supervision and devise a series of dimensions that categorise most of the existing approaches.
We show how common settings in the literature fit within the framework and discuss its possible uses in practice.
arXiv Detail & Related papers (2022-03-30T13:19:43Z)
- Modular approach to data preprocessing in ALOHA and application to a smart industry use case [0.0]
The paper addresses a modular approach, integrated into the ALOHA tool flow, to support the data preprocessing and transformation pipeline.
To demonstrate the effectiveness of the approach, we present some experimental results related to a keyword spotting use case.
arXiv Detail & Related papers (2021-02-02T06:48:51Z)
- HUMAN: Hierarchical Universal Modular ANnotator [14.671297336775387]
We introduce a novel web-based annotation tool that addresses the above problems by a) covering a variety of annotation tasks on both textual and image data, and b) the usage of an internal deterministic state machine.
HUMAN comes with an easy-to-use graphical user interface that simplifies the annotation task and its management.
arXiv Detail & Related papers (2020-10-02T16:20:30Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
- A Unified Object Motion and Affinity Model for Online Multi-Object Tracking [127.5229859255719]
We propose a novel MOT framework that unifies object motion and affinity model into a single network, named UMA.
UMA integrates single object tracking and metric learning into a unified triplet network by means of multi-task learning.
We equip our model with a task-specific attention module, which is used to boost task-aware feature learning.
arXiv Detail & Related papers (2020-03-25T09:36:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.