HUMAN: Hierarchical Universal Modular ANnotator
- URL: http://arxiv.org/abs/2010.01080v1
- Date: Fri, 2 Oct 2020 16:20:30 GMT
- Title: HUMAN: Hierarchical Universal Modular ANnotator
- Authors: Moritz Wolf, Dana Ruiter, Ashwin Geet D'Sa, Liane Reiners, Jan
Alexandersson, Dietrich Klakow
- Abstract summary: We introduce a novel web-based annotation tool that addresses the above problems by a) covering a variety of annotation tasks on both textual and image data, and b) the usage of an internal deterministic state machine.
HUMAN comes with an easy-to-use graphical user interface that simplifies the annotation task and management.
- Score: 14.671297336775387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A lot of real-world phenomena are complex and cannot be captured by single
task annotations. This causes a need for subsequent annotations, with
interdependent questions and answers describing the nature of the subject at
hand. Even when a phenomenon is easily captured by a single task, the high
specialisation of most annotation tools can force a switch to another tool if
the task changes only slightly.
We introduce HUMAN, a novel web-based annotation tool that addresses the
above problems by a) covering a variety of annotation tasks on both textual and
image data, and b) the usage of an internal deterministic state machine,
allowing the researcher to chain different annotation tasks in an
interdependent manner. Further, the modular nature of the tool makes it easy to
define new annotation tasks and integrate machine learning algorithms e.g., for
active learning. HUMAN comes with an easy-to-use graphical user interface that
simplifies the annotation task and management.
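The abstract's central design idea, chaining annotation tasks through an internal deterministic state machine so that an annotator's answer determines the next task, can be illustrated with a minimal sketch. This is not HUMAN's actual API: the class name, states, and the example hate-speech transition table are all invented for illustration.

```python
# Illustrative sketch of a deterministic annotation state machine.
# Each (state, answer) pair maps to exactly one next state, so the
# chain of tasks presented to an annotator is fully determined by
# their answers. Names below are hypothetical, not HUMAN's API.

class AnnotationStateMachine:
    def __init__(self, start, transitions):
        # transitions: {(state, answer): next_state}
        self.state = start
        self.transitions = transitions

    def step(self, answer):
        """Advance to the next annotation task based on the annotator's answer."""
        key = (self.state, answer)
        if key not in self.transitions:
            raise ValueError(f"No transition from {self.state!r} on {answer!r}")
        self.state = self.transitions[key]
        return self.state

# Hypothetical two-step chain: first decide whether a post is offensive;
# only if it is, ask a follow-up question about the target.
transitions = {
    ("is_offensive?", "yes"): "who_is_targeted?",
    ("is_offensive?", "no"): "done",
    ("who_is_targeted?", "group"): "done",
    ("who_is_targeted?", "individual"): "done",
}

sm = AnnotationStateMachine("is_offensive?", transitions)
print(sm.step("yes"))    # -> who_is_targeted?
print(sm.step("group"))  # -> done
```

Because the transition table is just data, new interdependent task chains can be defined without touching the control logic, which is the modularity the abstract emphasises.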
Related papers
- MetaTool: Facilitating Large Language Models to Master Tools with Meta-task Augmentation [25.360660222418183]
We present MetaTool, a novel tool learning methodology designed to generalize across any reusable toolset.
By incorporating meta-task data into task-oriented training, our method significantly enhances the performance of open-source Large Language Models.
arXiv Detail & Related papers (2024-07-15T10:15:41Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful on classification tasks with little or even non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Antarlekhaka: A Comprehensive Tool for Multi-task Natural Language Annotation [0.0]
Antarlekhaka is a tool for manual annotation of a comprehensive set of tasks relevant to Natural Language Processing.
The tool is Unicode-compatible, language-agnostic, Web-deployable and supports distributed annotation by multiple simultaneous annotators.
It has been used for two real-life annotation tasks on two different languages, namely, Sanskrit and Bengali.
arXiv Detail & Related papers (2023-10-11T19:09:07Z)
- EASE: An Easily-Customized Annotation System Powered by Efficiency Enhancement Mechanisms [30.01064463095968]
EASE is an easily-customized annotation system powered by efficiency enhancement mechanisms.
EASE provides modular annotation units for building customized interfaces.
Our results show that our system can meet the diverse needs of NLP researchers.
arXiv Detail & Related papers (2023-05-23T15:38:37Z)
- Universal Instance Perception as Object Discovery and Retrieval [90.96031157557806]
UNI reformulates diverse instance perception tasks into a unified object discovery and retrieval paradigm.
It can flexibly perceive different types of objects by simply changing the input prompts.
UNI shows superior performance on 20 challenging benchmarks from 10 instance-level tasks.
arXiv Detail & Related papers (2023-03-12T14:28:24Z)
- PartAL: Efficient Partial Active Learning in Multi-Task Visual Settings [57.08386016411536]
We show that it is more effective to select not only the images to be annotated but also a subset of tasks for which to provide annotations at each Active Learning (AL) iteration.
We demonstrate the effectiveness of our approach on several popular multi-task datasets.
arXiv Detail & Related papers (2022-11-21T15:08:35Z)
- Improving Task Generalization via Unified Schema Prompt [87.31158568180514]
Unified Schema Prompt is a flexible prompting method, which automatically customizes the learnable prompts for each task according to the task input schema.
It models the shared knowledge between tasks, while keeping the characteristics of different task schema.
The framework achieves strong zero-shot and few-shot performance on 16 unseen tasks downstream from 8 task types.
arXiv Detail & Related papers (2022-08-05T15:26:36Z)
- Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning [54.66399120084227]
Recent state-of-the-art neural text matching models based on pre-trained language models (PLMs) are hard to generalize to different tasks.
We adopt a specialization-generalization training strategy and refer to it as Match-Prompt.
In the specialization stage, descriptions of different matching tasks are mapped to only a few prompt tokens.
In the generalization stage, the text matching model explores the essential matching signals by being trained on diverse multiple matching tasks.
arXiv Detail & Related papers (2022-04-06T11:01:08Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.