Domain-Transferable Method for Named Entity Recognition Task
- URL: http://arxiv.org/abs/2011.12170v1
- Date: Tue, 24 Nov 2020 15:45:52 GMT
- Title: Domain-Transferable Method for Named Entity Recognition Task
- Authors: Vladislav Mikhailov and Tatiana Shavrina
- Abstract summary: This paper describes a method to learn a domain-specific NER model for an arbitrary set of named entities.
We assume that the supervision can be obtained with no human effort, and neural models can learn from each other.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Named Entity Recognition (NER) is a fundamental task in the fields of natural
language processing and information extraction. NER has been widely used as a
standalone tool or an essential component in a variety of applications such as
question answering, dialogue assistants and knowledge graph development.
However, training reliable NER models requires a large amount of labelled data
which is expensive to obtain, particularly in specialized domains. This paper
describes a method to learn a domain-specific NER model for an arbitrary set of
named entities when domain-specific supervision is not available. We assume
that the supervision can be obtained with no human effort, and neural models
can learn from each other. The code, data and models are publicly available.
Related papers
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study multi-source domain generalization for text classification.
We propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z) - Coarse-to-fine Knowledge Graph Domain Adaptation based on Distantly-supervised Iterative Training [12.62127290494378]
We propose an integrated framework for adapting and re-learning knowledge graphs.
No manual data annotation is required to train the model.
We introduce a novel iterative training strategy to facilitate the discovery of domain-specific named entities and triples.
arXiv Detail & Related papers (2022-11-05T08:16:38Z) - Simple Questions Generate Named Entity Recognition Datasets [18.743889213075274]
This work introduces an ask-to-generate approach, which automatically generates NER datasets by asking simple natural language questions.
Our models largely outperform previous weakly supervised models on six NER benchmarks across four different domains.
Formulating the needs of NER with natural language also allows us to build NER models for fine-grained entity types such as Award.
arXiv Detail & Related papers (2021-12-16T11:44:38Z) - Knowledge Graph Anchored Information-Extraction for Domain-Specific Insights [1.6308268213252761]
We use a task-based approach for fulfilling specific information needs within a new domain.
A pipeline built from state-of-the-art NLP technologies is used to automatically extract an instance-level semantic structure.
arXiv Detail & Related papers (2021-04-18T19:28:10Z) - Streaming Self-Training via Domain-Agnostic Unlabeled Images [62.57647373581592]
We present streaming self-training (SST) that aims to democratize the process of learning visual recognition models.
Key to SST are two crucial observations: (1) domain-agnostic unlabeled images enable us to learn better models with a few labeled examples without any additional knowledge or supervision; and (2) learning is a continuous process and can be done by constructing a schedule of learning updates.
arXiv Detail & Related papers (2021-04-07T17:58:39Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog [70.79442700890843]
We propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average.
arXiv Detail & Related papers (2020-04-23T08:17:22Z) - Unsupervised Domain Clusters in Pretrained Language Models [61.832234606157286]
We show that massive pre-trained language models implicitly learn sentence representations that cluster by domains without supervision.
We propose domain data selection methods based on such models.
We evaluate our data selection methods for neural machine translation across five diverse domains.
arXiv Detail & Related papers (2020-04-05T06:22:16Z) - Zero-Resource Cross-Domain Named Entity Recognition [68.83177074227598]
Existing models for cross-domain named entity recognition rely on large unlabeled corpora or labeled NER training data in the target domain.
We propose a cross-domain NER model that does not use any external resources.
arXiv Detail & Related papers (2020-02-14T09:04:18Z)
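Several entries above, notably the unsupervised domain clusters paper, select in-domain training data by similarity in an embedding space. The following is a minimal sketch of similarity-based data selection; the bag-of-words vectors are a stand-in for pretrained-LM sentence embeddings, and the example sentences and nearest-centroid heuristic are illustrative assumptions, not any paper's exact method.

```python
import math
from collections import Counter

def embed(sentence):
    """Toy bag-of-words vector. In the papers above this would be a
    sentence representation from a massive pretrained language model."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_in_domain(seed, pool, k):
    """Rank pool sentences by similarity to the target-domain centroid
    and keep the top k as additional in-domain training data."""
    centroid = Counter()
    for s in seed:
        centroid.update(embed(s))
    ranked = sorted(pool, key=lambda s: cosine(embed(s), centroid),
                    reverse=True)
    return ranked[:k]

seed = ["the patient received a dose of aspirin",
        "clinical trial of the drug"]
pool = ["the drug dose was reduced for the patient",
        "the stock market closed higher today",
        "a new trial phase for the aspirin drug began"]
print(select_in_domain(seed, pool, 2))
```

With real pretrained-LM embeddings, the same ranking step becomes a data selection method: sentences whose representations fall near the target-domain cluster are kept, and out-of-domain sentences (here, the stock-market one) are discarded.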
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.