Neural Entity Linking on Technical Service Tickets
- URL: http://arxiv.org/abs/2005.07604v2
- Date: Tue, 19 May 2020 14:11:16 GMT
- Title: Neural Entity Linking on Technical Service Tickets
- Authors: Nadja Kurz, Felix Hamann, Adrian Ulges
- Abstract summary: We show that a neural approach outperforms and complements hand-coded heuristics, with improvements of about 20% top-1 accuracy.
We also show that a simple sentence-wise encoding (Bi-Encoder) offers a fast yet efficient search in practice.
- Score: 1.3621712165154805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity linking, the task of mapping textual mentions to known entities, has
recently been tackled using contextualized neural networks. We address the
question whether these results -- reported for large, high-quality datasets
such as Wikipedia -- transfer to practical business use cases, where labels are
scarce, text is low-quality, and terminology is highly domain-specific. Using
an entity linking model based on BERT, a popular transformer network in natural
language processing, we show that a neural approach outperforms and complements
hand-coded heuristics, with improvements of about 20% top-1 accuracy. Also, the
benefits of transfer learning on a large corpus are demonstrated, while
fine-tuning proves difficult. Finally, we compare different BERT-based
architectures and show that a simple sentence-wise encoding (Bi-Encoder) offers
a fast yet efficient search in practice.
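To make the Bi-Encoder idea concrete, here is a minimal sketch of sentence-wise entity linking: mention sentences and entity descriptions are embedded independently, so the entity side can be pre-computed once and searched quickly. It assumes Hugging Face `transformers` with `bert-base-uncased`, mean pooling, and a toy entity catalog; none of these details are taken from the paper.

```python
# Minimal Bi-Encoder sketch: embed mention sentences and entity
# descriptions independently, then link by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed model
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pool the last hidden layer into one L2-normalized vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    vecs = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    return torch.nn.functional.normalize(vecs, dim=-1)

# Hypothetical entity catalog: (id, description), encoded once offline.
entities = [
    ("E1", "printer spooler service that queues print jobs"),
    ("E2", "VPN client used for remote network access"),
    ("E3", "enterprise email server handling user mailboxes"),
]
entity_vecs = embed([desc for _, desc in entities])

# Online: embed the ticket sentence and retrieve the nearest entity.
ticket = "User cannot connect to the corporate network from home office."
scores = entity_vecs @ embed([ticket]).T                  # cosine similarity
print("top-1 entity:", entities[scores.squeeze(1).argmax().item()][0])
```

Because the entity vectors do not depend on the query, they can be indexed once with any nearest-neighbor library, which is what makes the Bi-Encoder fast at search time.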
Related papers
- Text-Video Retrieval with Global-Local Semantic Consistent Learning [122.15339128463715]
We propose a simple yet effective method, Global-Local Semantic Consistent Learning (GLSCL).
GLSCL capitalizes on latent shared semantics across modalities for text-video retrieval.
Our method achieves performance comparable to SOTA while being nearly 220 times faster in computational cost.
arXiv Detail & Related papers (2024-05-21T11:59:36Z)
- Sentiment analysis in Tourism: Fine-tuning BERT or sentence embeddings concatenation? [0.0]
We conduct a comparative study between fine-tuning BERT (Bidirectional Encoder Representations from Transformers) and concatenating two sentence embeddings to boost the performance of a stacked Bidirectional Long Short-Term Memory-Bidirectional Gated Recurrent Units model (sketched after this entry).
We search for the best learning rate for both approaches and compare the best embeddings for each sentence-embedding combination.
arXiv Detail & Related papers (2023-12-12T23:23:23Z)
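As a rough illustration of the concatenation approach above, the sketch below concatenates two sentence embeddings per sentence (e.g., from two different encoders) and feeds the sequence to a stacked BiLSTM and BiGRU classifier. The dimensions, layer sizes, and the way embeddings are sequenced are assumptions for illustration, not the study's configuration.

```python
import torch
import torch.nn as nn

class ConcatBiLstmBiGru(nn.Module):
    """Stacked BiLSTM -> BiGRU over concatenated sentence embeddings."""
    def __init__(self, dim_a=384, dim_b=768, hidden=128, num_classes=3):
        super().__init__()
        self.bilstm = nn.LSTM(dim_a + dim_b, hidden,
                              batch_first=True, bidirectional=True)
        self.bigru = nn.GRU(2 * hidden, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, emb_a, emb_b):
        # emb_a: (B, S, dim_a), emb_b: (B, S, dim_b) -- one row per sentence
        # of a review; the two embeddings are concatenated feature-wise.
        x = torch.cat([emb_a, emb_b], dim=-1)
        x, _ = self.bilstm(x)
        x, _ = self.bigru(x)
        return self.head(x[:, -1])            # classify from the last step

model = ConcatBiLstmBiGru()
logits = model(torch.randn(4, 6, 384), torch.randn(4, 6, 768))
print(logits.shape)  # torch.Size([4, 3])
```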
- Hierarchical Transformer Model for Scientific Named Entity Recognition [0.20646127669654832]
We present a simple and effective approach for Named Entity Recognition.
The main idea of our approach is to encode the input subword sequence with a pre-trained transformer such as BERT.
We evaluate our approach on three benchmark datasets for scientific NER.
arXiv Detail & Related papers (2022-03-28T12:59:06Z)
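The encoding step this summary describes, running the subword sequence through a pre-trained transformer and tagging each subword, can be sketched as follows; the model name and five-tag label set are placeholders, not the paper's configuration.

```python
# Token classification with a pre-trained transformer: each subword
# receives a logit vector over NER tags.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=5)  # e.g. O, B-Task, I-Task, B-Method, I-Method

batch = tokenizer(["We fine-tune BERT for scientific NER."],
                  return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits          # (1, num_subwords, 5)
pred_tags = logits.argmax(dim=-1)           # one tag id per subword
print(pred_tags)
```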
- Hierarchical Neural Network Approaches for Long Document Classification [3.6700088931938835]
We employ pre-trained Universal Sentence Encoder (USE) and Bidirectional Encoder Representations from Transformers (BERT) in a hierarchical setup to capture better representations efficiently.
Our proposed models are conceptually simple: we divide the input data into chunks and pass them through the base BERT and USE models (see the sketch after this entry).
We show that USE + CNN/LSTM performs better than its stand-alone baseline, whereas BERT + CNN/LSTM performs on par with its stand-alone counterpart.
arXiv Detail & Related papers (2022-01-18T07:17:40Z)
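A minimal version of the chunking idea above: split the document, embed each chunk with BERT, and run a recurrent model over the chunk embeddings. Chunk size, hidden sizes, and the binary head are assumptions for illustration.

```python
# Chunk a long document, embed each chunk with BERT, then run an LSTM
# over the sequence of chunk embeddings.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
lstm = nn.LSTM(768, 128, batch_first=True, bidirectional=True)
head = nn.Linear(256, 2)  # hypothetical binary document classifier

def classify(document, chunk_words=100):
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    batch = tokenizer(chunks, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        cls = bert(**batch).last_hidden_state[:, 0]   # (num_chunks, 768)
    seq_out, _ = lstm(cls.unsqueeze(0))               # (1, num_chunks, 256)
    return head(seq_out[:, -1])                       # (1, 2) logits

print(classify("word " * 450).shape)
```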
- KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction [111.74812895391672]
We propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).
We inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words.
arXiv Detail & Related papers (2021-04-15T17:57:43Z)
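The idea of learnable virtual words seeded with label knowledge can be sketched as follows: new tokens are added to the vocabulary and initialized from the embeddings of the words in each relation label. Token names, labels, and the prompt are illustrative assumptions, not KnowPrompt's exact mechanism.

```python
# Add "virtual" relation tokens and initialize their embeddings from
# the words of each relation label.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

relation_labels = {"[V_FOUNDER]": "founder of organization",
                   "[V_BIRTH]": "place of birth"}
tokenizer.add_tokens(list(relation_labels))
model.resize_token_embeddings(len(tokenizer))

emb = model.get_input_embeddings()
with torch.no_grad():
    for virtual, label in relation_labels.items():
        ids = tokenizer(label, add_special_tokens=False)["input_ids"]
        vid = tokenizer.convert_tokens_to_ids(virtual)
        # Initialize the virtual answer word as the mean of its label words.
        emb.weight[vid] = emb.weight[ids].mean(dim=0)

# A prompt would then place virtual words at the mask position:
prompt = "Steve Jobs [MASK] Apple."  # [MASK] is scored against virtual words
```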
- Dependency Parsing based Semantic Representation Learning with Graph Neural Network for Enhancing Expressiveness of Text-to-Speech [49.05471750563229]
We propose a semantic representation learning method based on a graph neural network that considers the dependency relations of a sentence.
We show that our proposed method outperforms the baseline using vanilla BERT features on both the LJSpeech and Blizzard Challenge 2013 datasets.
arXiv Detail & Related papers (2021-04-14T13:09:51Z)
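One way to realize "considering dependency relations" is a graph-convolution step in which each word aggregates the features of its parse neighbours. The sketch below uses a toy parse and plain matrix operations; the paper's actual GNN may differ.

```python
# One graph-convolution step over a sentence's dependency arcs: each
# word aggregates the features of its syntactic neighbours.
import torch
import torch.nn as nn

n, dim = 5, 768                      # 5 words, BERT-sized features
feats = torch.randn(n, dim)          # per-word features (e.g. from BERT)

# Dependency arcs as (head, dependent) pairs for a hypothetical parse.
arcs = [(1, 0), (1, 2), (1, 4), (4, 3)]
adj = torch.eye(n)                   # self-loops
for h, d in arcs:
    adj[h, d] = adj[d, h] = 1.0      # treat arcs as undirected

deg = adj.sum(dim=1, keepdim=True)
layer = nn.Linear(dim, dim)
hidden = torch.relu(layer(adj @ feats / deg))  # normalized neighbourhood mean
print(hidden.shape)                  # torch.Size([5, 768])
```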
- KI-BERT: Infusing Knowledge Context for Better Language and Domain Understanding [0.0]
We propose a technique to infuse knowledge context from knowledge graphs into transformer-based models for conceptual and ambiguous entities.
Our technique projects knowledge graph embeddings into a homogeneous vector space, introduces new token types for entities, aligns entity position ids, and adds a selective attention mechanism.
We take BERT as a baseline model and implement "Knowledge-Infused BERT" (KI-BERT) by infusing knowledge context from ConceptNet and WordNet.
arXiv Detail & Related papers (2021-04-09T16:15:31Z)
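A schematic of the infusion step just described: entity vectors from a knowledge graph are projected into the transformer's embedding space, appended to the token sequence, and marked with a new token-type id. All dimensions and vectors below are illustrative placeholders.

```python
# Project KG entity vectors into the transformer's embedding space and
# append them to the token sequence with a distinct token type.
import torch
import torch.nn as nn

kg_dim, bert_dim = 200, 768
project = nn.Linear(kg_dim, bert_dim)        # homogeneous vector space

token_embs = torch.randn(1, 10, bert_dim)    # 10 subword embeddings
entity_vecs = torch.randn(1, 2, kg_dim)      # 2 linked KG entities

entity_embs = project(entity_vecs)           # (1, 2, 768)
sequence = torch.cat([token_embs, entity_embs], dim=1)  # (1, 12, 768)

# New token-type ids distinguish text (0) from knowledge tokens (1),
# mirroring the "new token types for entities" idea.
token_types = torch.cat([torch.zeros(1, 10, dtype=torch.long),
                         torch.ones(1, 2, dtype=torch.long)], dim=1)
print(sequence.shape, token_types.shape)
```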
- A Novel Deep Learning Method for Textual Sentiment Analysis [3.0711362702464675]
This paper proposes a convolutional neural network integrated with a hierarchical attention layer to extract informative words.
The proposed model has higher classification accuracy and can extract informative words.
Applying incremental transfer learning can significantly enhance the classification performance.
arXiv Detail & Related papers (2021-02-23T12:11:36Z)
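The combination of a convolutional encoder with an attention layer over word positions might look like the sketch below; the attention weights are what make the "informative words" inspectable. Sizes and the pooling scheme are assumptions.

```python
# CNN over word embeddings with an attention layer that weights
# informative word positions.
import torch
import torch.nn as nn

class CnnAttentionClassifier(nn.Module):
    def __init__(self, emb_dim=100, channels=64, num_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=3, padding=1)
        self.attn = nn.Linear(channels, 1)   # scores each word position
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):                    # x: (B, T, emb_dim)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (B, T)
        pooled = (weights.unsqueeze(-1) * h).sum(dim=1)           # (B, C)
        return self.head(pooled), weights    # weights expose informative words

model = CnnAttentionClassifier()
logits, weights = model(torch.randn(8, 20, 100))
print(logits.shape, weights.shape)
```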
- R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic Matching [58.72111690643359]
We propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task.
arXiv Detail & Related papers (2020-12-16T13:11:30Z)
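The global/local split described above can be sketched as follows: the [CLS] vector provides the global view of the sentence pair, while a CNN over the token states captures local keyword and phrase features. The fusion and sizes are illustrative, and the self-supervised relation-of-relation task is omitted.

```python
# Global BERT view + local CNN view of a sentence pair, fused for a
# matching decision.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
local_cnn = nn.Conv1d(768, 128, kernel_size=3, padding=1)
head = nn.Linear(768 + 128, 2)  # e.g. match / no-match

batch = tokenizer("A man is cooking.", "Someone prepares food.",
                  return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state       # (1, T, 768)
global_vec = hidden[:, 0]                          # [CLS], global view
local = torch.relu(local_cnn(hidden.transpose(1, 2)))
local_vec = local.max(dim=-1).values               # keyword/phrase features
print(head(torch.cat([global_vec, local_vec], dim=-1)).shape)
```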
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on the canonical task of text classification.
Despite this success, their performance can be largely jeopardized in practice because they cannot capture high-order interactions between words.
We propose a principled model, hypergraph attention networks (HyperGAT), which obtains more expressive power with less computational consumption for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
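A minimal hypergraph attention step, assuming words as nodes and sentences as hyperedges: features flow node to hyperedge to node, with attention at both hops. The incidence matrix, sizes, and shared scoring layer are simplifying assumptions, not HyperGAT's exact formulation.

```python
# Hypergraph attention sketch: node -> hyperedge -> node aggregation.
import torch
import torch.nn as nn

n_nodes, n_edges, dim = 6, 2, 32
x = torch.randn(n_nodes, dim)        # word features

# Incidence matrix: H[i, e] = 1 if word i occurs in sentence (hyperedge) e.
H = torch.tensor([[1, 0], [1, 0], [1, 1],
                  [0, 1], [0, 1], [1, 0]], dtype=torch.float)
score = nn.Linear(dim, 1)            # shared attention scorer (simplification)

# Hop 1: each hyperedge attends over its member nodes.
a1 = score(x).expand(-1, n_edges).masked_fill(H == 0, -1e9)
w1 = torch.softmax(a1, dim=0)                    # normalize per edge
edge_feats = w1.T @ x                            # (n_edges, dim)

# Hop 2: each node attends over the hyperedges containing it.
a2 = score(edge_feats).squeeze(-1).expand(n_nodes, -1).masked_fill(H == 0, -1e9)
w2 = torch.softmax(a2, dim=1)                    # normalize per node
x_new = w2 @ edge_feats                          # updated word features
print(x_new.shape)                               # torch.Size([6, 32])
```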
- Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction [80.38130122127882]
We introduce 14 probing tasks targeting linguistic properties relevant to neural relation extraction (RE).
We use them to study representations learned by more than 40 different combinations of encoder architectures and linguistic features, trained on two datasets.
We find that the biases induced by the architecture and by the inclusion of linguistic features are clearly expressed in probing task performance.
arXiv Detail & Related papers (2020-04-17T09:17:40Z)
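Probing in this style typically means freezing the encoder and training a lightweight classifier on its representations to predict one linguistic property. Below is a generic sketch with scikit-learn; the random features stand in for real encoder outputs and the binary property is hypothetical.

```python
# Generic probing setup: a simple classifier over frozen representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
reps = rng.normal(size=(200, 768))      # frozen sentence representations
labels = rng.integers(0, 2, size=200)   # hypothetical linguistic property

x_tr, x_te, y_tr, y_te = train_test_split(reps, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
# High probe accuracy suggests the property is encoded in the representation.
print("probe accuracy:", probe.score(x_te, y_te))
```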