TERMinator: A system for scientific texts processing
- URL: http://arxiv.org/abs/2209.14854v1
- Date: Thu, 29 Sep 2022 15:14:42 GMT
- Title: TERMinator: A system for scientific texts processing
- Authors: Elena Bruches, Olga Tikhobaeva, Yana Dementyeva, Tatiana Batura
- Abstract summary: This paper is devoted to the extraction of entities and semantic relations between them from scientific texts.
We present a dataset that includes annotations for two tasks and develop a system called TERMinator for the study of the influence of language models on term recognition.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper is devoted to the extraction of entities and semantic relations
between them from scientific texts, where we consider scientific terms as
entities. In this paper, we present a dataset that includes annotations for two
tasks and develop a system called TERMinator for the study of the influence of
language models on term recognition and comparison of different approaches for
relation extraction. Experiments show that language models pre-trained on the
target language do not always yield the best performance. Moreover, adding
heuristic approaches may improve the overall quality of a particular task.
The developed tool and the annotated corpus are publicly available at
https://github.com/iis-research-team/terminator and may be useful for other
researchers.
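The abstract notes that heuristic post-processing can improve term recognition quality. A minimal sketch of what such a post-filter might look like (hypothetical rules for illustration, not the actual TERMinator heuristics):

```python
# Hypothetical heuristic post-filter for term candidates produced by a
# token-classification model; illustrative rules, not TERMinator's own.

STOPWORDS = {"the", "a", "an", "of", "in", "this"}

def filter_terms(candidates):
    """Drop candidates that are stopwords, too short, or purely numeric."""
    kept = []
    for term in candidates:
        t = term.strip().lower()
        if not t or t in STOPWORDS:
            continue
        if len(t) < 3 or t.isdigit():
            continue
        kept.append(term.strip())
    return kept

print(filter_terms(["the", "neural network", "42", "BERT", "of"]))
# prints ['neural network', 'BERT']
```

In practice such rules would be tuned to the target domain and applied after the model's predictions, which is where the abstract suggests the quality gain comes from.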
Related papers
- Persian Homograph Disambiguation: Leveraging ParsBERT for Enhanced Sentence Understanding with a Novel Word Disambiguation Dataset [0.0]
We introduce a novel dataset tailored for Persian homograph disambiguation.
Our work encompasses a thorough exploration of various embeddings, evaluated through the cosine similarity method.
We scrutinize the models' performance in terms of Accuracy, Recall, and F1 Score.
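The cosine-similarity evaluation of embeddings mentioned above can be sketched as follows (a minimal pure-Python illustration, not the paper's implementation):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors:
    dot(u, v) / (||u|| * ||v||)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```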
arXiv Detail & Related papers (2024-05-24T14:56:36Z)
- Human Inspired Progressive Alignment and Comparative Learning for Grounded Word Acquisition [6.47452771256903]
We take inspiration from how human babies acquire their first language, and develop a computational process for word acquisition through comparative learning.
Motivated by cognitive findings, we generated a small dataset that enables the computation models to compare the similarities and differences of various attributes.
We frame the acquisition of words not only as an information filtration process, but also as representation-symbol mapping.
arXiv Detail & Related papers (2023-07-05T19:38:04Z)
- Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics Interface of LMs Through Agentivity [68.8204255655161]
We present the semantic notion of agentivity as a case study for probing such interactions.
This suggests LMs may potentially serve as more useful tools for linguistic annotation, theory testing, and discovery.
arXiv Detail & Related papers (2023-05-29T16:24:01Z)
- Topics in the Haystack: Extracting and Evaluating Topics beyond Coherence [0.0]
We propose a method that incorporates a deeper understanding of both sentence and document themes.
This allows our model to detect latent topics that may include uncommon words or neologisms.
We present correlation coefficients with human identification of intruder words and achieve near-human level results at the word-intrusion task.
arXiv Detail & Related papers (2023-03-30T12:24:25Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- CitationIE: Leveraging the Citation Graph for Scientific Information Extraction [89.33938657493765]
We use the citation graph of referential links between citing and cited papers.
We observe a sizable improvement in end-to-end information extraction over the state-of-the-art.
arXiv Detail & Related papers (2021-06-03T03:00:12Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Entity Recognition and Relation Extraction from Scientific and Technical Texts in Russian [0.0]
This paper is devoted to the study of methods for information extraction from scientific texts on information technology.
Several modifications of methods for the Russian language are proposed.
It also includes the results of experiments comparing a keyword extraction method, a vocabulary method, and several methods based on neural networks.
arXiv Detail & Related papers (2020-11-19T13:40:03Z)
- GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction [107.8262586956778]
We introduce graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations.
GCNs struggle to model words with long-range dependencies or words that are not directly connected in the dependency tree.
We propose to utilize the self-attention mechanism to learn the dependencies between words with different syntactic distances.
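The self-attention idea described above lets every word attend to every other word regardless of syntactic distance. A toy sketch of scaled dot-product self-attention (an illustration of the general mechanism, not the GATE model):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Each word vector becomes a softmax-weighted mix of all word vectors,
    so distant words interact directly, unlike in a GCN over a parse tree."""
    n, d = len(X), len(X[0])
    scale = math.sqrt(d)
    out = []
    for i in range(n):
        scores = [sum(a * b for a, b in zip(X[i], X[j])) / scale
                  for j in range(n)]
        weights = softmax(scores)
        out.append([sum(w * X[j][k] for j, w in enumerate(weights))
                    for k in range(d)])
    return out

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy word vectors
Y = self_attention(X)
print(len(Y), len(Y[0]))  # 3 2: same shape as the input
```

Real implementations use separate learned query, key, and value projections; this sketch keeps only the core attention computation.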
arXiv Detail & Related papers (2020-10-06T20:30:35Z)
- Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z)
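The information-theoretic probing entry above frames probing as estimating mutual information between representations and linguistic labels. A minimal plug-in estimator for discrete variables (an illustrative sketch, not the paper's estimator):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # log2( p(x,y) / (p(x) * p(y)) )
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# A representation that perfectly predicts the label attains I = H(Y):
labels = [0, 1, 0, 1]
print(mutual_information(labels, labels))              # 1.0
# An independent representation carries no information:
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```

Probing in this framing asks how much of that mutual information a simple classifier can extract from the representations.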
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.