Automated Extraction of Acronym-Expansion Pairs from Scientific Papers
- URL: http://arxiv.org/abs/2412.01093v1
- Date: Mon, 02 Dec 2024 04:05:49 GMT
- Title: Automated Extraction of Acronym-Expansion Pairs from Scientific Papers
- Authors: Izhar Ali, Million Haileyesus, Serhiy Hnatyshyn, Jan-Lucas Ott, Vasil Hnatyshin
- Abstract summary: This project addresses challenges posed by the widespread use of abbreviations and acronyms in digital texts.
We propose a novel method that combines document preprocessing, regular expressions, and a large language model to identify abbreviations and map them to their corresponding expansions.
- Abstract: This project addresses challenges posed by the widespread use of abbreviations and acronyms in digital texts. We propose a novel method that combines document preprocessing, regular expressions, and a large language model to identify abbreviations and map them to their corresponding expansions. Regular expressions alone are often insufficient to extract expansions, at which point our approach leverages GPT-4 to analyze the text surrounding the acronyms. By limiting the analysis to only a small portion of the surrounding text, we mitigate the risk of obtaining incorrect or multiple expansions for an acronym. There are several known challenges in processing text with acronyms, including polysemous, non-local, and ambiguous acronyms. Our approach enhances the precision and efficiency of NLP techniques by addressing these issues with automated acronym identification and disambiguation. This study highlights the challenges of working with PDF files and the importance of document preprocessing. Furthermore, the results of this work show that neither regular expressions nor GPT-4 alone performs well. Regular expressions are suitable for identifying acronyms but have limitations in finding their expansions within the paper, owing to the variety of formats used for expressing acronym-expansion pairs and the tendency of authors to omit expansions within the text. GPT-4, on the other hand, is an excellent tool for obtaining expansions but struggles to correctly identify all relevant acronyms. Additionally, GPT-4 poses challenges due to its probabilistic nature, which may lead to slightly different results for the same input. Our algorithm therefore employs preprocessing to eliminate irrelevant information from the text, regular expressions to identify acronyms, and a large language model to find acronym expansions, providing the most accurate and consistent results.
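The pipeline described in the abstract (regex-based acronym identification, regex-based expansion matching, LLM fallback on a limited context window) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the acronym pattern, the window size of 200 characters, and the `llm_expand` hook standing in for a GPT-4 call are all assumptions.

```python
import re

# Candidate acronyms: 2-10 uppercase letters/digits, starting with a letter.
ACRONYM_RE = re.compile(r"\b[A-Z][A-Z0-9]{1,9}\b")

def find_acronyms(text):
    """Identify candidate acronyms with a regular expression."""
    return sorted(set(ACRONYM_RE.findall(text)))

def expansion_by_regex(acronym, text):
    """Match the common 'Long Form (ACR)' pattern: as many words as the
    acronym has characters, immediately before the parenthesized acronym,
    accepted only if their initials spell out the acronym."""
    n = len(acronym)
    pattern = re.compile(
        r"((?:\w+[\s-]+){%d}\w+)\s*\(%s\)" % (n - 1, re.escape(acronym))
    )
    m = pattern.search(text)
    if m:
        initials = "".join(w[0] for w in re.split(r"[\s-]+", m.group(1)))
        if initials.upper() == acronym:
            return m.group(1)
    return None

def extract_pairs(text, llm_expand=None):
    """Regex identification with an optional LLM fallback. `llm_expand`
    is a hypothetical hook standing in for a GPT-4 query over a small
    window of surrounding text, mirroring the paper's limited-context idea."""
    pairs = {}
    for acr in find_acronyms(text):
        expansion = expansion_by_regex(acr, text)
        if expansion is None and llm_expand is not None:
            i = text.find(acr)
            context = text[max(0, i - 200):i + 200]  # limit the analyzed window
            expansion = llm_expand(acr, context)
        pairs[acr] = expansion
    return pairs
```

Acronyms whose expansions appear in the standard parenthesized form are resolved by the regex alone; only the remainder would trigger the (more expensive, probabilistic) LLM call.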
Related papers
- TextSleuth: Towards Explainable Tampered Text Detection [49.88698441048043]
We propose to explain the basis of tampered text detection with natural language via large multimodal models.
To fill the data gap for this task, we propose a large-scale, comprehensive dataset, ETTD.
Elaborate queries are introduced to generate high-quality anomaly descriptions with GPT-4o.
To automatically filter out low-quality annotations, we also propose prompting GPT-4o to recognize tampered texts.
arXiv Detail & Related papers (2024-12-19T13:10:03Z)
- Evaluating and Improving ChatGPT-Based Expansion of Abbreviations [6.900119856872516]
We present the first empirical study on large language models (LLMs)-based abbreviation expansion.
Our evaluation results suggest that ChatGPT is substantially less accurate than the state-of-the-art approach.
In response to the first cause of these errors, we investigated the effect of various contexts and found that the surrounding source code is the best choice.
arXiv Detail & Related papers (2024-10-31T12:20:24Z)
- ExpLLM: Towards Chain of Thought for Facial Expression Recognition [61.49849866937758]
We propose a novel method called ExpLLM to generate an accurate chain of thought (CoT) for facial expression recognition.
Specifically, we have designed the CoT mechanism from three key perspectives: key observations, overall emotional interpretation, and conclusion.
In experiments on the RAF-DB and AffectNet datasets, ExpLLM outperforms current state-of-the-art FER methods.
arXiv Detail & Related papers (2024-09-04T15:50:16Z)
- Out of Length Text Recognition with Sub-String Matching [54.63761108308825]
In this paper, we term this task Out of Length (OOL) text recognition.
We propose a novel method called OOL Text Recognition with sub-String Matching (SMTR)
SMTR comprises two cross-attention-based modules: one encodes a sub-string containing multiple characters into next and previous queries, and the other employs the queries to attend to the image features.
arXiv Detail & Related papers (2024-07-17T05:02:17Z)
- LDKP: A Dataset for Identifying Keyphrases from Long Scientific Documents [48.84086818702328]
Identifying keyphrases (KPs) from text documents is a fundamental task in natural language processing and information retrieval.
The vast majority of the benchmark datasets for this task are from the scientific domain, containing only the document title and abstract information.
This presents three challenges for real-world applications: human-written summaries are unavailable for most documents, the documents are almost always long, and a high percentage of KPs are directly found beyond the limited context of title and abstract.
arXiv Detail & Related papers (2022-03-29T08:44:57Z)
- CABACE: Injecting Character Sequence Information and Domain Knowledge for Enhanced Acronym and Long-Form Extraction [0.0]
We propose a novel framework CABACE: Character-Aware BERT for ACronym Extraction.
It takes into account character sequences in text and is adapted to scientific and legal domains by masked language modelling.
We show that the proposed framework is better suited than baseline models for zero-shot generalization to non-English languages.
arXiv Detail & Related papers (2021-12-25T14:03:09Z)
- PSG: Prompt-based Sequence Generation for Acronym Extraction [26.896811663334162]
We propose a Prompt-based Sequence Generation (PSG) method for the acronym extraction task.
Specifically, we design a template that prompts the model to generate the extracted acronym texts auto-regressively.
A position extraction algorithm is designed for extracting the position of the generated answers.
arXiv Detail & Related papers (2021-11-29T02:14:38Z)
- BERT-based Acronym Disambiguation with Multiple Training Strategies [8.82012912690778]
The acronym disambiguation (AD) task aims to find the correct expansion of an ambiguous acronym in a given sentence.
We propose a binary classification model incorporating BERT and several training strategies including dynamic negative sample selection.
Experiments on SciAD show the effectiveness of our proposed model and our score ranks 1st in SDU@AAAI-21 shared task 2: Acronym Disambiguation.
arXiv Detail & Related papers (2021-02-25T05:40:21Z)
- Acronym Identification and Disambiguation Shared Tasks for Scientific Document Understanding [41.63345823743157]
Acronyms are short forms of longer phrases frequently used in writing.
Every text understanding tool should be capable of recognizing acronyms in text.
To push forward research in this direction, we have organized two shared tasks for acronym identification and acronym disambiguation in scientific documents.
arXiv Detail & Related papers (2020-12-22T00:29:15Z)
- What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation [74.42107665213909]
Acronyms are the short forms of phrases that facilitate conveying lengthy sentences in documents and serve as one of the mainstays of writing.
Due to their importance, identifying acronyms and corresponding phrases (AI) and finding the correct meaning of each acronym (i.e., acronym disambiguation (AD)) are crucial for text understanding.
Despite the recent progress on this task, there are some limitations in the existing datasets which hinder further improvement.
arXiv Detail & Related papers (2020-10-28T00:12:36Z)
- Enabling Language Models to Fill in the Blanks [81.59381915581892]
We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document.
We train (or fine-tune) off-the-shelf language models on sequences containing the concatenation of artificially-masked text and the text which was masked.
We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.
arXiv Detail & Related papers (2020-05-11T18:00:03Z)
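The training-sequence construction that the last entry describes (concatenating artificially-masked text with the text that was masked) can be sketched as follows. The special tokens and the joining format are illustrative placeholders, not the paper's exact vocabulary.

```python
BLANK, SEP, ANS = "[blank]", "[sep]", "[answer]"

def make_infilling_example(text, spans):
    """Build one training sequence for infilling by language modeling:
    replace each (start, end) character span with a blank token, then
    append the removed spans, in order, after a separator token."""
    masked_parts, answers, prev = [], [], 0
    for start, end in spans:
        masked_parts.append(text[prev:start])  # text before the span
        masked_parts.append(BLANK)             # placeholder for the span
        answers.append(text[start:end])        # the removed text, kept for the target
        prev = end
    masked_parts.append(text[prev:])           # text after the last span
    return "".join(masked_parts) + f" {SEP} " + f" {ANS} ".join(answers)
```

A language model trained (or fine-tuned) on such sequences learns to continue the masked prefix with the missing spans, which is what lets it fill blanks at arbitrary positions at inference time.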
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.