Information Retrieval in Friction Stir Welding of Aluminum Alloys by
using Natural Language Processing based Algorithms
- URL: http://arxiv.org/abs/2204.12309v1
- Date: Mon, 25 Apr 2022 16:36:00 GMT
- Title: Information Retrieval in Friction Stir Welding of Aluminum Alloys by
using Natural Language Processing based Algorithms
- Authors: Akshansh Mishra
- Abstract summary: Text summarization is a technique for condensing a large piece of text into a few key elements that convey a general impression of the content.
Natural Language Processing (NLP) is the sub-division of Artificial Intelligence that narrows the gap between technology and human cognition.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text summarization is a technique for condensing a large piece of
text into a few key elements that convey a general impression of the content.
It becomes vital when someone requires a quick and precise summary of a large
amount of information, since summarizing text manually is costly and
time-consuming. Natural Language Processing (NLP) is the sub-division of
Artificial Intelligence that narrows the gap between technology and human
cognition by extracting the relevant information from piles of data. In the
present work, scientific information regarding the Friction Stir Welding of
Aluminum alloys was collected from the abstracts of scholarly research papers.
To extract the relevant information from these research abstracts, four
Natural Language Processing based algorithms, i.e., Latent Semantic Analysis
(LSA), the Luhn algorithm, the LexRank algorithm, and the Kullback-Leibler
(KL) algorithm, were used. To evaluate the accuracy of these algorithms,
Recall-Oriented Understudy for Gisting Evaluation (ROUGE) was used. The
results showed that the Luhn algorithm achieved the highest F1-score, 0.413,
in comparison to the other algorithms.
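The abstract does not include an implementation, but the pipeline it describes can be sketched with off-the-shelf tools. Below is a minimal sketch assuming the open-source `sumy` library for the four extractive summarizers and the `rouge-score` package for ROUGE-1 F1 evaluation; the input text, reference summary, and sentence count are placeholders, not values from the paper.

```python
# Minimal sketch (not the paper's code): run the four extractive
# summarizers named in the abstract and score each against a reference
# summary with ROUGE-1 F1. Requires: pip install sumy rouge-score nltk
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer
from sumy.summarizers.luhn import LuhnSummarizer
from sumy.summarizers.lex_rank import LexRankSummarizer
from sumy.summarizers.kl import KLSummarizer
from rouge_score import rouge_scorer

abstract_text = "..."      # FSW research abstracts (placeholder)
reference_summary = "..."  # human reference summary (placeholder)

parser = PlaintextParser.from_string(abstract_text, Tokenizer("english"))
summarizers = {
    "LSA": LsaSummarizer(),
    "Luhn": LuhnSummarizer(),
    "LexRank": LexRankSummarizer(),
    "KL": KLSummarizer(),
}

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
for name, summarize in summarizers.items():
    # Each summarizer extracts the sentences it ranks as most salient.
    sentences = summarize(parser.document, sentences_count=3)
    candidate = " ".join(str(s) for s in sentences)
    f1 = scorer.score(reference_summary, candidate)["rouge1"].fmeasure
    print(f"{name}: ROUGE-1 F1 = {f1:.3f}")
```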
Related papers
- GigaCheck: Detecting LLM-generated Content [72.27323884094953]
In this work, we investigate the task of generated-text detection by proposing GigaCheck.
Our research explores two approaches: (i) distinguishing human-written texts from LLM-generated ones, and (ii) detecting LLM-generated intervals in Human-Machine collaborative texts.
Specifically, we use a fine-tuned general-purpose LLM in conjunction with a DETR-like detection model, adapted from computer vision, to localize artificially generated intervals within text.
arXiv Detail & Related papers (2024-10-31T08:30:55Z)
- A Universal Prompting Strategy for Extracting Process Model Information from Natural Language Text using Large Language Models [0.8899670429041453]
We show that generative large language models (LLMs) can solve NLP tasks with very high quality without the need for extensive data.
Based on a novel prompting strategy, we show that LLMs are able to outperform state-of-the-art machine learning approaches.
arXiv Detail & Related papers (2024-07-26T06:39:35Z)
- From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models [63.188607839223046]
This survey focuses on the benefits of scaling compute during inference.
We explore three areas under a unified mathematical formalism: token-level generation algorithms, meta-generation algorithms, and efficient generation.
arXiv Detail & Related papers (2024-06-24T17:45:59Z)
- The CLRS-Text Algorithmic Reasoning Language Benchmark [48.45201665463275]
CLRS-Text is a textual version of the CLRS benchmark.
CLRS-Text is capable of procedurally generating trace data for thirty diverse, challenging algorithmic tasks.
We fine-tune and evaluate various LMs as generalist executors on this benchmark.
arXiv Detail & Related papers (2024-06-06T16:29:25Z)
- Performance Prediction of Data-Driven Knowledge summarization of High Entropy Alloys (HEAs) literature implementing Natural Language Processing algorithms [0.0]
The goal of natural language processing (NLP) is to get machine intelligence to process words the same way a human brain does.
Five NLP algorithms, namely Gensim, Sumy, Luhn, Latent Semantic Analysis (LSA), and the Kullback-Leibler (KL) algorithm, are implemented.
The Luhn algorithm has the highest accuracy score for the knowledge summarization tasks compared to the other algorithms.
arXiv Detail & Related papers (2023-11-06T16:22:32Z)
- Relation-aware Ensemble Learning for Knowledge Graph Embedding [68.94900786314666]
We propose to learn an ensemble by leveraging existing methods in a relation-aware manner.
Exploring these semantics with a relation-aware ensemble, however, leads to a much larger search space than general ensemble methods.
We propose a divide-search-combine algorithm RelEns-DSC that searches the relation-wise ensemble weights independently.
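As a rough illustration of this relation-wise idea (not the RelEns-DSC implementation, whose details are in the paper), one can search ensemble weights independently per relation on held-out triples; the two-model setup, weight grid, and ranking proxy below are all assumptions.

```python
# Illustrative sketch: per-relation ensemble-weight search for two KG
# embedding models, instead of one global weight vector.
import numpy as np

def search_relation_weights(val_scores, val_labels, relations, grid=11):
    """val_scores: (2, n) triple scores from two base models (assumed);
    val_labels: (n,) 1 = true triple, 0 = corrupted; relations: (n,) ids."""
    weights = {}
    for r in np.unique(relations):
        mask = relations == r
        labels_r = val_labels[mask]
        if labels_r.min() == labels_r.max():
            weights[int(r)] = 0.5  # no positive/negative contrast: fall back
            continue
        best_w, best_sep = 0.5, -1.0
        for w in np.linspace(0.0, 1.0, grid):
            combined = w * val_scores[0, mask] + (1 - w) * val_scores[1, mask]
            # Proxy for ranking quality: how often true triples outscore
            # the median corrupted triple (a stand-in for MRR/Hits@k).
            thresh = np.median(combined[labels_r == 0])
            sep = np.mean(combined[labels_r == 1] > thresh)
            if sep > best_sep:
                best_w, best_sep = w, sep
        weights[int(r)] = best_w
    return weights
```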
arXiv Detail & Related papers (2023-10-13T07:40:12Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- Graph-based Semantical Extractive Text Analysis [0.0]
In this work, we improve the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text.
Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework.
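A minimal sketch of this graph-based idea, under the assumption of TF-IDF cosine similarity as the semantic measure (the paper's exact similarity function may differ): sentences become graph nodes, similarity weights the edges, and PageRank centrality selects the summary.

```python
# Illustrative TextRank-style summarizer: rank sentences by PageRank
# centrality on a similarity-weighted sentence graph.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, top_k=3):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)       # pairwise semantic similarity
    graph = nx.from_numpy_array(sim)     # weighted undirected graph
    ranks = nx.pagerank(graph, weight="weight")
    top = sorted(ranks, key=ranks.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(top)]  # keep document order
```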
arXiv Detail & Related papers (2022-12-19T18:30:26Z)
- Using the Full-text Content of Academic Articles to Identify and Evaluate Algorithm Entities in the Domain of Natural Language Processing [7.163189900803623]
This article takes the field of natural language processing (NLP) as an example and identifies algorithms from academic papers in the field.
A dictionary of algorithms is constructed by manually annotating the contents of papers, and sentences containing algorithms in the dictionary are extracted through dictionary-based matching.
The number of articles mentioning an algorithm is used as an indicator to analyze the influence of that algorithm.
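The matching-and-counting step described above is simple enough to sketch directly; the dictionary entries and paper texts below are placeholders, not the paper's annotated dictionary.

```python
# Sketch: dictionary-based matching of algorithm names, counting the
# number of articles that mention each algorithm at least once.
import re
from collections import Counter

algorithm_dict = ["LDA", "TextRank", "word2vec", "CRF"]  # placeholder names
papers = {"paper-1": "... full text ...", "paper-2": "... full text ..."}

mention_counts = Counter()
for text in papers.values():
    for name in algorithm_dict:
        # \b enforces whole-word matches so short names don't fire inside
        # longer tokens; count each article once per algorithm.
        if re.search(rf"\b{re.escape(name)}\b", text, flags=re.IGNORECASE):
            mention_counts[name] += 1

for name, n_articles in mention_counts.most_common():
    print(name, n_articles)
```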
arXiv Detail & Related papers (2020-10-21T08:24:18Z)
- Feature Extraction of Text for Deep Learning Algorithms: Application on Fake News Detection [0.0]
It is shown that deep learning algorithms, given only the alphabet (letter) frequencies of a news article's original text and no information about the order of the letters, can classify fake news and trustworthy news with high accuracy.
It seems that alphabet frequencies contain useful features for understanding the complex context or meaning of the original text.
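A brief sketch of this feature idea, assuming a 26-letter lowercase alphabet and simple frequency normalization (both assumptions): a fixed-length, order-free frequency vector that could feed a deep-learning classifier.

```python
# Sketch: order-free letter-frequency features for a news classifier.
import string
from collections import Counter

def letter_frequencies(text):
    """26-dim vector of a-z relative frequencies, ignoring letter order."""
    letters = [c for c in text.lower() if c in string.ascii_lowercase]
    counts = Counter(letters)
    total = max(len(letters), 1)  # guard against empty input
    return [counts[c] / total for c in string.ascii_lowercase]

features = letter_frequencies("Example news article text ...")
# `features` would be the per-article input to a deep-learning classifier.
```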
arXiv Detail & Related papers (2020-10-12T07:43:01Z)
- Discovering Reinforcement Learning Algorithms [53.72358280495428]
Reinforcement learning algorithms update an agent's parameters according to one of several possible rules.
This paper introduces a new meta-learning approach that discovers an entire update rule.
It includes both 'what to predict' (e.g. value functions) and 'how to learn from it' by interacting with a set of environments.
arXiv Detail & Related papers (2020-07-17T07:38:39Z)