Performance Prediction of Data-Driven Knowledge summarization of High
Entropy Alloys (HEAs) literature implementing Natural Language Processing
algorithms
- URL: http://arxiv.org/abs/2311.07584v1
- Date: Mon, 6 Nov 2023 16:22:32 GMT
- Title: Performance Prediction of Data-Driven Knowledge summarization of High
Entropy Alloys (HEAs) literature implementing Natural Language Processing
algorithms
- Authors: Akshansh Mishra, Vijaykumar S Jatti, Vaishnavi More, Anish Dasgupta,
Devarrishi Dixit and Eyob Messele Sefene
- Abstract summary: The goal of natural language processing (NLP) is to get a machine intelligence to process words the same way a human brain does.
Five NLP algorithms, namely Gensim, Sumy, Luhn, Latent Semantic Analysis (LSA), and Kullback-Leibler (KL), are implemented.
The Luhn algorithm achieves the highest accuracy score for the knowledge summarization task compared to the other algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to interpret spoken language is connected to natural language
processing. It involves teaching the AI how words relate to one another, how
they are meant to be used, and in what settings. The goal of natural language
processing (NLP) is to get a machine intelligence to process words the same way
a human brain does. This enables machine intelligence to interpret, arrange,
and comprehend textual data by processing natural language. The technology
can comprehend what is communicated, whether through speech or writing,
because AI processes language more quickly than humans can. In the present
study, five NLP algorithms, namely Gensim, Sumy, Luhn, Latent Semantic
Analysis (LSA), and Kullback-Leibler (KL), are implemented for the first time
for knowledge summarization of the High Entropy Alloys (HEAs) literature. The
performance of these algorithms is predicted using the BLEU and ROUGE scores.
The results show that the Luhn algorithm achieves the highest accuracy score
for the knowledge summarization task compared to the other algorithms.
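As a rough illustration of such a pipeline, the sketch below runs the Luhn, LSA, and KL extractive summarizers from the open-source sumy library over a placeholder HEA passage and scores each output against a placeholder reference with ROUGE and BLEU. The text, reference summary, and settings are invented stand-ins, not the paper's corpus; Gensim's TextRank-based summarize helper (removed in Gensim 4.x) is omitted.

```python
# A minimal sketch, assuming `pip install sumy rouge-score nltk` and that
# nltk's "punkt" sentence tokenizer data is available (nltk.download("punkt")).
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.luhn import LuhnSummarizer
from sumy.summarizers.lsa import LsaSummarizer
from sumy.summarizers.kl import KLSummarizer
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Placeholder document and reference, invented for illustration only.
document = (
    "High entropy alloys contain five or more principal elements in near-equal ratios. "
    "Their high configurational entropy stabilizes simple solid-solution phases. "
    "These alloys show promising strength, ductility, and corrosion resistance. "
    "Processing routes strongly influence the resulting microstructure."
)
reference = "High entropy alloys are multi-element alloys with promising strength."

parser = PlaintextParser.from_string(document, Tokenizer("english"))
summarizers = {
    "Luhn": LuhnSummarizer(),  # rates sentences by dense clusters of significant words
    "LSA": LsaSummarizer(),    # ranks sentences via singular value decomposition
    "KL": KLSummarizer(),      # greedily minimizes KL divergence to the source distribution
}

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
for name, summarizer in summarizers.items():
    # Keep the 2 top-ranked sentences; the sentence budget is a free choice here.
    summary = " ".join(str(s) for s in summarizer(parser.document, 2))
    rouge = scorer.score(reference, summary)
    bleu = sentence_bleu([reference.split()], summary.split(),
                         smoothing_function=SmoothingFunction().method1)
    print(f"{name}: ROUGE-1 F1={rouge['rouge1'].fmeasure:.3f}  BLEU={bleu:.3f}")
```

For intuition on why Luhn can win on dense technical prose: it scores each sentence by finding its densest run of significant (frequent, non-stopword) words and computing the square of the significant-word count divided by the run's length, so sentences packed with recurring domain terms rank highest. BLEU rewards n-gram precision against the reference while ROUGE emphasizes recall, which is why both are reported.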
Related papers
- Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
Formal language theory pertains specifically to recognizers.
It is common to instead use proxy tasks that are similar in only an informal sense.
We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings.
arXiv Detail & Related papers (2024-11-11T16:33:25Z)
- A Quantum-Inspired Analysis of Human Disambiguation Processes [0.0]
In this thesis, we apply formalisms from foundational quantum mechanics to study ambiguities arising in linguistics.
The results were subsequently used to predict human behaviour, outperforming current NLP methods.
arXiv Detail & Related papers (2024-08-14T09:21:23Z)
- From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models [63.188607839223046]
This survey focuses on the benefits of scaling compute during inference.
We explore three areas under a unified mathematical formalism: token-level generation algorithms, meta-generation algorithms, and efficient generation.
arXiv Detail & Related papers (2024-06-24T17:45:59Z)
- Executing Natural Language-Described Algorithms with Large Language Models: An Investigation [48.461999568129166]
We examine the capacity of present-day large language models to comprehend and execute algorithms outlined in natural language.
We selected 30 algorithms, generated 300 random-sampled instances, and evaluated whether popular LLMs can understand and execute these algorithms.
Our findings reveal that LLMs, notably GPT-4, can effectively execute programs described in natural language, as long as no heavy numeric computation is involved.
arXiv Detail & Related papers (2024-02-23T05:31:36Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity (a toy AST-walking sketch appears after this list).
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- Fast Quantum Algorithm for Attention Computation [18.44025861624981]
Large language models (LLMs) have demonstrated exceptional performance across a wide range of tasks.
The attention scheme plays a crucial role in the architecture of large language models (LLMs).
It is well known that quantum computation has certain computational advantages over classical computation.
arXiv Detail & Related papers (2023-07-16T14:00:42Z)
- AI2: The next leap toward native language based and explainable machine learning framework [1.827510863075184]
The proposed framework, named AI2, uses a natural language interface that allows a non-specialist to benefit from machine learning algorithms.
The primary contribution of the AI2 framework is that it allows a user to call machine learning algorithms in English, making its interface easier to use.
Another contribution is a preprocessing module that helps to describe and load data properly.
arXiv Detail & Related papers (2023-01-09T14:48:35Z)
- Information Retrieval in Friction Stir Welding of Aluminum Alloys by using Natural Language Processing based Algorithms [0.0]
Text summarization is a technique for condensing a large body of text into a few key elements that convey a general impression of the content.
Natural Language Processing (NLP) is the subdivision of Artificial Intelligence that narrows the gap between technology and human cognition.
arXiv Detail & Related papers (2022-04-25T16:36:00Z)
- How to transfer algorithmic reasoning knowledge to learn new algorithms? [23.335939830754747]
We investigate how we can use algorithms for which we have access to the execution trace to learn to solve similar tasks for which we do not.
We create a dataset including 9 algorithms and 3 different graph types.
We validate this empirically and show how multi-task learning can instead be used to transfer algorithmic reasoning knowledge.
arXiv Detail & Related papers (2021-10-26T22:14:47Z)
- ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning [97.10875695679499]
We propose a novel contrastive learning framework named ERICA in the pre-training phase to obtain a deeper understanding of the entities and their relations in text.
Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks.
arXiv Detail & Related papers (2020-12-30T03:35:22Z)
- Strong Generalization and Efficiency in Neural Programs [69.18742158883869]
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
By carefully designing the input/output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes.
arXiv Detail & Related papers (2020-07-07T17:03:02Z)
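For the Program-of-Thoughts entry above, here is the promised toy AST-walking sketch. It is not the paper's actual CIRS metric, only a crude structural-complexity proxy built on Python's standard ast module; the counted node types and the test snippet are invented for illustration.

```python
# A toy proxy for structural complexity: count control-flow, definition, and
# call nodes in the abstract syntax tree of a source snippet. Not CIRS itself.
import ast

def structural_complexity(source: str) -> int:
    """Count AST nodes commonly associated with logical structure."""
    tree = ast.parse(source)
    counted = (ast.If, ast.For, ast.While, ast.FunctionDef, ast.Call)
    return sum(isinstance(node, counted) for node in ast.walk(tree))

snippet = "for i in range(3):\n    if i % 2:\n        print(i)"
print(structural_complexity(snippet))  # 4: one For, one If, two Calls (range, print)
```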