A Comprehensive Review on Summarizing Financial News Using Deep Learning
- URL: http://arxiv.org/abs/2109.10118v1
- Date: Tue, 21 Sep 2021 12:00:31 GMT
- Title: A Comprehensive Review on Summarizing Financial News Using Deep Learning
- Authors: Saurabh Kamal and Sahil Sharma
- Abstract summary: Natural Language Processing techniques are typically used to deal with such a large amount of data and get valuable information out of it.
In this research, embedding techniques used are BoW, TF-IDF, Word2Vec, BERT, GloVe, and FastText, and then fed to deep learning models such as RNN and LSTM.
It was expected that Deep Learning would be applied to get the desired results or achieve better accuracy than the state-of-the-art.
- Score: 8.401473551081747
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Investors make investment decisions depending on several factors such as
fundamental analysis, technical analysis, and quantitative analysis. Sentiment
analysis of news headlines is another basis on which investors can make
investment decisions, and it is the sole focus of this study. Natural Language
Processing techniques are typically used to deal with such a large amount of
data and get valuable information out of it. NLP algorithms convert raw text
into numerical representations that machines can easily understand and
interpret. This conversion can be done using various embedding techniques. In
this research, embedding techniques used are BoW, TF-IDF, Word2Vec, BERT,
GloVe, and FastText, and then fed to deep learning models such as RNN and LSTM.
This work aims to evaluate these models' performance to choose the most robust
model for identifying the significant factors influencing the prediction. During
this research, it was expected that Deep Learning would be applied to get the
desired results or achieve better accuracy than the state-of-the-art. The
models' outputs are compared to determine which one performs better.
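The pipeline the abstract describes starts by converting raw headlines into numerical vectors before any deep model sees them. Below is a minimal pure-Python sketch of the two simplest embedding techniques named above, BoW and TF-IDF; the sample headlines and function names are illustrative, not from the paper, and a real pipeline would typically use libraries such as scikit-learn or gensim.

```python
import math
from collections import Counter

def bag_of_words(docs):
    """Map each document to a raw count vector over a shared vocabulary."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for d in docs:
        vec = [0] * len(vocab)
        for w, c in Counter(d.lower().split()).items():
            vec[index[w]] += c
        vectors.append(vec)
    return vocab, vectors

def tf_idf(docs):
    """Re-weight BoW counts: term frequency times inverse document frequency,
    so words appearing in every document (here 'stocks', 'on') get weight 0."""
    vocab, counts = bag_of_words(docs)
    n = len(docs)
    df = [sum(1 for v in counts if v[j] > 0) for j in range(len(vocab))]
    weighted = []
    for v in counts:
        total = sum(v) or 1
        weighted.append([(c / total) * math.log(n / df[j]) if c else 0.0
                         for j, c in enumerate(v)])
    return vocab, weighted

# Toy headlines standing in for the financial news data described above.
headlines = ["stocks rally on earnings", "stocks fall on fears"]
vocab, bow = bag_of_words(headlines)
_, weights = tf_idf(headlines)
```

The resulting fixed-length vectors are what would then be fed, per headline, into an RNN or LSTM classifier.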
Related papers
- The Economic Implications of Large Language Model Selection on Earnings and Return on Investment: A Decision Theoretic Model [0.0]
We use a decision-theoretic approach to compare the financial impact of different language models.
The study reveals how the superior accuracy of more expensive models can, under certain conditions, justify a greater investment.
This article provides a framework for companies looking to optimize their technology choices.
arXiv Detail & Related papers (2024-05-27T20:08:41Z) - ECC Analyzer: Extract Trading Signal from Earnings Conference Calls using Large Language Model for Stock Performance Prediction [8.922126245005336]
This study introduces a novel framework, ECC Analyzer, combining Large Language Models (LLMs) and multi-modal techniques to extract richer, more predictive insights.
The model begins by summarizing the transcript's structure and analyzing the speakers' mode and confidence level.
It then uses Retrieval-Augmented Generation (RAG) based methods to meticulously extract the focuses that have a significant impact on stock performance.
arXiv Detail & Related papers (2024-04-29T07:11:39Z) - Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective [106.92016199403042]
We empirically investigate knowledge transfer from larger to smaller models through a parametric perspective.
We employ sensitivity-based techniques to extract and align knowledge-specific parameters between different large language models.
Our findings highlight the critical factors contributing to the process of parametric knowledge transfer.
arXiv Detail & Related papers (2023-10-17T17:58:34Z) - Towards reducing hallucination in extracting information from financial
reports using Large Language Models [1.2289361708127877]
We show how Large Language Models (LLMs) can efficiently and rapidly extract information from earnings report transcripts.
We evaluate the outcomes of various LLMs with and without using our proposed approach based on various objective metrics for evaluating Q&A systems.
arXiv Detail & Related papers (2023-10-16T18:45:38Z) - Metric Tools for Sensitivity Analysis with Applications to Neural
Networks [0.0]
Explainable Artificial Intelligence (XAI) aims to provide interpretations for predictions made by Machine Learning models.
In this paper, a theoretical framework is proposed to study sensitivities of ML models using metric techniques.
A complete family of new quantitative metrics called $\alpha$-curves is extracted.
arXiv Detail & Related papers (2023-05-03T18:10:21Z) - Pre-trained Embeddings for Entity Resolution: An Experimental Analysis
[Experiment, Analysis & Benchmark] [65.11858854040544]
We perform a thorough experimental analysis of 12 popular language models over 17 established benchmark datasets.
First, we assess their vectorization overhead for converting all input entities into dense embedding vectors.
Second, we investigate their blocking performance, perform a detailed scalability analysis, and compare them with the state-of-the-art deep learning-based blocking method.
Third, we conclude with their relative performance for both supervised and unsupervised matching.
arXiv Detail & Related papers (2023-04-24T08:53:54Z) - Application of Transformers based methods in Electronic Medical Records:
A Systematic Literature Review [77.34726150561087]
This work presents a systematic literature review of state-of-the-art advances using transformer-based methods on electronic medical records (EMRs) in different NLP tasks.
arXiv Detail & Related papers (2023-04-05T22:19:42Z) - Analyzing Machine Learning Models for Credit Scoring with Explainable AI
and Optimizing Investment Decisions [0.0]
This paper examines two different yet related questions related to explainable AI (XAI) practices.
The study compares various machine learning models, including single classifiers (logistic regression, decision trees, LDA, QDA), heterogeneous ensembles (AdaBoost, Random Forest), and sequential neural networks.
Two advanced post-hoc model explainability techniques - LIME and SHAP are utilized to assess ML-based credit scoring models.
arXiv Detail & Related papers (2022-09-19T21:44:42Z) - An Empirical Investigation of Commonsense Self-Supervision with
Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z) - Sentiment Analysis Based on Deep Learning: A Comparative Study [69.09570726777817]
The study of public opinion can provide us with valuable information.
The efficiency and accuracy of sentiment analysis are hindered by the challenges encountered in natural language processing.
This paper reviews the latest studies that have employed deep learning to solve sentiment analysis problems.
arXiv Detail & Related papers (2020-06-05T16:28:10Z) - Rethinking Generalization of Neural Models: A Named Entity Recognition
Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.