Exploiting Network Structures to Improve Semantic Representation for the
Financial Domain
- URL: http://arxiv.org/abs/2107.05885v1
- Date: Tue, 13 Jul 2021 07:32:18 GMT
- Title: Exploiting Network Structures to Improve Semantic Representation for the
Financial Domain
- Authors: Chao Feng, Shi-jie We
- Abstract summary: This paper presents the MiniTrue team's participation in the FinSim-3 shared task on learning semantic similarities for the financial domain in the English language.
Our approach combines contextual embeddings learned by transformer-based language models with network structure embeddings extracted from external knowledge sources.
Experimental results show that the model with knowledge graph embeddings achieves superior results compared to the models that use only contextual embeddings.
- Score: 9.13755431537592
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents the participation of the MiniTrue team in the FinSim-3
shared task on learning semantic similarities for the financial domain in the
English language. Our approach combines contextual embeddings learned by
transformer-based language models with network structure embeddings extracted
from external knowledge sources, to create more meaningful representations of
financial domain entities and terms. For this, two BERT-based language models
and a knowledge graph embedding model are used. In addition, we propose a voting
function to combine the three basic models for the final inference. Experimental
results show that the model with the knowledge graph embeddings achieves
superior results compared to the models that use only contextual embeddings.
Moreover, we observe that our voting function brings an extra benefit
to the final system.
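The abstract describes two mechanisms: fusing contextual embeddings with knowledge graph embeddings, and a voting function that combines three base models for the final inference. A minimal sketch of both ideas is below; the paper does not specify its exact fusion or tie-breaking rules, so the concatenation-based fusion, the simple majority vote, and all names here are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def fuse(contextual_vec, kg_vec):
    # Assumed fusion: concatenate the contextual embedding (e.g. from a
    # BERT-based model) with the knowledge graph embedding to form one
    # feature vector for the downstream classifier.
    return list(contextual_vec) + list(kg_vec)

def vote(predictions):
    # Assumed voting function: simple majority over the labels predicted
    # by the three base models; ties fall back to the first-listed model.
    counts = Counter(predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Example: two models agree on "Bonds", one predicts "Equity Index".
fused = fuse([0.1, 0.2], [0.3])          # 2-dim + 1-dim -> 3-dim vector
final = vote(["Bonds", "Bonds", "Equity Index"])
```

Majority voting is only one plausible reading of the paper's "voting function"; a weighted scheme favoring the knowledge-graph-augmented model would also be consistent with the reported results.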
Related papers
- Enhancing Language Models for Financial Relation Extraction with Named Entities and Part-of-Speech [5.104305392215512]
The FinRE task involves identifying entities and their relations in a piece of financial statement/text.
We propose a strategy that improves the performance of pre-trained language models by augmenting them with Named Entity Recognition (NER) and Part-Of-Speech (POS) information.
Experiments on a financial relations dataset show promising results and highlight the benefits of incorporating NER and POS in existing models.
arXiv Detail & Related papers (2024-05-02T14:33:05Z) - Towards Graph Foundation Models: A Survey and Beyond [66.37994863159861]
Foundation models have emerged as critical components in a variety of artificial intelligence applications.
The capabilities of foundation models to generalize and adapt motivate graph machine learning researchers to discuss the potential of developing a new graph learning paradigm.
This article introduces the concept of Graph Foundation Models (GFMs), and offers an exhaustive explanation of their key characteristics and underlying technologies.
arXiv Detail & Related papers (2023-10-18T09:31:21Z) - Modeling Multi-Granularity Hierarchical Features for Relation Extraction [26.852869800344813]
We propose a novel method to extract multi-granularity features based solely on the original input sentences.
We show that effective structured features can be attained even without external knowledge.
arXiv Detail & Related papers (2022-04-09T09:44:05Z) - Knowledge Graph Augmented Network Towards Multiview Representation
Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z) - Incorporating Linguistic Knowledge for Abstractive Multi-document
Summarization [20.572283625521784]
We develop a neural network based abstractive multi-document summarization (MDS) model.
We process the dependency information into the linguistic-guided attention mechanism.
With the help of linguistic signals, sentence-level relations can be correctly captured.
arXiv Detail & Related papers (2021-09-23T08:13:35Z) - Semantic Representation and Inference for NLP [2.969705152497174]
This thesis investigates the use of deep learning for novel semantic representation and inference.
We contribute the largest publicly available dataset of real-life factual claims for the purpose of automatic claim verification.
We operationalize the compositionality of a phrase contextually by enriching the phrase representation with external word embeddings and knowledge graphs.
arXiv Detail & Related papers (2021-06-15T13:22:48Z) - Fusing Context Into Knowledge Graph for Commonsense Reasoning [21.33294077354958]
We propose to utilize external entity descriptions to provide contextual information for graph entities.
For the CommonsenseQA task, our model first extracts concepts from the question and choice, and then finds a related triple between these concepts.
We achieve state-of-the-art results in the CommonsenseQA dataset with an accuracy of 80.7% (single model) and 83.3% (ensemble model) on the official leaderboard.
arXiv Detail & Related papers (2020-12-09T00:57:49Z) - Neuro-Symbolic Representations for Video Captioning: A Case for
Leveraging Inductive Biases for Vision and Language [148.0843278195794]
We propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
Our approach uses a dictionary learning-based method of learning relations between videos and their paired text descriptions.
arXiv Detail & Related papers (2020-11-18T20:21:19Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z) - DomBERT: Domain-oriented Language Model for Aspect-based Sentiment
Analysis [71.40586258509394]
We propose DomBERT, an extension of BERT to learn from both in-domain corpus and relevant domain corpora.
Experiments are conducted on an assortment of tasks in aspect-based sentiment analysis, demonstrating promising results.
arXiv Detail & Related papers (2020-04-28T21:07:32Z) - Object Relational Graph with Teacher-Recommended Learning for Video
Captioning [92.48299156867664]
We propose a complete video captioning system including both a novel model and an effective training strategy.
Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation.
Meanwhile, we design a teacher-recommended learning (TRL) method to make full use of the successful external language model (ELM) to integrate the abundant linguistic knowledge into the caption model.
arXiv Detail & Related papers (2020-02-26T15:34:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.