Using contextual sentence analysis models to recognize ESG concepts
- URL: http://arxiv.org/abs/2207.01402v1
- Date: Mon, 4 Jul 2022 13:33:21 GMT
- Title: Using contextual sentence analysis models to recognize ESG concepts
- Authors: Elvys Linhares Pontes and Mohamed Benjannet and Jose G. Moreno and
Antoine Doucet
- Abstract summary: This paper summarizes the joint participation of the Trading Central Labs and the L3i laboratory of the University of La Rochelle on two sub-tasks of the FinSim-4 evaluation campaign.
The first sub-task aims to enrich the 'Fortia ESG taxonomy' with new lexicon entries while the second one aims to classify sentences as either 'sustainable' or 'unsustainable' with respect to ESG related factors.
- Score: 8.905370601886112
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper summarizes the joint participation of the Trading Central Labs and
the L3i laboratory of the University of La Rochelle on both sub-tasks of the
Shared Task FinSim-4 evaluation campaign. The first sub-task aims to enrich the
'Fortia ESG taxonomy' with new lexicon entries while the second one aims to
classify sentences to either 'sustainable' or 'unsustainable' with respect to
ESG (Environment, Social and Governance) related factors. For the first
sub-task, we proposed a model based on pre-trained Sentence-BERT models to
project sentences and concepts in a common space in order to better represent
ESG concepts. The official task results show that our system yields a
significant performance improvement compared to the baseline and outperforms
all other submissions on the first sub-task. For the second sub-task, we
combine the RoBERTa model with a feed-forward multi-layer perceptron in order
to extract the context of sentences and classify them. Our model achieved high
accuracy scores (over 92%) and was ranked among the top 5 systems.
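The common-space idea behind the first sub-task can be sketched as follows: embed both candidate lexicon entries and ESG concepts with the same sentence encoder, then attach each entry to its nearest concept by cosine similarity. This is a minimal illustration only; the random vectors stand in for Sentence-BERT embeddings, and the concept names are hypothetical examples, not entries from the Fortia ESG taxonomy.

```python
# Minimal sketch of nearest-concept matching in a shared embedding space.
# Random vectors stand in for Sentence-BERT outputs; concept names are
# hypothetical, not taken from the Fortia ESG taxonomy.
import numpy as np

rng = np.random.default_rng(0)

concept_names = ["Emissions", "Board diversity", "Water management"]
concept_embs = rng.normal(size=(3, 8))   # stand-ins for concept embeddings
candidate_emb = rng.normal(size=(8,))    # stand-in for a new lexicon entry

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(candidate_emb, c) for c in concept_embs]
best = concept_names[int(np.argmax(scores))]
```

In practice the embeddings would come from a pre-trained Sentence-BERT model applied to both the concept labels and the candidate entries, so that both live in the same vector space.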
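For the second sub-task, the classifier structure described above (a contextual encoder followed by a feed-forward multi-layer perceptron) can be sketched as a forward pass. The 768-dimensional sentence vector below is a random placeholder for a RoBERTa representation, and the hidden size and untrained weights are assumptions for illustration, not values from the paper.

```python
# Sketch of an MLP classification head over a sentence representation.
# The "RoBERTa" vector and all weights are random placeholders; sizes are
# assumed for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
repr_dim, hidden_dim, n_classes = 768, 64, 2   # assumed dimensions

sentence_repr = rng.normal(size=(repr_dim,))   # stand-in for a RoBERTa encoding
W1 = rng.normal(scale=0.02, size=(repr_dim, hidden_dim))
b1 = np.zeros(hidden_dim)
W2 = rng.normal(scale=0.02, size=(hidden_dim, n_classes))
b2 = np.zeros(n_classes)

def mlp_head(x):
    h = np.maximum(0.0, x @ W1 + b1)      # ReLU hidden layer
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

probs = mlp_head(sentence_repr)
labels = ["sustainable", "unsustainable"]
pred = labels[int(np.argmax(probs))]
```

In the actual system the encoder and head would be trained jointly on labeled ESG sentences; here the head simply demonstrates the two-class softmax output.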
Related papers
- A Large-Scale Evaluation of Speech Foundation Models [110.95827399522204]
We establish the Speech processing Universal PERformance Benchmark (SUPERB) to study the effectiveness of the foundation model paradigm for speech.
We propose a unified multi-tasking framework to address speech processing tasks in SUPERB using a frozen foundation model followed by task-specialized, lightweight prediction heads.
arXiv Detail & Related papers (2024-04-15T00:03:16Z)
- Co-guiding for Multi-intent Spoken Language Understanding [53.30511968323911]
We propose a novel model termed Co-guiding Net, which implements a two-stage framework achieving the mutual guidances between the two tasks.
For the first stage, we propose single-task supervised contrastive learning, and for the second stage, we propose co-guiding supervised contrastive learning.
Experiment results on multi-intent SLU show that our model outperforms existing models by a large margin.
arXiv Detail & Related papers (2023-11-22T08:06:22Z)
- Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5 [50.574918785575655]
We compare sequential fine-tuning with a model for multi-task learning in the context of boosting performance on two tasks.
Our results show that while sequential multi-task learning can be tuned to be good at the first of two target tasks, it performs less well on the second and additionally struggles with overfitting.
arXiv Detail & Related papers (2022-10-31T13:26:08Z) - UU-Tax at SemEval-2022 Task 3: Improving the generalizability of
language models for taxonomy classification through data augmentation [0.0]
This paper addresses the SemEval-2022 Task 3 PreTENS: Presupposed Taxonomies evaluating Neural Network Semantics.
The goal of the task is to identify if a sentence is deemed acceptable or not, depending on the taxonomic relationship that holds between a noun pair contained in the sentence.
We propose an effective way to enhance the robustness and the generalizability of language models for better classification.
arXiv Detail & Related papers (2022-10-07T07:41:28Z) - JARVIS: A Neuro-Symbolic Commonsense Reasoning Framework for
Conversational Embodied Agents [14.70666899147632]
We propose a Neuro-Symbolic Commonsense Reasoning framework for modular, generalizable, and interpretable conversational embodied agents.
Our framework achieves state-of-the-art (SOTA) results on all three dialog-based embodied tasks, including Execution from Dialog History (EDH), Trajectory from Dialog (TfD), and Two-Agent Task Completion (TATC).
Our model ranks first in the Alexa Prize SimBot Public Benchmark Challenge.
arXiv Detail & Related papers (2022-08-28T18:30:46Z)
- SUPERB-SG: Enhanced Speech processing Universal PERformance Benchmark for Semantic and Generative Capabilities [76.97949110580703]
We introduce SUPERB-SG, a new benchmark to evaluate pre-trained models across various speech tasks.
We use a lightweight methodology to test the robustness of representations learned by pre-trained models under shifts in data domain.
We also show that the task diversity of SUPERB-SG coupled with limited task supervision is an effective recipe for evaluating the generalizability of model representation.
arXiv Detail & Related papers (2022-03-14T04:26:40Z)
- Controllable Abstractive Dialogue Summarization with Sketch Supervision [56.59357883827276]
Our model achieves state-of-the-art performance on the largest dialogue summarization corpus SAMSum, with as high as 50.79 in ROUGE-L score.
arXiv Detail & Related papers (2021-05-28T19:05:36Z)
- IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation [8.971288666318719]
This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading of Abstract Meaning.
We carefully design several simple and effective approaches adapted to the backbone model (RoBERTa).
Experimental results show that our approaches achieve significant performance improvements over the baseline systems.
arXiv Detail & Related papers (2021-02-25T10:51:48Z)
- BUT-FIT at SemEval-2020 Task 4: Multilingual commonsense [1.433758865948252]
This paper describes work of the BUT-FIT's team at SemEval 2020 Task 4 - Commonsense Validation and Explanation.
In subtasks A and B, our submissions are based on pretrained language representation models (namely ALBERT) and data augmentation.
We experimented with solving the task for another language, Czech, by means of multilingual models and machine translated dataset.
We show that with a strong machine translation system, our system can be used in another language with a small accuracy loss.
arXiv Detail & Related papers (2020-08-17T12:45:39Z)
- Joint Contextual Modeling for ASR Correction and Language Understanding [60.230013453699975]
We propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with language understanding (LU).
We show that the error rates of off-the-shelf ASR and downstream LU systems can be reduced significantly, by 14% relative, with joint models trained using small amounts of in-domain data.
arXiv Detail & Related papers (2020-01-28T22:09:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.