HYU at SemEval-2022 Task 2: Effective Idiomaticity Detection with
Consideration at Different Levels of Contextualization
- URL: http://arxiv.org/abs/2206.11854v1
- Date: Wed, 1 Jun 2022 10:45:40 GMT
- Title: HYU at SemEval-2022 Task 2: Effective Idiomaticity Detection with
Consideration at Different Levels of Contextualization
- Authors: Youngju Joung, Taeuk Kim
- Abstract summary: We propose a unified framework that enables us to consider various aspects of contextualization at different levels.
We show that our approach is effective in improving the performance of related models.
- Score: 6.850627526999892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a unified framework that enables us to consider various aspects of
contextualization at different levels to better identify the idiomaticity of
multi-word expressions. Through extensive experiments, we demonstrate that our
approach based on the inter- and inner-sentence context of a target MWE is
effective in improving the performance of related models. We also share our
experience on SemEval-2022 Task 2 in detail so that future work
on the same task can benefit from it.
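As a rough, hypothetical illustration of this idea (not the authors' exact architecture), the sketch below pairs an encoding of the target sentence on its own (inner-sentence context) with an encoding that also includes neighbouring sentences (inter-sentence context) before classifying the expression as idiomatic or literal. The encoder name, [CLS] pooling, and linear head are assumptions made for the example.

# Hypothetical sketch: combining inner- and inter-sentence context for
# idiomaticity classification. Model name, pooling, and classifier head
# are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ContextualIdiomClassifier(nn.Module):
    def __init__(self, model_name="xlm-roberta-base", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # One view per level of contextualization: the sentence alone
        # (inner-sentence) and the sentence with its neighbours (inter-sentence).
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def _cls(self, enc):
        return self.encoder(**enc).last_hidden_state[:, 0]  # [CLS] vector

    def forward(self, inner_enc, inter_enc):
        pooled = torch.cat([self._cls(inner_enc), self._cls(inter_enc)], dim=-1)
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = ContextualIdiomClassifier()
target = "He finally kicked the bucket after a long illness."
context = "The old farmer had been unwell for years. " + target
logits = model(tokenizer(target, return_tensors="pt"),
               tokenizer(context, return_tensors="pt"))  # idiomatic vs. literal scores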
Related papers
- Layer-Wise Analysis of Self-Supervised Acoustic Word Embeddings: A Study
on Speech Emotion Recognition [54.952250732643115]
We study Acoustic Word Embeddings (AWEs), a fixed-length feature derived from continuous representations, to explore their advantages in specific tasks.
AWEs have previously shown utility in capturing acoustic discriminability.
Our findings underscore the acoustic context conveyed by AWEs and showcase the highly competitive Speech Emotion Recognition accuracies.
arXiv Detail & Related papers (2024-02-04T21:24:54Z)
- ComOM at VLSP 2023: A Dual-Stage Framework with BERTology and Unified Multi-Task Instruction Tuning Model for Vietnamese Comparative Opinion Mining [0.6522338519818377]
The ComOM shared task aims to extract comparative opinions from product reviews in Vietnamese language.
We propose a two-stage system based on fine-tuning a BERTology model for the CSI task and unified multi-task instruction tuning for the CEE task.
Experimental results show that our approach outperforms the other competitors and has achieved the top score on the official private test.
arXiv Detail & Related papers (2023-12-14T14:44:59Z)
- The Role of Chain-of-Thought in Complex Vision-Language Reasoning Task [51.47803406138838]
The study explores the effectiveness of the Chain-of-Thought approach in improving vision-language tasks.
We present the "Description then Decision" strategy, which is inspired by how humans process signals.
arXiv Detail & Related papers (2023-11-15T18:39:21Z)
- Rethinking and Improving Multi-task Learning for End-to-end Speech Translation [51.713683037303035]
We investigate the consistency between different tasks, considering different times and modules.
We find that the textual encoder primarily facilitates cross-modal conversion, but the presence of noise in speech impedes the consistency between text and speech representations.
We propose an improved multi-task learning (IMTL) approach for the ST task, which bridges the modal gap by mitigating the difference in length and representation.
arXiv Detail & Related papers (2023-11-07T08:48:46Z)
- Little Giants: Exploring the Potential of Small LLMs as Evaluation Metrics in Summarization in the Eval4NLP 2023 Shared Task [53.163534619649866]
This paper focuses on assessing the effectiveness of prompt-based techniques to empower Large Language Models to handle the task of quality estimation.
We conducted systematic experiments with various prompting techniques, including standard prompting, prompts informed by annotator instructions, and innovative chain-of-thought prompting.
Our work reveals that combining these approaches using a "small", open source model (orca_mini_v3_7B) yields competitive results.
arXiv Detail & Related papers (2023-11-01T17:44:35Z)
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs).
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- Solving Dialogue Grounding Embodied Task in a Simulated Environment using Further Masked Language Modeling [0.0]
Our proposed method employs further masked language modeling with state-of-the-art (SOTA) language models to enhance task understanding.
Our experimental results provide compelling evidence of the superiority of our proposed method.
arXiv Detail & Related papers (2023-06-21T17:17:09Z)
- Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5 [50.574918785575655]
We compare sequential fine-tuning with a model for multi-task learning in the context of boosting performance on two tasks.
Our results show that while sequential multi-task learning can be tuned to be good at the first of two target tasks, it performs less well on the second and additionally struggles with overfitting.
arXiv Detail & Related papers (2022-10-31T13:26:08Z)
- Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task [26.703809355057224]
We conduct a detailed analysis to understand the impact of the auxiliary task on the primary task within the multitask learning framework.
Our analysis confirms that multitask learning tends to generate similar decoder representations from different modalities.
Inspired by these findings, we propose three methods to improve translation quality.
arXiv Detail & Related papers (2021-07-12T23:53:40Z)
- Zhestyatsky at SemEval-2021 Task 2: ReLU over Cosine Similarity for BERT Fine-tuning [0.07614628596146598]
This paper presents our contribution to SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC).
Our experiments cover English (EN-EN) sub-track from the multilingual setting of the task.
We find that the combination of Cosine Similarity and ReLU activation leads to the most effective fine-tuning procedure.
arXiv Detail & Related papers (2021-04-13T18:28:58Z)
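As a concrete, hypothetical reading of the "ReLU over Cosine Similarity" idea in the entry above, the sketch below scores whether a target word keeps the same sense in two contexts by passing the cosine similarity of its contextual embeddings through a ReLU. The span lookup, mean pooling, and decision threshold are illustrative assumptions, not the cited paper's exact procedure.

# Hypothetical sketch: ReLU applied to the cosine similarity of contextual
# target-word embeddings for Word-in-Context disambiguation. The span lookup,
# pooling, and 0.5 threshold are assumptions made for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
relu = nn.ReLU()

def target_embedding(sentence, target_word):
    # Mean-pool the sub-token vectors of the target word (simplified lookup).
    enc = tokenizer(sentence, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state[0]
    piece_ids = tokenizer(target_word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    start = next(i for i in range(len(ids)) if ids[i:i + len(piece_ids)] == piece_ids)
    return hidden[start:start + len(piece_ids)].mean(dim=0)

e1 = target_embedding("She sat on the bank of the river.", "bank")
e2 = target_embedding("He deposited the cheque at the bank.", "bank")
score = relu(torch.cosine_similarity(e1, e2, dim=0))  # negatives clipped to 0
same_sense = score.item() > 0.5  # illustrative decision threshold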