Acquiring Frame Element Knowledge with Deep Metric Learning for Semantic
Frame Induction
- URL: http://arxiv.org/abs/2305.13944v1
- Date: Tue, 23 May 2023 11:02:28 GMT
- Title: Acquiring Frame Element Knowledge with Deep Metric Learning for Semantic
Frame Induction
- Authors: Kosuke Yamada, Ryohei Sasano, Koichi Takeda
- Abstract summary: We propose a method that applies deep metric learning to semantic frame induction tasks.
A pre-trained language model is fine-tuned to be suitable for distinguishing frame element roles.
Experimental results on FrameNet demonstrate that our method achieves substantially better performance than existing methods.
- Score: 24.486546938073907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic frame induction is defined as two tasks: clustering words
into the frames that they evoke, and clustering their arguments according to
the frame element roles that they should fill. In this paper, we address the
latter task of argument clustering, which aims to acquire frame element
knowledge, and propose a method that applies deep metric learning. In this
method, a pre-trained language model is fine-tuned to be suitable for
distinguishing frame element roles through the use of frame-annotated data, and
argument clustering is performed with embeddings obtained from the fine-tuned
model. Experimental results on FrameNet demonstrate that our method achieves
substantially better performance than existing methods.
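Below is a minimal sketch of the two-step recipe the abstract describes: fine-tune a pre-trained encoder with a deep metric learning objective so that arguments filling the same frame element role are pulled together, then cluster contextualized argument embeddings. The BERT model, the triplet-loss objective, the span handling, and the clustering settings are illustrative assumptions rather than the paper's exact configuration, and the frame-annotated triplets are toy FrameNet-style examples.

```python
# Sketch (assumed setup): metric-learning fine-tuning + argument clustering.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import AgglomerativeClustering

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-like encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def embed_argument(sentence: str, span: tuple[int, int]) -> torch.Tensor:
    """Mean-pool the subword vectors covering an argument span (character offsets)."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]
    hidden = encoder(**enc).last_hidden_state[0]  # (seq_len, hidden)
    mask = (offsets[:, 0] >= span[0]) & (offsets[:, 1] <= span[1]) & (offsets[:, 1] > 0)
    return hidden[mask].mean(dim=0)

# Step 1: metric-learning fine-tuning on frame-annotated triplets.
# Anchor and positive fill the same frame element role; negative fills a different one.
triplets = [
    (("She gave him a book.", (0, 3)),      # anchor: Donor
     ("John gave Mary a gift.", (0, 4)),    # positive: Donor
     ("She gave him a book.", (15, 19))),   # negative: Theme
]
loss_fn = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)

encoder.train()
for anchor, positive, negative in triplets:
    optimizer.zero_grad()
    loss = loss_fn(embed_argument(*anchor).unsqueeze(0),
                   embed_argument(*positive).unsqueeze(0),
                   embed_argument(*negative).unsqueeze(0))
    loss.backward()
    optimizer.step()

# Step 2: argument clustering with embeddings from the fine-tuned encoder.
encoder.eval()
arguments = [("She gave him a book.", (0, 3)),
             ("John gave Mary a gift.", (0, 4)),
             ("She gave him a book.", (15, 19))]
with torch.no_grad():
    X = torch.stack([embed_argument(s, sp) for s, sp in arguments]).numpy()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)  # arguments sharing a label are predicted to fill the same role
```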
Related papers
- Taming CLIP for Fine-grained and Structured Visual Understanding of Museum Exhibits [59.66134971408414]
We aim to adapt CLIP for fine-grained and structured understanding of museum exhibits.
Our dataset is the first of its kind in the public domain.
The proposed method (MUZE) learns to map CLIP's image embeddings to the tabular structure by means of a transformer-based parsing network (parseNet).
arXiv Detail & Related papers (2024-09-03T08:13:06Z) - Visual Prompt Selection for In-Context Learning Segmentation [77.15684360470152]
In this paper, we focus on rethinking and improving the example selection strategy.
We first demonstrate that ICL-based segmentation models are sensitive to different contexts.
Furthermore, empirical evidence indicates that the diversity of contextual prompts plays a crucial role in guiding segmentation.
arXiv Detail & Related papers (2024-07-14T15:02:54Z) - RAR: Retrieving And Ranking Augmented MLLMs for Visual Recognition [78.97487780589574]
Multimodal Large Language Models (MLLMs) excel at classifying fine-grained categories.
This paper introduces a Retrieving And Ranking augmented method for MLLMs.
Our proposed approach not only addresses the inherent limitations in fine-grained recognition but also preserves the model's comprehensive knowledge base.
arXiv Detail & Related papers (2024-03-20T17:59:55Z) - Learning Referring Video Object Segmentation from Weak Annotation [78.45828085350936]
Referring video object segmentation (RVOS) is a task that aims to segment the target object in all video frames based on a sentence describing the object.
We propose a new annotation scheme that reduces the annotation effort by 8 times, while providing sufficient supervision for RVOS.
Our scheme only requires a mask for the frame where the object first appears and bounding boxes for the rest of the frames.
arXiv Detail & Related papers (2023-08-04T06:50:52Z) - Semantic Frame Induction with Deep Metric Learning [24.486546938073907]
We propose a model that uses deep metric learning to fine-tune a contextualized embedding model.
We apply the fine-tuned contextualized embeddings to perform semantic frame induction.
arXiv Detail & Related papers (2023-04-27T15:46:09Z) - Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning [17.6573121083417]
We propose a Knowledge-Augmented Frame Semantic Parsing Architecture (KAF-SPA) to enhance semantic representation.
A Memory-based Knowledge Extraction Module (MKEM) is devised to select accurate frame knowledge and construct the continuous templates.
We also design a Task-oriented Knowledge Probing Module (TKPM) using hybrid prompts to incorporate the selected knowledge into the PLMs and adapt PLMs to the tasks of frame and argument identification.
arXiv Detail & Related papers (2023-03-25T06:41:19Z) - Query Your Model with Definitions in FrameNet: An Effective Method for
Frame Semantic Role Labeling [43.58108941071302]
Frame Semantic Role Labeling (FSRL) identifies arguments and labels them with frame roles defined in FrameNet.
We propose a query-based framework named ArGument Extractor with Definitions in FrameNet (AGED) to mitigate these problems.
arXiv Detail & Related papers (2022-12-05T05:09:12Z) - Prompt-Matched Semantic Segmentation [96.99924127527002]
The objective of this work is to explore how to effectively adapt pre-trained foundation models to various downstream tasks of image semantic segmentation.
We propose a novel Inter-Stage Prompt-Matched Framework, which maintains the original structure of the foundation model while generating visual prompts adaptively for task-oriented tuning.
A lightweight module termed Semantic-aware Prompt Matcher is then introduced to hierarchically interpolate between two stages to learn reasonable prompts for each specific task.
arXiv Detail & Related papers (2022-08-22T09:12:53Z) - Transferring Semantic Knowledge Into Language Encoders [6.85316573653194]
We introduce semantic form mid-tuning, an approach for transferring semantic knowledge from semantic meaning representations into language encoders.
We show that this alignment can be learned implicitly via classification or directly via triplet loss.
Our method yields language encoders that demonstrate improved predictive performance across inference, reading comprehension, textual similarity, and other semantic tasks.
arXiv Detail & Related papers (2021-10-14T14:11:12Z) - Semantic Frame Induction using Masked Word Embeddings and Two-Step
Clustering [9.93359829907774]
We propose a semantic frame induction method using masked word embeddings and two-step clustering.
We demonstrate that using the masked word embeddings is effective for avoiding too much reliance on the surface information of frame-evoking verbs; a brief sketch of this masking step appears after this list.
arXiv Detail & Related papers (2021-05-27T22:00:33Z) - How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences of its use.