Do LLMs Encode Frame Semantics? Evidence from Frame Identification
- URL: http://arxiv.org/abs/2509.19540v1
- Date: Tue, 23 Sep 2025 20:09:32 GMT
- Title: Do LLMs Encode Frame Semantics? Evidence from Frame Identification
- Authors: Jayanth Krishna Chundru, Rudrashis Poddar, Jie Cao, Tianyu Jiang
- Abstract summary: We investigate whether large language models encode latent knowledge of frame semantics, focusing on frame identification. We evaluate models under prompt-based inference and observe that they can perform frame identification effectively even without explicit supervision.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We investigate whether large language models encode latent knowledge of frame semantics, focusing on frame identification, a core challenge in frame semantic parsing that involves selecting the appropriate semantic frame for a target word in context. Using the FrameNet lexical resource, we evaluate models under prompt-based inference and observe that they can perform frame identification effectively even without explicit supervision. To assess the impact of task-specific training, we fine-tune the model on FrameNet data, which substantially improves in-domain accuracy while generalizing well to out-of-domain benchmarks. Further analysis shows that the models can generate semantically coherent frame definitions, highlighting their internalized understanding of frame semantics.
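The prompt-based inference setup described in the abstract can be sketched as follows. The candidate frames and definitions are real FrameNet frames, but `build_prompt` and the template wording are illustrative assumptions, not the authors' actual prompt:

```python
# Sketch of prompt-based frame identification: given a target word in
# context and its candidate FrameNet frames, build a prompt asking an
# LLM to choose the appropriate frame. The template is hypothetical.

def build_prompt(sentence: str, target: str, candidates: dict) -> str:
    lines = [
        f"Sentence: {sentence}",
        f"Target word: {target}",
        "Candidate frames:",
    ]
    for name, definition in candidates.items():
        lines.append(f"- {name}: {definition}")
    lines.append("Answer with the single best frame name.")
    return "\n".join(lines)

candidates = {
    "Commerce_buy": "A Buyer acquires Goods from a Seller in exchange for Money.",
    "Getting": "A Recipient comes into possession of a Theme.",
}
prompt = build_prompt("She bought a used car last week.", "bought", candidates)
print(prompt)
```

The model's free-text answer would then be matched against the candidate frame names; restricting candidates to the frames FrameNet lists for the target's lemma keeps the choice set small.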
Related papers
- DynaPURLS: Dynamic Refinement of Part-aware Representations for Skeleton-based Zero-Shot Action Recognition [51.80782323686666]
We introduce DynaPURLS, a unified framework that establishes robust, multi-scale visual-semantic correspondences. Our framework leverages a large language model to generate hierarchical textual descriptions that encompass both global movements and local body-part dynamics. Experiments on three large-scale benchmark datasets, including NTU RGB+D 60/120 and PKU-MMD, demonstrate that DynaPURLS significantly outperforms prior art.
arXiv Detail & Related papers (2025-12-12T10:39:10Z)
- FrameEOL: Semantic Frame Induction using Causal Language Models [18.542847631796725]
We propose a new method for semantic frame induction based on causal language models (CLMs). We leverage in-context learning (ICL) and deep metric learning (DML) to obtain embeddings more suitable for frame induction. Experimental results on the English and Japanese FrameNet demonstrate that the proposed methods outperform existing frame induction methods.
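The deep metric learning component can be illustrated with a margin-based triplet objective over contextualized embeddings; the toy vectors and this particular triplet formulation are a generic sketch, not FrameEOL's actual training setup:

```python
import math

# Generic triplet margin loss: pull an anchor embedding toward an
# example evoking the same frame (positive) and away from one evoking
# a different frame (negative). Toy 3-d vectors stand in for
# contextualized embeddings from a causal LM.

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

anchor   = [1.0, 0.0, 0.0]   # "buy" in a Commerce_buy context
positive = [0.9, 0.1, 0.0]   # "purchase", same frame
negative = [0.0, 1.0, 0.0]   # "run", different frame
print(round(triplet_loss(anchor, positive, negative), 3))  # → 0.0
```

A loss of zero here means the same-frame pair is already more than `margin` closer than the cross-frame pair; training minimizes this loss so that frame membership is recoverable from embedding distance.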
arXiv Detail & Related papers (2025-10-10T07:52:07Z)
- FrameMind: Frame-Interleaved Video Reasoning via Reinforcement Learning [65.42201665046505]
Current video understanding models rely on fixed frame sampling strategies, processing predetermined visual inputs regardless of the specific reasoning requirements of each question. This static approach limits their ability to adaptively gather visual evidence, leading to suboptimal performance on tasks that require broad temporal coverage or fine-grained spatial detail. We introduce FrameMind, an end-to-end framework trained with reinforcement learning that enables models to dynamically request visual information during reasoning through Frame-Interleaved Chain-of-Thought (FiCOT). Unlike traditional approaches, FrameMind operates in multiple turns where the model alternates between textual reasoning and active visual perception, using tools to extract
arXiv Detail & Related papers (2025-09-28T17:59:43Z)
- FOCUS: Unified Vision-Language Modeling for Interactive Editing Driven by Referential Segmentation [55.01077993490845]
Recent Large Vision Language Models (LVLMs) demonstrate promising capabilities in unifying visual understanding and generative modeling. We introduce FOCUS, a unified LVLM that integrates segmentation-aware perception and controllable object-centric generation within an end-to-end framework.
arXiv Detail & Related papers (2025-06-20T07:46:40Z)
- Language Models As Semantic Indexers [78.83425357657026]
We introduce LMIndexer, a self-supervised framework to learn semantic IDs with a generative language model.
We show the high quality of the learned IDs and demonstrate their effectiveness on three tasks including recommendation, product search, and document retrieval.
arXiv Detail & Related papers (2023-10-11T18:56:15Z)
- Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment [53.2701026843921]
Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification.
In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary.
We propose the Self Structural Semantic Alignment (S3A) framework, which extracts structural semantic information from unlabeled data while simultaneously self-learning.
arXiv Detail & Related papers (2023-08-24T17:56:46Z)
- Learning Referring Video Object Segmentation from Weak Annotation [78.45828085350936]
Referring video object segmentation (RVOS) is a task that aims to segment the target object in all video frames based on a sentence describing the object.
We propose a new annotation scheme that reduces the annotation effort by 8 times, while providing sufficient supervision for RVOS.
Our scheme only requires a mask for the frame where the object first appears and bounding boxes for the rest of the frames.
arXiv Detail & Related papers (2023-08-04T06:50:52Z)
- Acquiring Frame Element Knowledge with Deep Metric Learning for Semantic Frame Induction [24.486546938073907]
We propose a method that applies deep metric learning to semantic frame induction tasks.
A pre-trained language model is fine-tuned to be suitable for distinguishing frame element roles.
Experimental results on FrameNet demonstrate that our method achieves substantially better performance than existing methods.
arXiv Detail & Related papers (2023-05-23T11:02:28Z)
- Semantic Frame Induction with Deep Metric Learning [24.486546938073907]
We propose a model that uses deep metric learning to fine-tune a contextualized embedding model.
We apply the fine-tuned contextualized embeddings to perform semantic frame induction.
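Frame induction with the fine-tuned embeddings typically amounts to clustering word-in-context vectors so that uses evoking the same frame group together. The single-link threshold clustering and toy 2-d vectors below are an illustrative stand-in, not the paper's actual clustering procedure:

```python
import math

# Toy frame induction: cluster embeddings with single-link grouping.
# A use joins a cluster if any existing member lies within `threshold`.
# The vectors stand in for fine-tuned contextualized embeddings.

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def induce_frames(items, threshold=0.5):
    clusters = []  # each cluster is a list of (word, vector) pairs
    for word, vec in items:
        for cluster in clusters:
            if any(dist(vec, v) < threshold for _, v in cluster):
                cluster.append((word, vec))
                break
        else:
            clusters.append([(word, vec)])
    return [[w for w, _ in c] for c in clusters]

items = [
    ("buy",      [1.0, 0.0]),
    ("purchase", [0.9, 0.1]),
    ("sell",     [0.0, 1.0]),
]
print(induce_frames(items))  # → [['buy', 'purchase'], ['sell']]
```

The quality of such clusters depends almost entirely on the embedding space, which is why fine-tuning with a metric-learning objective helps: it makes distance correlate with frame identity.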
arXiv Detail & Related papers (2023-04-27T15:46:09Z)
- Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning [17.6573121083417]
We propose a Knowledge-Augmented Frame Semantic Parsing Architecture (KAF-SPA) to enhance semantic representation.
A Memory-based Knowledge Extraction Module (MKEM) is devised to select accurate frame knowledge and construct the continuous templates.
We also design a Task-oriented Knowledge Probing Module (TKPM) using hybrid prompts to incorporate the selected knowledge into the PLMs and adapt PLMs to the tasks of frame and argument identification.
arXiv Detail & Related papers (2023-03-25T06:41:19Z)
- Query Your Model with Definitions in FrameNet: An Effective Method for Frame Semantic Role Labeling [43.58108941071302]
Frame Semantic Role Labeling (FSRL) identifies arguments and labels them with frame roles defined in FrameNet.
We propose a query-based framework named ArGument Extractor with Definitions in FrameNet (AGED) to mitigate these problems.
arXiv Detail & Related papers (2022-12-05T05:09:12Z)
- Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing [57.11806632758607]
We propose to incorporate the current pretrained language models with a hierarchical decoder network.
By taking the first-principle structures as the semantic anchors, we propose two novel intermediate supervision tasks.
We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines.
arXiv Detail & Related papers (2022-10-04T07:27:29Z)
- Sister Help: Data Augmentation for Frame-Semantic Role Labeling [9.62264668211579]
We propose a data augmentation approach, which uses existing frame-specific annotation to automatically annotate other lexical units of the same frame which are unannotated.
We present experiments on frame-semantic role labeling which demonstrate the importance of this data augmentation.
arXiv Detail & Related papers (2021-09-16T05:15:29Z)
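The "sister help" augmentation idea above, reusing a sentence annotated for one lexical unit of a frame to annotate other lexical units of the same frame, can be sketched as follows; the sentence, frame, and sister list are illustrative, not drawn from the paper's data:

```python
# Sketch of sister-lexical-unit augmentation: substitute the annotated
# target with each unannotated "sister" lexical unit of the same frame,
# producing synthetic annotated examples.

def augment(sentence: str, target: str, sisters: list) -> list:
    # Naive word-level swap; real systems would also fix inflection.
    return [sentence.replace(target, s) for s in sisters]

# Illustrative Commerce_buy example: "buy" is annotated; "purchase"
# and "acquire" are hypothetical unannotated sister lexical units.
examples = augment("She will buy the house.", "buy", ["purchase", "acquire"])
for ex in examples:
    print(ex)
```

Because the sisters evoke the same frame, the original frame and role annotations carry over to the synthetic sentences, which is what supplies the extra supervision.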
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.