Lutma: a Frame-Making Tool for Collaborative FrameNet Development
- URL: http://arxiv.org/abs/2205.11840v1
- Date: Tue, 24 May 2022 07:04:43 GMT
- Title: Lutma: a Frame-Making Tool for Collaborative FrameNet Development
- Authors: Tiago Timponi Torrent, Arthur Lorenzi, Ely Edison da Silva Matos,
Frederico Belcavello, Marcelo Viridiano, Maucha Andrade Gamonal
- Abstract summary: This paper presents Lutma, a collaborative tool for contributing frames and lexical units to the Global FrameNet initiative.
We argue that this tool will allow for a sensible expansion of FrameNet coverage in terms of both languages and cultural perspectives encoded by them.
- Score: 0.9786690381850356
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents Lutma, a collaborative, semi-constrained, tutorial-based
tool for contributing frames and lexical units to the Global FrameNet
initiative. The tool parameterizes the process of frame creation, avoiding
consistency violations and promoting the integration of frames contributed by
the community with existing frames. Lutma is structured in a wizard-like
fashion so as to provide users with text and video tutorials relevant for each
step in the frame creation process. We argue that this tool will allow for a
sensible expansion of FrameNet coverage in terms of both languages and cultural
perspectives encoded by them, positioning frames as a viable alternative for
representing perspective in language models.
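The abstract does not describe Lutma's internals at code level, but the consistency checks it mentions can be pictured with a small sketch. Below is a hypothetical Python data model for a candidate frame, with a validator that rejects common violations such as duplicate frame elements or core elements dropped from an inherited frame; all names (`Frame`, `FrameElement`, `validate`) are illustrative assumptions, not Lutma's actual API.
```python
from dataclasses import dataclass, field

@dataclass
class FrameElement:
    name: str
    definition: str
    core: bool = False

@dataclass
class Frame:
    name: str
    definition: str
    elements: list[FrameElement] = field(default_factory=list)
    inherits_from: "Frame | None" = None

def validate(frame: Frame) -> list[str]:
    """Return consistency violations; an empty list means the frame may be submitted."""
    errors = []
    names = [fe.name for fe in frame.elements]
    if len(names) != len(set(names)):
        errors.append("duplicate frame element names")
    if not any(fe.core for fe in frame.elements):
        errors.append("at least one core frame element is required")
    if frame.inherits_from is not None:
        parent_core = {fe.name for fe in frame.inherits_from.elements if fe.core}
        missing = parent_core - set(names)
        if missing:
            errors.append(f"core elements not carried over from parent: {sorted(missing)}")
    return errors

# A wizard step could run the validator before accepting a contribution:
motion = Frame("Motion", "A Theme changes location.",
               [FrameElement("Theme", "The moving entity", core=True)])
candidate = Frame("Self_motion", "A Self_mover moves under its own power.",
                  [FrameElement("Self_mover", "The moving entity", core=True)],
                  inherits_from=motion)
print(validate(candidate))  # ["core elements not carried over from parent: ['Theme']"]
```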
Related papers
- Visual Representation Learning with Stochastic Frame Prediction [90.99577838303297]
This paper revisits the idea of video generation that learns to capture uncertainty in frame prediction.
We design a framework that trains a frame prediction model to learn temporal information between frames.
We find this architecture allows for combining both objectives in a synergistic and compute-efficient manner.
arXiv Detail & Related papers (2024-06-11T16:05:15Z)
- VASE: Object-Centric Appearance and Shape Manipulation of Real Videos [108.60416277357712]
In this work, we introduce a framework that is object-centric and is designed to control both the object's appearance and, notably, to execute precise and explicit structural modifications on the object.
We build our framework on a pre-trained image-conditioned diffusion model, integrate layers to handle the temporal dimension, and propose training strategies and architectural modifications to enable shape control.
We evaluate our method on the image-driven video editing task showing similar performance to the state-of-the-art, and showcasing novel shape-editing capabilities.
arXiv Detail & Related papers (2024-01-04T18:59:24Z)
- FrameFinder: Explorative Multi-Perspective Framing Extraction from News Headlines [3.3181276611945263]
We present FrameFinder, an open tool for extracting and analyzing frames in textual data.
By analyzing the well-established gun violence frame corpus, we demonstrate the merits of our proposed solution.
arXiv Detail & Related papers (2023-12-14T14:41:37Z)
- Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z)
- Acquiring Frame Element Knowledge with Deep Metric Learning for Semantic Frame Induction [24.486546938073907]
We propose a method that applies deep metric learning to semantic frame induction tasks.
A pre-trained language model is fine-tuned to be suitable for distinguishing frame element roles.
Experimental results on FrameNet demonstrate that our method achieves substantially better performance than existing methods.
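As an illustration of the deep metric learning idea, here is a minimal sketch that fine-tunes a pre-trained language model with a triplet loss so that sentences evoking the same frame embed close together. The [CLS] vector is used as a cheap stand-in for the target-word embedding, and the sentences are toy data; the paper's actual loss variants and target-word extraction are assumptions here.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
triplet = torch.nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def embed(sentence: str) -> torch.Tensor:
    # [CLS] vector as a stand-in for the frame-evoking target word's embedding.
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    return model(**inputs).last_hidden_state[:, 0]

# Toy triplet: anchor and positive evoke the same (buying) frame, negative does not.
anchor = embed("She bought a used car from the dealer.")
positive = embed("He purchased the house last spring.")
negative = embed("The glass shattered on the floor.")

loss = triplet(anchor, positive, negative)  # pull same-frame pairs together
loss.backward()
optimizer.step()
```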
arXiv Detail & Related papers (2023-05-23T11:02:28Z)
- Meta-Interpolation: Time-Arbitrary Frame Interpolation via Dual Meta-Learning [65.85319901760478]
We consider processing different time-steps with adaptively generated convolutional kernels in a unified way with the help of meta-learning.
We develop a dual meta-learned frame interpolation framework to synthesize intermediate frames with the guidance of context information and optical flow.
arXiv Detail & Related papers (2022-07-27T17:36:23Z)
- Frame Shift Prediction [1.4699455652461724]
Frame shift is a cross-linguistic phenomenon in translation which results in corresponding pairs of linguistic material evoking different frames.
The ability to predict frame shifts enables automatic creation of multilingual FrameNets through annotation projection.
arXiv Detail & Related papers (2022-01-05T22:03:06Z)
- Sister Help: Data Augmentation for Frame-Semantic Role Labeling [9.62264668211579]
We propose a data augmentation approach that uses existing frame-specific annotation to automatically annotate other, as-yet unannotated lexical units of the same frame.
We present experiments on frame-semantic role labeling which demonstrate the importance of this data augmentation.
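The augmentation can be sketched in a few lines: given a sentence annotated for one lexical unit, substitute its "sister" lexical units from the same frame to produce silver-annotated copies. The toy lexicon below is illustrative, and the paper's pipeline additionally handles inflection and filtering that this sketch omits.
```python
# Toy frame lexicon: frame name -> lexical units (LUs) that evoke it.
FRAME_LUS = {"Commerce_buy": ["buy", "purchase", "acquire"]}

def sister_augment(sentence: str, target_lu: str, frame: str) -> list[str]:
    """Create silver-annotated copies by swapping in sister LUs of the same frame."""
    sisters = [lu for lu in FRAME_LUS[frame] if lu != target_lu]
    return [sentence.replace(target_lu, lu) for lu in sisters]

# One gold annotation yields two extra training sentences for unannotated LUs.
print(sister_augment("She will buy the tickets tomorrow.", "buy", "Commerce_buy"))
# ['She will purchase the tickets tomorrow.', 'She will acquire the tickets tomorrow.']
```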
arXiv Detail & Related papers (2021-09-16T05:15:29Z)
- EA-Net: Edge-Aware Network for Flow-based Video Frame Interpolation [101.75999290175412]
We propose to reduce image blur and recover clear object shapes by preserving the edges in the interpolated frames.
The proposed Edge-Aware Network (EANet) integrates the edge information into the frame interpolation task.
Three edge-aware mechanisms are developed to emphasize the frame edges in estimating flow maps.
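One generic way to make a flow estimator edge-aware (a simplification, not a reproduction of EANet's three mechanisms) is to compute edge maps and feed them alongside the frames, as in this minimal PyTorch sketch:
```python
import torch
import torch.nn.functional as F

def sobel_edges(gray: torch.Tensor) -> torch.Tensor:
    """Edge magnitude for a batch of grayscale frames, shape (B, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

frame0 = torch.rand(1, 1, 64, 64)  # toy grayscale input frames
frame1 = torch.rand(1, 1, 64, 64)

# Stack frames with their edge maps so the flow head "sees" object boundaries.
inp = torch.cat([frame0, frame1, sobel_edges(frame0), sobel_edges(frame1)], dim=1)
flow_head = torch.nn.Conv2d(4, 2, kernel_size=3, padding=1)  # predicts (u, v) per pixel
flow = flow_head(inp)  # (1, 2, 64, 64) flow map
```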
arXiv Detail & Related papers (2021-05-17T08:44:34Z)
- LIFI: Towards Linguistically Informed Frame Interpolation [66.05105400951567]
We approach frame interpolation by using several deep learning video generation algorithms to generate the missing frames.
We release several datasets to test computer vision video generation models on their speech understanding.
arXiv Detail & Related papers (2020-10-30T05:02:23Z)