Frame Shift Prediction
- URL: http://arxiv.org/abs/2201.01837v1
- Date: Wed, 5 Jan 2022 22:03:06 GMT
- Title: Frame Shift Prediction
- Authors: Zheng-Xin Yong, Patrick D. Watson, Tiago Timponi Torrent, Oliver
Czulo, Collin F. Baker
- Abstract summary: Frame shift is a cross-linguistic phenomenon in translation which results in corresponding pairs of linguistic material evoking different frames.
The ability to predict frame shifts enables automatic creation of multilingual FrameNets through annotation projection.
- Score: 1.4699455652461724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Frame shift is a cross-linguistic phenomenon in translation which results in
corresponding pairs of linguistic material evoking different frames. The
ability to predict frame shifts enables automatic creation of multilingual
FrameNets through annotation projection. Here, we propose the Frame Shift
Prediction task and demonstrate that graph attention networks, combined with
auxiliary training, can learn cross-linguistic frame-to-frame correspondence
and predict frame shifts.
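The paper's own code is not reproduced here, but a minimal sketch of the kind of graph attention layer the abstract describes may help; the frame graph, the feature sizes, and the dot-product pair scorer below are illustrative assumptions, not the authors' implementation.

```python
# Minimal single-head graph attention layer (Velickovic et al., 2018),
# sketching the kind of model the abstract describes. The frame graph,
# feature sizes, and the binary pair scorer are assumptions, not the
# authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, in_dim) node features, one node per frame
        # adj: (N, N) binary adjacency from frame-to-frame relations
        h = self.proj(x)                                     # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1),
             h.unsqueeze(0).expand(n, n, -1)], dim=-1)       # (N, N, 2*out_dim)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))  # (N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)                # attention weights
        return F.elu(alpha @ h)

# Toy usage: score whether a source/target frame pair is a candidate shift.
frames = torch.randn(10, 64)            # 10 frame nodes, 64-dim embeddings
adj = (torch.rand(10, 10) > 0.5).float()
adj.fill_diagonal_(1.0)                 # self-loops keep the softmax defined
gat = GATLayer(64, 32)
node_repr = gat(frames, adj)
pair_score = (node_repr[0] * node_repr[1]).sum()  # simple dot-product scorer
```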
Related papers
- Visual Representation Learning with Stochastic Frame Prediction [90.99577838303297]
This paper revisits the idea of stochastic video generation that learns to capture uncertainty in frame prediction.
We design a framework that trains a frame prediction model to learn temporal information between frames.
We find this architecture allows combining the representation learning and frame prediction objectives in a synergistic and compute-efficient manner.
arXiv Detail & Related papers (2024-06-11T16:05:15Z) - TTVFI: Learning Trajectory-Aware Transformer for Video Frame Interpolation [50.49396123016185]
Video frame interpolation (VFI) aims to synthesize an intermediate frame between two consecutive frames.
We propose a novel Trajectory-aware Transformer for Video Frame Interpolation (TTVFI).
Our method outperforms other state-of-the-art methods in four widely-used VFI benchmarks.
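As background for what VFI systems must do, here is the task in its simplest possible form; the linear blend below is a deliberately naive baseline under a no-motion assumption, not TTVFI's trajectory-aware method.

```python
# The VFI task in its simplest form: given frames x0 and x1, synthesize
# the frame halfway between them. This naive blend is only a task
# illustration; real VFI models predict motion (e.g. along trajectories)
# and warp rather than averaging.
import torch

def naive_midpoint(x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
    # Acceptable only when inter-frame motion is negligible.
    return 0.5 * (x0 + x1)

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
mid = naive_midpoint(a, b)
```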
arXiv Detail & Related papers (2022-07-19T03:37:49Z) - Optimizing Video Prediction via Video Frame Interpolation [53.16726447796844]
We present a new optimization framework for video prediction via video frame interpolation, inspired by the photo-realistic results of video frame interpolation.
Our framework is based on optimization with a pretrained differentiable video frame interpolation module without the need for a training dataset.
Our approach outperforms other video prediction methods that require a large amount of training data or extra semantic information.
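A minimal sketch of the optimization idea, assuming a pretrained differentiable interpolator with the hypothetical signature vfi(frame_a, frame_b) -> midpoint; the initialization, loss, learning rate, and step count are placeholders rather than the paper's settings.

```python
# Sketch: optimize a future frame so that interpolating between the last
# observed frame and the candidate future frame reproduces the current
# frame, backpropagating through a frozen interpolation network.
import torch

def predict_next_frame(vfi, prev_frame, cur_frame, steps=200, lr=0.05):
    # prev_frame, cur_frame: (1, 3, H, W) tensors in [0, 1]
    future = cur_frame.clone().requires_grad_(True)  # init with last frame
    opt = torch.optim.Adam([future], lr=lr)
    for _ in range(steps):
        midpoint = vfi(prev_frame, future)           # interpolated middle frame
        loss = torch.nn.functional.l1_loss(midpoint, cur_frame)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return future.detach()

# Stand-in interpolator for a smoke test; a real pretrained VFI model
# would be plugged in here instead.
vfi = lambda a, b: 0.5 * (a + b)
prev_f, cur_f = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
nxt = predict_next_frame(vfi, prev_f, cur_f, steps=50)
```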
arXiv Detail & Related papers (2022-06-27T17:03:46Z) - A Double-Graph Based Framework for Frame Semantic Parsing [23.552054033442545]
Frame semantic parsing is a fundamental NLP task, which consists of three subtasks: frame identification, argument identification and role classification.
Most previous studies tend to neglect relations between different subtasks and arguments and pay little attention to ontological frame knowledge.
In this paper, we propose a Knowledge-guided semantic Parsing framework with Double-graph (KID).
Our experiments show KID outperforms the previous state-of-the-art method by up to 1.7 F1-score on two FrameNet datasets.
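The summary does not spell out how KID's two graphs are built, but a toy sketch of the general double-graph idea (an ontological frame-relation graph plus a sentence-level graph over frames and argument spans) may clarify; all node and edge names below are illustrative assumptions.

```python
# Toy sketch of the two graph views a knowledge-guided parser can exploit.
# The frame relations shown do exist in FrameNet, but the representation
# is illustrative, not KID's actual data structures.
frame_graph = {
    ("Commerce_buy", "Getting"): "Inheritance",
    ("Commerce_buy", "Commerce_goods-transfer"): "Perspective_on",
}
sentence_graph = {
    # (node, node): relation within one sentence's analysis
    ("buy", "Commerce_buy"): "evokes",
    ("Commerce_buy", "She"): "Buyer",
    ("Commerce_buy", "a car"): "Goods",
}

def neighbors(graph, node):
    """Collect (other, relation) pairs touching `node` in either slot."""
    out = []
    for (a, b), rel in graph.items():
        if a == node:
            out.append((b, rel))
        elif b == node:
            out.append((a, rel))
    return out

print(neighbors(sentence_graph, "Commerce_buy"))
```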
arXiv Detail & Related papers (2022-06-18T09:39:38Z) - Lutma: a Frame-Making Tool for Collaborative FrameNet Development [0.9786690381850356]
This paper presents Lutma, a collaborative tool for contributing frames and lexical units to the Global FrameNet initiative.
We argue that this tool will allow for a sensible expansion of FrameNet coverage in terms of both languages and cultural perspectives encoded by them.
arXiv Detail & Related papers (2022-05-24T07:04:43Z) - Sister Help: Data Augmentation for Frame-Semantic Role Labeling [9.62264668211579]
We propose a data augmentation approach which uses existing frame-specific annotation to automatically annotate other, unannotated lexical units of the same frame.
We present experiments on frame-semantic role labeling which demonstrate the importance of this data augmentation.
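The mechanism is simple enough to sketch directly: copy an existing annotation onto sister lexical units of the same frame. The toy lexicon and tokenized input below are assumptions for illustration, not the paper's pipeline.

```python
# Sketch of the "sister lexical unit" augmentation idea: reuse an existing
# frame annotation by swapping the annotated target for another lexical
# unit that evokes the same frame.
FRAME_LUS = {
    "Commerce_buy": ["buy", "purchase", "acquire"],
}

def augment(tokens, target_idx, frame):
    """Yield new sentences with the target swapped for sister LUs."""
    original = tokens[target_idx]
    for lu in FRAME_LUS.get(frame, []):
        if lu != original:
            new_tokens = list(tokens)
            new_tokens[target_idx] = lu
            yield new_tokens  # role spans can be copied over unchanged

sent = ["She", "will", "buy", "a", "car"]
for aug in augment(sent, 2, "Commerce_buy"):
    print(" ".join(aug))
```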
arXiv Detail & Related papers (2021-09-16T05:15:29Z) - Learning Semantic-Aware Dynamics for Video Prediction [68.04359321855702]
We propose an architecture and training scheme to predict video frames by explicitly modeling dis-occlusions.
The appearance of the scene is warped from past frames using the predicted motion in co-visible regions.
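The warping step the summary mentions is a standard backward warp; a minimal PyTorch version is sketched below, with a dummy zero-flow check. The flow field would come from the paper's motion predictor, which is not reproduced here.

```python
# Minimal backward-warping sketch: move appearance from a past frame into
# the current frame using a predicted flow field, the core operation for
# co-visible regions. Frame and flow here are dummies.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    # frame: (B, C, H, W); flow: (B, 2, H, W) in pixels (dx, dy)
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    coords = grid + flow                                      # sample positions
    # normalize to [-1, 1]; grid_sample expects (B, H, W, 2) as (x, y)
    coords[:, 0] = 2 * coords[:, 0] / (w - 1) - 1
    coords[:, 1] = 2 * coords[:, 1] / (h - 1) - 1
    return F.grid_sample(frame, coords.permute(0, 2, 3, 1),
                         align_corners=True)

past = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)  # zero flow: output should equal input
assert torch.allclose(warp(past, flow), past, atol=1e-5)
```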
arXiv Detail & Related papers (2021-04-20T05:00:24Z) - Deep Sketch-guided Cartoon Video Inbetweening [24.00033622396297]
We propose a framework to produce cartoon videos by fetching the color information from two inputs while following the animated motion guided by a user sketch.
By explicitly considering the correspondence between frames and the sketch, we can achieve higher quality results than other image synthesis methods.
arXiv Detail & Related papers (2020-08-10T14:22:04Z) - Image Morphing with Perceptual Constraints and STN Alignment [70.38273150435928]
We propose a conditional GAN morphing framework operating on a pair of input images.
A special training protocol produces sequences of frames which, combined with a perceptual similarity loss, promote smooth transformation over time.
We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self-supervision, our network learns to generate visually pleasing morphing effects.
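A common form of perceptual similarity loss, and likely close in spirit to the one used here, is a distance between frozen pretrained-network features; the VGG-16 layer cut-off and the lack of per-layer weighting below are assumptions, not the paper's exact recipe.

```python
# Sketch of a perceptual similarity loss: L2 distance between frozen
# VGG-16 features of two images (requires torchvision >= 0.13 and a
# one-time pretrained-weight download).
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(a, b):
    # a, b: (B, 3, H, W) images normalized to ImageNet statistics
    return torch.nn.functional.mse_loss(vgg(a), vgg(b))

x = torch.rand(1, 3, 128, 128)
y = torch.rand(1, 3, 128, 128)
print(perceptual_loss(x, y).item())
```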
arXiv Detail & Related papers (2020-04-29T10:49:10Z) - SF-Net: Single-Frame Supervision for Temporal Action Localization [60.202516362976645]
Single-frame supervision introduces extra temporal action signals while maintaining low annotation overhead.
We propose a unified system called SF-Net to make use of such single-frame supervision.
SF-Net significantly improves upon state-of-the-art weakly-supervised methods in terms of both segment localization and single-frame localization.
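One way to make a single labeled frame usable for training, in the spirit of single-frame supervision, is to grow a pseudo-labeled segment around it while per-frame action scores stay high; the threshold and scores below are illustrative assumptions, not SF-Net's actual mining strategy.

```python
# Sketch: expand one labeled frame into a pseudo-labeled segment by
# extending left and right while per-frame action confidence stays
# above a threshold.
def expand_from_frame(scores, t, thresh=0.5):
    """scores: per-frame action confidences; t: index of labeled frame."""
    left = t
    while left > 0 and scores[left - 1] >= thresh:
        left -= 1
    right = t
    while right < len(scores) - 1 and scores[right + 1] >= thresh:
        right += 1
    return left, right  # pseudo-labeled segment [left, right]

scores = [0.1, 0.2, 0.7, 0.9, 0.8, 0.6, 0.2]
print(expand_from_frame(scores, 3))  # -> (2, 5)
```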
arXiv Detail & Related papers (2020-03-15T15:06:01Z)