GAZED: Gaze-guided Cinematic Editing of Wide-Angle Monocular Video Recordings
- URL: http://arxiv.org/abs/2010.11886v1
- Date: Thu, 22 Oct 2020 17:27:03 GMT
- Title: GAZED: Gaze-guided Cinematic Editing of Wide-Angle Monocular Video Recordings
- Authors: K L Bhanu Moorthy, Moneish Kumar, Ramanathan Subramanian, Vineet Gandhi
- Abstract summary: We present GAZED (eye GAZe-guided EDiting) for videos captured by a solitary, static, wide-angle and high-resolution camera.
Eye-gaze has been effectively employed in computational applications as a cue to capture interesting scene content.
- Score: 6.980491499722598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present GAZED (eye GAZe-guided EDiting) for videos captured by a
solitary, static, wide-angle and high-resolution camera. Eye-gaze has been effectively
employed in computational applications as a cue to capture interesting scene
content; we employ gaze as a proxy to select shots for inclusion in the edited
video. Given the original video, scene content and user eye-gaze tracks are
combined to generate an edited video comprising cinematically valid actor shots
and shot transitions to generate an aesthetic and vivid representation of the
original narrative. We model cinematic video editing as an energy minimization
problem over shot selection, whose constraints capture cinematographic editing
conventions. Gazed scene locations primarily determine the shots constituting
the edited video. The effectiveness of GAZED against multiple competing methods is
demonstrated via a psychophysical study involving 12 users and 12 performance
videos.
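The energy minimization over shot selection described above can be solved exactly with dynamic programming (Viterbi decoding), since the energy decomposes into a per-frame shot cost and a pairwise transition penalty. The following is a minimal illustrative sketch, not the paper's implementation: the function name and the toy costs are hypothetical stand-ins for the paper's gaze-driven unary terms and cinematographic transition penalties.

```python
# Illustrative sketch: shot selection as energy minimization via dynamic
# programming (Viterbi). Cost values below are hypothetical stand-ins for
# the gaze-based and cinematographic terms described in the paper.

def select_shots(unary_cost, transition_cost):
    """Return the minimum-energy shot sequence.

    unary_cost[t][s]  -- cost of showing shot s at frame t
                         (e.g. distance of gaze from the shot's framing).
    transition_cost[p][s] -- penalty for cutting from shot p to shot s
                             (zero when p == s).
    """
    T, S = len(unary_cost), len(unary_cost[0])
    best = [list(unary_cost[0])]  # best[t][s]: min energy of a path ending in s
    back = []                     # backpointers for path recovery
    for t in range(1, T):
        row, ptr = [], []
        for s in range(S):
            prev = min(range(S),
                       key=lambda p: best[-1][p] + transition_cost[p][s])
            row.append(best[-1][prev] + transition_cost[prev][s]
                       + unary_cost[t][s])
            ptr.append(prev)
        best.append(row)
        back.append(ptr)
    # Trace the optimal sequence backwards from the cheapest final state.
    s = min(range(S), key=lambda x: best[-1][x])
    seq = [s]
    for ptr in reversed(back):
        s = ptr[s]
        seq.append(s)
    return seq[::-1]

# Two candidate shots over four frames; gaze favours shot 0, then shot 1.
unary = [[0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]]
trans = [[0.0, 0.5], [0.5, 0.0]]
print(select_shots(unary, trans))  # -> [0, 0, 1, 1]: a single cut at frame 2
```

The transition penalty is what enforces editing conventions: raising it discourages rapid cutting, while the unary term keeps the selected shot near the gazed scene locations.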
Related papers
- DeCo: Decoupled Human-Centered Diffusion Video Editing with Motion Consistency [66.49423641279374]
We introduce DeCo, a novel video editing framework specifically designed to treat humans and the background as separate editable targets.
We propose a decoupled dynamic human representation that utilizes a human-body prior to generate tailored humans.
We extend the calculation of score distillation sampling into normal space and image space to enhance the texture of humans during the optimization.
arXiv Detail & Related papers (2024-08-14T11:53:40Z) - ReVideo: Remake a Video with Motion and Content Control [67.5923127902463]
We present a novel attempt to remake a video (ReVideo), which allows precise video editing in specific areas through the specification of both content and motion.
ReVideo addresses a new task involving the coupling and training imbalance between content and motion control.
Our method can also seamlessly extend these applications to multi-area editing without modifying specific training, demonstrating its flexibility and robustness.
arXiv Detail & Related papers (2024-05-22T17:46:08Z) - DreamMotion: Space-Time Self-Similar Score Distillation for Zero-Shot Video Editing [48.238213651343784]
Video score distillation can introduce new content indicated by target text, but can also cause structure and motion deviation.
We propose to match space-time self-similarities of the original video and the edited video during the score distillation.
Our approach is model-agnostic, which can be applied for both cascaded and non-cascaded video diffusion frameworks.
arXiv Detail & Related papers (2024-03-18T17:38:53Z) - Action Reimagined: Text-to-Pose Video Editing for Dynamic Human Actions [49.14827857853878]
ReimaginedAct comprises video understanding, reasoning, and editing modules.
Our method can accept not only direct instructional text prompts but also 'what if' questions to predict possible action changes.
arXiv Detail & Related papers (2024-03-11T22:46:46Z) - UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing [28.140945021777878]
We present UniEdit, a tuning-free framework that supports both video motion and appearance editing.
To realize motion editing while preserving source video content, we introduce auxiliary motion-reference and reconstruction branches.
The obtained features are then injected into the main editing path via temporal and spatial self-attention layers.
arXiv Detail & Related papers (2024-02-20T17:52:12Z) - SAVE: Protagonist Diversification with Structure Agnostic Video Editing [29.693364686494274]
Previous works usually perform well on simple and consistent shapes, but easily collapse on difficult targets whose body shape differs largely from the original.
We propose motion personalization that isolates the motion from a single source video and then modifies the protagonist accordingly.
We also regulate the motion word to attend to proper motion-related areas by introducing a novel pseudo optical flow.
arXiv Detail & Related papers (2023-12-05T05:13:20Z) - Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts [116.05656635044357]
We propose a generic video editing framework called Make-A-Protagonist.
Specifically, we leverage multiple experts to parse source video, target visual and textual clues, and propose a visual-textual-based video generation model.
Results demonstrate the versatile and remarkable editing capabilities of Make-A-Protagonist.
arXiv Detail & Related papers (2023-05-15T17:59:03Z) - The Anatomy of Video Editing: A Dataset and Benchmark Suite for AI-Assisted Video Editing [90.59584961661345]
This work introduces the Anatomy of Video Editing, a dataset, and benchmark, to foster research in AI-assisted video editing.
Our benchmark suite focuses on video editing tasks, beyond visual effects, such as automatic footage organization and assisted video assembling.
To enable research on these fronts, we annotate more than 1.5M tags, with relevant concepts to cinematography, from 196176 shots sampled from movie scenes.
arXiv Detail & Related papers (2022-07-20T10:53:48Z) - Automatic Non-Linear Video Editing Transfer [7.659780589300858]
We propose an automatic approach that extracts editing styles in a source video and applies the edits to matched footage for video creation.
Our computer-vision-based technique considers the framing, content type, playback speed, and lighting of each input video segment.
arXiv Detail & Related papers (2021-05-14T17:52:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.