Image Processing for Motion Magnification
- URL: http://arxiv.org/abs/2411.09555v1
- Date: Thu, 14 Nov 2024 16:07:04 GMT
- Title: Image Processing for Motion Magnification
- Authors: Nadaniela Egidi, Josephin Giacomini, Paolo Leonesi, Pierluigi Maponi, Federico Mearelli, Edin Trebovic
- Abstract summary: Motion Magnification (MM) is a collection of relatively recent techniques within the realm of Image Processing.
We propose a numerical technique using the Phase-Based Motion Magnification which analyses the video sequence in the Fourier Domain.
We present preliminary experiments, focusing on basic tests constructed using synthetic images.
- Score: 0.0
- License:
- Abstract: Motion Magnification (MM) is a collection of relatively recent techniques within the realm of Image Processing. The main motivation for introducing these techniques is to support the human visual system in capturing relevant displacements of an object of interest; these motions can occur in object color or in object location. The goal is to suitably process a video sequence so as to obtain as output a new video in which motions are magnified and visible to the viewer. We propose a numerical technique using Phase-Based Motion Magnification, which analyses the video sequence in the Fourier domain and relies on the Fourier Shifting Property. We describe the mathematical foundation of this method and its implementation in a numerical algorithm. We present preliminary experiments, focusing on basic tests constructed using synthetic images.
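The Fourier Shifting Property mentioned in the abstract states that translating a signal by delta multiplies its spectrum by a linear phase ramp: if f(x) has transform F(omega), then f(x - delta) has transform e^{-i omega delta} F(omega). The phase difference between two frames is therefore proportional to the displacement, and rescaling it magnifies the motion. Below is a minimal NumPy sketch of this idea; it is not the authors' exact algorithm (the function name and the synthetic test are illustrative), and it is exact only for a global periodic translation, whereas practical phase-based methods apply the same step per band of a complex steerable pyramid.

```python
import numpy as np

def magnify_motion(ref, frame, alpha):
    # Spectra of the reference frame and the current frame.
    F_ref = np.fft.fft2(ref)
    F_cur = np.fft.fft2(frame)
    # Wrapped phase difference; for a pure shift delta it equals -omega*delta.
    dphi = np.angle(F_cur * np.conj(F_ref))
    # Amplifying the phase change by (1 + alpha) turns a shift delta into a
    # shift (1 + alpha)*delta, by the Fourier Shifting Property.
    F_mag = F_cur * np.exp(1j * alpha * dphi)
    return np.real(np.fft.ifft2(F_mag))

# Synthetic test in the spirit of the paper's experiments: a Gaussian blob
# displaced by half a pixel; with alpha = 10 the output blob appears shifted
# by roughly (1 + alpha) * 0.5 = 5.5 pixels.
y, x = np.mgrid[0:64, 0:64].astype(float)
ref = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 20.0)
cur = np.exp(-((x - 32.5) ** 2 + (y - 32.0) ** 2) / 20.0)
magnified = magnify_motion(ref, cur, alpha=10.0)
```

Phase wrapping at high frequencies limits how large alpha times delta can grow before artifacts appear, which is one reason practical implementations process localized frequency bands rather than the global spectrum.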
Related papers
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We then distill the knowledge from the finetuned model into a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z) - Generative Image Dynamics [80.70729090482575]
We present an approach to modeling an image-space prior on scene motion.
Our prior is learned from a collection of motion trajectories extracted from real video sequences.
arXiv Detail & Related papers (2023-09-14T17:54:01Z) - MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field [42.236015785792965]
We present MovingParts, a NeRF-based method for dynamic scene reconstruction and part discovery.
Under the Lagrangian view, we parameterize the scene motion by tracking the trajectory of particles on objects.
The Lagrangian view makes it convenient to discover parts by factorizing the scene motion as a composition of part-level rigid motions.
arXiv Detail & Related papers (2023-03-10T05:06:30Z) - AIMusicGuru: Music Assisted Human Pose Correction [8.020211030279686]
We present a method that leverages the strong causal relationship between the sound produced and the motion that produces it.
We use the audio signature to refine and predict accurate human body pose motion models.
We also open-source MAPdat, a new multi-modal dataset of 3D violin playing motion with music.
arXiv Detail & Related papers (2022-03-24T03:16:42Z) - Render In-between: Motion Guided Video Synthesis for Action Interpolation [53.43607872972194]
We propose a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
A novel motion model is trained to infer the non-linear skeletal motion between frames by leveraging a large-scale motion-capture dataset.
Our pipeline requires only low-frame-rate videos and unpaired human motion data for training; high-frame-rate videos are not needed.
arXiv Detail & Related papers (2021-11-01T15:32:51Z) - NeuralDiff: Segmenting 3D objects that move in egocentric videos [92.95176458079047]
We study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground.
This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion.
In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them.
arXiv Detail & Related papers (2021-10-19T12:51:35Z) - Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par with or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z) - Animating Pictures with Eulerian Motion Fields [90.30598913855216]
We present a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results (a simplified sketch appears after this list).
arXiv Detail & Related papers (2020-11-30T18:59:06Z) - MotionSqueeze: Neural Motion Feature Learning for Video Understanding [46.82376603090792]
Motion plays a crucial role in understanding videos and most state-of-the-art neural models for video classification incorporate motion information.
In this work, we replace external and heavy computation of optical flows with internal and light-weight learning of motion features.
We demonstrate that the proposed method provides a significant gain on four standard benchmarks for action recognition at only a small additional cost.
arXiv Detail & Related papers (2020-07-20T08:30:14Z)
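As referenced in the Animating Pictures entry above, here is a simplified pixel-space sketch of the forward/backward looping blend. The actual method warps deep features with a learned Eulerian motion field, so the flow field, the function name, and the parameters below are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def looping_frames(img, flow_y, flow_x, n):
    # Crossfade a copy of the image advected forward from frame 0 with a copy
    # advected backward from frame n; the endpoints match, so the clip loops.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    frames = []
    for t in range(n + 1):
        # Backward-warping lookups stand in for the paper's feature splatting.
        fwd = map_coordinates(img, [yy - t * flow_y, xx - t * flow_x],
                              order=1, mode='wrap')
        bwd = map_coordinates(img, [yy + (n - t) * flow_y, xx + (n - t) * flow_x],
                              order=1, mode='wrap')
        w_t = t / n
        frames.append((1.0 - w_t) * fwd + w_t * bwd)  # linear blend of the two
    return frames
```

At t = 0 the forward copy is the untouched image with full weight, and at t = n the backward copy is, so the first and last frames coincide and the sequence can repeat seamlessly.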
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.