Spatiotemporal Motion Synchronization for Snowboard Big Air
- URL: http://arxiv.org/abs/2112.10909v1
- Date: Mon, 20 Dec 2021 23:30:33 GMT
- Title: Spatiotemporal Motion Synchronization for Snowboard Big Air
- Authors: Seiji Matsumura, Dan Mikami, Naoki Saijo, Makio Kashino
- Abstract summary: We propose a conventional but plausible solution using existing image processing techniques for snowboard big air training.
We conducted interviews with expert snowboarders who stated that the spatiotemporally aligned videos enabled them to precisely identify slight differences in body movements.
- Score: 6.054558868204333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: During the training for snowboard big air, one of the most popular winter
sports, athletes and coaches extensively shoot and check their jump attempts
using a single camera or smartphone. However, by watching videos sequentially,
it is difficult to compare the precise difference in performance between two
trials. Therefore, side-by-side display or overlay of two videos may be helpful
for training. To accomplish this, the spatial and temporal alignment of
multiple performances must be ensured. In this study, we propose a conventional
but plausible solution using the existing image processing techniques for
snowboard big air training. We conducted interviews with expert snowboarders
who stated that the spatiotemporally aligned videos enabled them to precisely
identify slight differences in body movements. The results suggest that the
proposed method can be used during the training of snowboard big air.
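The paper ships no code, but the pipeline the abstract describes (spatial plus temporal alignment of two single-camera clips, then side-by-side or overlay display) maps naturally onto standard OpenCV and NumPy primitives. The sketch below is a minimal illustration of that reading, not the authors' implementation: all function names are hypothetical, temporal alignment is approximated by cross-correlating per-frame motion energy, and spatial alignment by a RANSAC homography over ORB feature matches.

```python
# Minimal sketch of spatiotemporal alignment for two single-camera clips.
# Illustration only, with standard OpenCV/NumPy tools; not the authors'
# implementation, and all function names are hypothetical.
import cv2
import numpy as np

def motion_energy(path):
    """Per-frame motion energy: mean absolute difference between frames."""
    cap, prev, energy = cv2.VideoCapture(path), None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            energy.append(np.mean(cv2.absdiff(gray, prev)))
        prev = gray
    cap.release()
    return np.array(energy)

def temporal_offset(e1, e2):
    """Frame lag of clip 1 relative to clip 2 that best aligns the two
    motion-energy signals (normalized cross-correlation)."""
    e1 = (e1 - e1.mean()) / (e1.std() + 1e-8)
    e2 = (e2 - e2.mean()) / (e2.std() + 1e-8)
    corr = np.correlate(e1, e2, mode="full")
    return int(np.argmax(corr)) - (len(e2) - 1)

def spatial_homography(img1, img2):
    """Homography mapping img2 onto img1 via ORB matches + RANSAC."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

Given the estimated lag and homography, each frame of the second trial can be warped with cv2.warpPerspective and blended onto the first with cv2.addWeighted to produce the overlay view the interviews refer to.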
Related papers
- Controllable Weather Synthesis and Removal with Video Diffusion Models [61.56193902622901]
WeatherWeaver is a video diffusion model that synthesizes diverse weather effects directly into any input video.
Our model provides precise control over weather effect intensity and supports blending various weather types, ensuring both realism and adaptability.
arXiv Detail & Related papers (2025-05-01T17:59:57Z)
- Analyzing Swimming Performance Using Drone Captured Aerial Videos [6.431314461860605]
This paper presents a novel approach for tracking swimmers using a moving UAV.
The proposed system employs a UAV equipped with a high-resolution camera to capture aerial footage of the swimmers.
The footage is then processed using computer vision algorithms to extract the swimmers' positions and movements.
arXiv Detail & Related papers (2025-03-17T09:38:44Z)
- MegaSaM: Accurate, Fast, and Robust Structure and Motion from Casual Dynamic Videos [104.1338295060383]
We present a system that allows for accurate, fast, and robust estimation of camera parameters and depth maps from casual monocular videos of dynamic scenes.
Our system is significantly more accurate and robust at camera pose and depth estimation when compared with prior and concurrent work.
arXiv Detail & Related papers (2024-12-05T18:59:42Z)
- ExpertAF: Expert Actionable Feedback from Video [81.46431188306397]
We introduce a novel method to generate actionable feedback from video of a person doing a physical activity.
Our method takes a video demonstration and its accompanying 3D body pose and generates expert commentary.
Our method is able to reason across multi-modal input combinations to output full-spectrum, actionable coaching.
arXiv Detail & Related papers (2024-08-01T16:13:07Z)
- Investigating Event-Based Cameras for Video Frame Interpolation in Sports [59.755469098797406]
We present a first investigation of event-based Video Frame Interpolation (VFI) models for generating sports slow-motion videos.
In particular, we design and implement a bi-camera recording setup with an RGB and an event-based camera to capture sports videos, and we temporally align and spatially register the two cameras.
Our experimental validation demonstrates that TimeLens, an off-the-shelf event-based VFI model, can effectively generate slow-motion footage for sports videos.
arXiv Detail & Related papers (2024-07-02T15:39:08Z)
- Animate Your Motion: Turning Still Images into Dynamic Videos [58.63109848837741]
We introduce Scene and Motion Conditional Diffusion (SMCD), a novel methodology for managing multimodal inputs.
SMCD incorporates a recognized motion conditioning module and investigates various approaches to integrate scene conditions.
Our design significantly enhances video quality, motion precision, and semantic coherence.
arXiv Detail & Related papers (2024-03-15T10:36:24Z)
- Visualizing Skiers' Trajectories in Monocular Videos [14.606629147104595]
We propose SkiTraVis, an algorithm to visualize the sequence of points traversed by a skier during their performance.
We performed experiments on videos of real-world professional competitions to quantify the visualization error, the computational efficiency, as well as the applicability.
arXiv Detail & Related papers (2023-04-06T11:06:37Z)
- Tubelet-Contrastive Self-Supervision for Video-Efficient Generalization [23.245275661852446]
We propose a self-supervised method for learning motion-focused video representations.
We learn similarities between videos with identical local motion dynamics but an otherwise different appearance.
Our approach maintains performance when using only 25% of the pretraining videos.
arXiv Detail & Related papers (2023-03-20T10:31:35Z)
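The tubelet-contrastive summary above rests on a standard contrastive objective: clips sharing local motion dynamics form positive pairs, and every other pairing in the batch is a negative. A minimal InfoNCE sketch of that machinery (the paper's tubelet generation is omitted, and the embeddings are assumed given):

```python
# Generic InfoNCE loss over paired clip embeddings; a sketch of the
# contrastive machinery such motion-focused pretraining builds on,
# not the paper's tubelet pipeline.
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """z_a[i] and z_b[i] share motion dynamics (positive pair);
    all other pairings in the batch act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                     # (B, B) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)   # diagonal = positives
    return F.cross_entropy(logits, targets)
```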
- Video-Based Reconstruction of the Trajectories Performed by Skiers [14.572756832049285]
We propose a video-based approach to reconstruct the sequence of points traversed by an athlete during their performance.
Our prototype consists of a pipeline of deep learning-based algorithms that reconstructs the athlete's motion and visualizes it according to the camera perspective.
arXiv Detail & Related papers (2021-12-17T17:40:06Z)
- Learning to Run with Potential-Based Reward Shaping and Demonstrations from Video Data [70.540936204654]
"Learning to run" competition was to train a two-legged model of a humanoid body to run in a simulated race course with maximum speed.
All submissions took a tabula rasa approach to reinforcement learning (RL) and were able to produce relatively fast, but not optimal running behaviour.
We demonstrate how data from videos of human running can be used to shape the reward of the humanoid learning agent.
arXiv Detail & Related papers (2020-12-16T09:46:58Z)
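The "Learning to Run" entry above uses potential-based reward shaping, which has a standard closed form (Ng et al., 1999): adding F(s, s') = gamma * Phi(s') - Phi(s) to the environment reward provably leaves the optimal policy unchanged. A minimal sketch, where the potential is a hypothetical similarity score against poses extracted from running videos:

```python
# Potential-based reward shaping (Ng et al., 1999):
# r' = r + gamma * phi(s_next) - phi(s), which preserves the optimal policy.
# The potential below is a hypothetical stand-in for similarity to poses
# extracted from videos of human running.
import numpy as np

def shaped_reward(r, s, s_next, phi, gamma=0.99):
    return r + gamma * phi(s_next) - phi(s)

reference_poses = np.random.rand(50, 22)  # hypothetical pose library from video

def phi(state):
    joints = np.asarray(state)[:22]       # hypothetical joint-feature slice
    return -np.min(np.linalg.norm(reference_poses - joints, axis=1))
```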
- Do You See What I See? Coordinating Multiple Aerial Cameras for Robot Cinematography [9.870369982132678]
We develop a real-time multi-UAV coordination system that is capable of recording dynamic targets while maximizing shot diversity and avoiding collisions.
We show that our coordination scheme has low computational cost and takes only 1.17 ms on average to plan for a team of 3 UAVs over a 10 s time horizon.
arXiv Detail & Related papers (2020-11-10T22:43:25Z)
- RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning [100.76672109782815]
We study unsupervised video representation learning that seeks to learn both motion and appearance features from unlabeled video only.
It is difficult to construct a suitable self-supervised task that models both motion and appearance features well.
We propose a new way to perceive the playback speed and exploit the relative speed between two video clips as labels.
arXiv Detail & Related papers (2020-10-27T16:42:50Z)
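The RSPNet summary above turns playback speed into free supervision: two clips are sampled from unlabeled video at different frame strides, and a network is trained to judge which plays faster. A hedged sketch of that sampling-and-labeling step (clip lengths, strides, and the three-way label are assumptions, not the paper's exact recipe):

```python
# Sketch of a relative playback-speed pretext task: subsample frames at
# different strides and label pairs by which clip plays faster.
# Not the RSPNet implementation; shapes and strides are assumptions.
import torch

def sample_clip(video, stride, length=16):
    """video: (T, C, H, W) tensor; a larger stride mimics faster playback."""
    idx = (torch.arange(length) * stride) % video.shape[0]
    return video[idx]

def relative_speed_label(stride_a, stride_b):
    """0: clip A slower, 1: same speed, 2: clip A faster."""
    return 0 if stride_a < stride_b else (1 if stride_a == stride_b else 2)
```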
- AIM 2019 Challenge on Video Temporal Super-Resolution: Methods and Results [129.15554076593762]
This paper reviews the first AIM challenge on video temporal super-resolution (frame interpolation).
From low-frame-rate (15 fps) video sequences, challenge participants are asked to submit higher-frame-rate (60 fps) video sequences.
We employ the REDS VTSR dataset, derived from diverse videos captured with a hand-held camera, for training and evaluation purposes.
arXiv Detail & Related papers (2020-05-04T01:51:23Z)
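For the VTSR challenge above, the arithmetic is a 4x temporal upsampling: three frames must be synthesized between every consecutive pair of 15 fps input frames to reach 60 fps. The sketch below is only a naive cross-fade baseline that illustrates this indexing; actual challenge entries use learned interpolation models:

```python
# Naive 15 -> 60 fps upsampling by linear cross-fading: three synthetic
# frames per input-frame pair. Illustrates the 4x indexing only; real VTSR
# methods are learned and far stronger.
import numpy as np

def upsample_4x(frames):
    """frames: list of HxWx3 float arrays at 15 fps -> list at 60 fps."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for t in (0.0, 0.25, 0.5, 0.75):
            out.append((1 - t) * a + t * b)
    out.append(frames[-1])
    return out
```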
- Multimodal and multiview distillation for real-time player detection on a football field [31.355119048749618]
We develop a system that detects players from a unique cheap and wide-angle fisheye camera assisted by a single narrow-angle thermal camera.
We show that our solution is effective in detecting players on the whole field filmed by the fisheye camera.
arXiv Detail & Related papers (2020-04-16T09:16:20Z)
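The distillation in the entry above can be read as a cross-modal teacher-student setup: a detector supervised on the narrow-angle thermal view provides soft targets for a student that sees only the fisheye stream. A minimal, hypothetical sketch of such a loss (the heatmap formulation and shapes are assumptions, not the paper's pipeline):

```python
# Minimal cross-modal distillation sketch: the fisheye student is trained
# to match the thermal teacher's player-detection heatmaps.
# Hypothetical formulation; not the paper's multimodal/multiview method.
import torch.nn.functional as F

def distillation_loss(student_heatmap, teacher_heatmap):
    """MSE between per-pixel player heatmaps; the teacher is held fixed."""
    return F.mse_loss(student_heatmap, teacher_heatmap.detach())
```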
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.