Scope Meets Screen: Lessons Learned in Designing Composite Visualizations for Marksmanship Training Across Skill Levels
- URL: http://arxiv.org/abs/2507.00333v1
- Date: Tue, 01 Jul 2025 00:16:41 GMT
- Title: Scope Meets Screen: Lessons Learned in Designing Composite Visualizations for Marksmanship Training Across Skill Levels
- Authors: Emin Zerman, Jonas Carlsson, Mårten Sjöström
- Abstract summary: We present a shooting visualization system and evaluate its perceived effectiveness for both novice and expert shooters. The insights gained from this design study point to the broader value of integrating first-person video with visual analytics for coaching.
- Score: 3.345437353879255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Marksmanship practice is required in various professions, including police officers, military personnel, and hunters, as well as sports shooters in disciplines such as Olympic shooting, biathlon, and modern pentathlon. The current form of training and coaching is mostly based on repetition, where the coach does not see through the eyes of the shooter, and analysis is limited to stance and post-session accuracy. In this study, we present a shooting visualization system and evaluate its perceived effectiveness for both novice and expert shooters. To achieve this, five composite visualizations were developed using first-person shooting video recordings enriched with overlaid metrics and graphical summaries. These views were evaluated with 10 participants (5 expert marksmen, 5 novices) through a mixed-methods study including shot-count and aiming interpretation tasks, pairwise preference comparisons, and semi-structured interviews. The results show that a dashboard-style composite view, combining raw video with a polar plot and selected graphs, was preferred in 9 of 10 cases and supported understanding across skill levels. The insights gained from this design study point to the broader value of integrating first-person video with visual analytics for coaching, and we suggest directions for applying this approach to other precision-based sports.
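As a minimal sketch of one ingredient of such a dashboard (not the authors' actual system), the aiming trace overlaid on the video can be summarized as a polar plot: the angle encodes the direction of the aim-point offset from the bullseye and the radius its magnitude. The trace data below is synthetic, and all names (`aim_offsets_to_polar`, the drift parameters) are illustrative assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

def aim_offsets_to_polar(xy):
    """Convert (N, 2) aim-point offsets in target coordinates to (theta, r)."""
    xy = np.asarray(xy, dtype=float)
    theta = np.arctan2(xy[:, 1], xy[:, 0])  # direction of offset from bullseye
    r = np.hypot(xy[:, 0], xy[:, 1])        # distance from bullseye
    return theta, r

# Synthetic wobble around the bullseye, drifting slightly low-left.
rng = np.random.default_rng(0)
trace = rng.normal(loc=(-0.3, -0.2), scale=0.5, size=(200, 2))

theta, r = aim_offsets_to_polar(trace)
fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(theta, r, linewidth=0.8, alpha=0.7)  # aiming path over time
ax.scatter(theta[-1], r[-1], marker="x")     # final shot position
fig.savefig("aim_polar.png", dpi=120)
```

A view like this compresses the whole pre-shot hold into one glanceable summary, which is one plausible reason the dashboard combining it with raw video was preferred in the study.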
Related papers
- Markerless Stride Length estimation in Athletic using Pose Estimation with monocular vision [2.334978724544296]
Performance measures such as stride length in athletics and the pace of runners can be estimated using different tricks. This paper investigates a computer vision-based approach for estimating stride length and speed transition from video sequences.
arXiv Detail & Related papers (2025-07-02T13:37:53Z)
- YourSkatingCoach: A Figure Skating Video Benchmark for Fine-Grained Element Analysis [10.444961818248624]
The dataset contains 454 videos of jump elements, the detected skater skeletons in each video, and gold labels for the start and end frames of each jump, together forming a video benchmark for figure skating.
We propose air time detection, a novel motion analysis task, the goal of which is to accurately detect the duration of the air time of a jump.
To verify the generalizability of the fine-grained labels, we apply the same process to other sports as cross-sport tasks, but for the coarser-grained task of action classification.
arXiv Detail & Related papers (2024-10-27T12:52:28Z)
- ExpertAF: Expert Actionable Feedback from Video [81.46431188306397]
We introduce a novel method to generate actionable feedback from video of a person doing a physical activity, such as basketball or soccer. Our method takes a video demonstration and its accompanying 3D body pose and generates expert commentary describing what the person is doing well and what they could improve. We show how to leverage Ego-Exo4D's [29] videos of skilled activity and expert commentary together with a strong language model to create a weakly-supervised training dataset for this task.
arXiv Detail & Related papers (2024-08-01T16:13:07Z)
- A Comprehensive Review of Few-shot Action Recognition [64.47305887411275]
Few-shot action recognition aims to address the high cost and impracticality of manually labeling complex and variable video data. It requires accurately classifying human actions in videos using only a few labeled examples per class. Numerous approaches have driven significant advancements in few-shot action recognition.
arXiv Detail & Related papers (2024-07-20T03:53:32Z)
- ViSTec: Video Modeling for Sports Technique Recognition and Tactical Analysis [19.945083591851517]
ViSTec is a Video-based Sports Technique recognition model inspired by human cognition.
Our approach integrates a graph to explicitly model strategic knowledge in stroke sequences and enhance technique recognition with contextual inductive bias.
Case studies with experts from the Chinese national table tennis team validate our model's capacity to automate analysis.
arXiv Detail & Related papers (2024-02-25T02:04:56Z)
- Early Action Recognition with Action Prototypes [62.826125870298306]
We propose a novel model that learns a prototypical representation of the full action for each class.
We decompose the video into short clips, where a visual encoder extracts features from each clip independently.
Later, a decoder aggregates together in an online fashion features from all the clips for the final class prediction.
arXiv Detail & Related papers (2023-12-11T18:31:13Z)
- Estimation of control area in badminton doubles with pose information from top and back view drone videos [11.679451300997016]
We present the first annotated drone dataset from top and back views in badminton doubles.
We propose a framework to estimate the control area probability map, which can be used to evaluate teamwork performance.
arXiv Detail & Related papers (2023-05-07T11:18:39Z)
- CLIP-ReIdent: Contrastive Training for Player Re-Identification [0.0]
We investigate whether it is possible to transfer the outstanding zero-shot performance of pre-trained CLIP models to the domain of player re-identification.
Unlike previous work, our approach is entirely class-agnostic and benefits from large-scale pre-training.
arXiv Detail & Related papers (2023-03-21T13:55:27Z)
- A Survey on Video Action Recognition in Sports: Datasets, Methods and Applications [60.3327085463545]
We present a survey on video action recognition for sports analytics.
We introduce more than ten types of sports, including team sports, such as football, basketball, volleyball, and hockey, and individual sports, such as figure skating, gymnastics, table tennis, diving, and badminton.
We develop a toolbox using PaddlePaddle, which supports football, basketball, table tennis and figure skating action recognition.
arXiv Detail & Related papers (2022-06-02T13:19:36Z)
- MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound [90.1857707251566]
We introduce MERLOT Reserve, a model that represents videos jointly over time.
We replace snippets of text and audio with a MASK token; the model learns by choosing the correct masked-out snippet.
Our objective learns faster than alternatives, and performs well at scale.
arXiv Detail & Related papers (2022-01-07T19:00:21Z)
- A Unified Framework for Shot Type Classification Based on Subject Centric Lens [89.26211834443558]
We propose a learning framework for shot type recognition using a Subject Guidance Network (SGNet).
SGNet separates the subject and background of a shot into two streams, serving as separate guidance maps for scale and movement type classification respectively.
We build a large-scale dataset MovieShots, which contains 46K shots from 7K movie trailers with annotations of their scale and movement types.
arXiv Detail & Related papers (2020-08-08T15:49:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.