BlurBall: Joint Ball and Motion Blur Estimation for Table Tennis Ball Tracking
- URL: http://arxiv.org/abs/2509.18387v1
- Date: Mon, 22 Sep 2025 20:16:50 GMT
- Title: BlurBall: Joint Ball and Motion Blur Estimation for Table Tennis Ball Tracking
- Authors: Thomas Gossard, Filip Radovic, Andreas Ziegler, Andreas Zell
- Abstract summary: Motion blur reduces the clarity of fast-moving objects, posing challenges for detection systems. This paper introduces a new labeling strategy that places the ball at the center of the blur streak and explicitly annotates blur attributes. We also introduce BlurBall, a model that jointly estimates ball position and motion blur attributes.
- Score: 7.933039558471408
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Motion blur reduces the clarity of fast-moving objects, posing challenges for detection systems, especially in racket sports, where balls often appear as streaks rather than distinct points. Existing labeling conventions mark the ball at the leading edge of the blur, introducing asymmetry and ignoring valuable motion cues correlated with velocity. This paper introduces a new labeling strategy that places the ball at the center of the blur streak and explicitly annotates blur attributes. Using this convention, we release a new table tennis ball detection dataset. We demonstrate that this labeling approach consistently enhances detection performance across various models. Furthermore, we introduce BlurBall, a model that jointly estimates ball position and motion blur attributes. By incorporating attention mechanisms such as Squeeze-and-Excitation over multi-frame inputs, we achieve state-of-the-art results in ball detection. Leveraging blur not only improves detection accuracy but also enables more reliable trajectory prediction, benefiting real-time sports analytics.
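The abstract does not spell out how blur attributes are parameterized; assuming the common length-plus-angle form, the proposed center-of-streak convention amounts to shifting a leading-edge label back by half the streak length along the motion direction. A minimal sketch (the function name and parameterization are illustrative, not from the paper):

```python
import math

def center_of_blur(leading_edge, blur_len, blur_angle):
    """Convert a leading-edge ball label to the center of its blur streak.

    leading_edge: (x, y) pixel position at the front of the streak.
    blur_len:     streak length in pixels.
    blur_angle:   motion direction in radians, in image coordinates.

    Shifts the label back by half the streak length along the motion
    direction, making the annotation symmetric about the blur.
    """
    x, y = leading_edge
    return (x - 0.5 * blur_len * math.cos(blur_angle),
            y - 0.5 * blur_len * math.sin(blur_angle))

# A horizontal streak 20 px long with its leading edge at (110, 50)
# has its center 10 px behind, at (100.0, 50.0).
print(center_of_blur((110, 50), 20, 0.0))
```

Because the streak length correlates with ball speed, annotating it alongside position is what lets a detector expose the motion cue the paper says leading-edge labels discard.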
Related papers
- CourtMotion: Learning Event-Driven Motion Representations from Skeletal Data for Basketball [45.88028371034407]
CourtMotion is a temporal modeling framework for analyzing and predicting game events and plays in professional basketball. Our two-stage approach first processes skeletal tracking data through Graph Neural Networks to capture nuanced motion patterns. We introduce event projection heads that explicitly connect player movements to basketball events like passes, shots, and steals, training the model to associate physical motion patterns with their purposes.
arXiv Detail & Related papers (2025-12-01T09:58:24Z) - Automated Tennis Player and Ball Tracking with Court Keypoints Detection (Hawk Eye System) [0.0]
This study presents a complete pipeline for automated tennis match analysis. Our framework integrates multiple deep learning models to detect and track players and the tennis ball in real time. The model outputs an annotated video along with detailed performance metrics, enabling coaches, broadcasters, and players to gain actionable insights into the dynamics of the game.
arXiv Detail & Related papers (2025-11-06T07:18:54Z) - Egocentric Event-Based Vision for Ping Pong Ball Trajectory Prediction [17.147140984254655]
We present a real-time egocentric trajectory prediction system for table tennis using event cameras. We collect a dataset of ping-pong game sequences, including 3D ground-truth trajectories of the ball, synchronized with sensor data from the Meta Project Aria glasses. Our detection pipeline has a worst-case total latency of 4.5 ms, including computation and perception.
arXiv Detail & Related papers (2025-06-09T15:22:55Z) - MATE: Motion-Augmented Temporal Consistency for Event-based Point Tracking [58.719310295870024]
This paper presents an event-based framework for tracking any point. To resolve ambiguities caused by event sparsity, a motion-guidance module incorporates kinematic vectors into the local matching process. The method improves the $Survival_{50}$ metric by 17.9% over the event-only tracking-any-point baseline.
arXiv Detail & Related papers (2024-12-02T09:13:29Z) - Walker: Self-supervised Multiple Object Tracking by Walking on Temporal Appearance Graphs [117.67620297750685]
We introduce Walker, the first self-supervised tracker that learns from videos with sparse bounding box annotations, and no tracking labels.
Walker is the first self-supervised tracker to achieve competitive performance on MOT17, DanceTrack, and BDD100K.
arXiv Detail & Related papers (2024-09-25T18:00:00Z) - TrackNetV4: Enhancing Fast Sports Object Tracking with Motion Attention Maps [6.548400020461624]
We introduce an enhancement to the TrackNet family by fusing high-level visual features with learnable motion attention maps.
Our approach leverages frame-differencing maps, modulated by a motion prompt layer, to highlight key motion regions over time.
We refer to our lightweight, plug-and-play solution, built on top of the existing TrackNet, as TrackNetV4.
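The frame-differencing maps mentioned above are the raw signal behind such motion attention: the per-pixel absolute difference between consecutive frames is large wherever content moved. A minimal sketch of that idea (plain Python lists stand in for image arrays; the actual TrackNetV4 layer is learned on top of this signal):

```python
def motion_map(prev_frame, curr_frame):
    """Per-pixel absolute frame difference between two grayscale frames,
    given as nested lists of intensities. High values mark motion."""
    return [[abs(c - p) for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)]

# A bright ball moves one pixel to the right between frames; the map
# lights up both where it left and where it arrived.
prev = [[0, 0, 0],
        [0, 9, 0]]
curr = [[0, 0, 0],
        [0, 0, 9]]
print(motion_map(prev, curr))
```

A learned modulation layer can then weight these raw differences so that, for example, camera shake is suppressed while fast, small objects such as balls are emphasized.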
arXiv Detail & Related papers (2024-09-22T17:58:09Z) - Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z) - A Badminton Recognition and Tracking System Based on Context Multi-feature Fusion [6.068573093901329]
Two trajectory clip trackers are designed based on different rules to capture the correct trajectory of the ball.
Two rounds of detection from coarse-grained to fine-grained are used to solve the challenges encountered in badminton detection.
arXiv Detail & Related papers (2023-06-26T08:07:56Z) - Table Tennis Stroke Detection and Recognition Using Ball Trajectory Data [5.735035463793008]
A single camera setup positioned in the umpire's view has been employed to procure a dataset consisting of six stroke classes executed by four professional table tennis players.
Ball tracking using YOLOv4, a traditional object detection model, and TrackNetv2, a temporal heatmap-based model, has been implemented on our dataset.
A mathematical approach developed to extract temporal boundaries of strokes using the ball trajectory data yielded a total of 2023 valid strokes.
The temporal convolutional network developed performed stroke recognition on completely unseen data with an accuracy of 87.155%.
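The entry does not specify its mathematical approach for extracting stroke boundaries; one simple proxy consistent with the description is to mark racket contacts at reversals of the ball's horizontal velocity along its tracked trajectory. A hypothetical sketch of that idea (function name and heuristic are illustrative assumptions, not the paper's method):

```python
def stroke_hits(xs):
    """Index the frames where the ball's horizontal position reverses
    direction, a crude proxy for racket contact: one hit per reversal.

    xs: per-frame horizontal ball positions in pixels.
    """
    vel = [b - a for a, b in zip(xs, xs[1:])]  # frame-to-frame velocity
    return [i + 1 for i, (u, v) in enumerate(zip(vel, vel[1:]))
            if u * v < 0]  # sign change => direction reversal

# The ball travels right, reverses at frame 3, then reverses again at
# frame 6: two detected hits.
xs = [0, 1, 2, 3, 2, 1, 0, 1, 2]
print(stroke_hits(xs))
```

In practice, such reversal frames would delimit the temporal windows fed to a stroke classifier, with smoothing and minimum-gap rules to suppress tracking noise.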
arXiv Detail & Related papers (2023-02-19T19:13:24Z) - P2ANet: A Dataset and Benchmark for Dense Action Detection from Table Tennis Match Broadcasting Videos [64.57435509822416]
The dataset consists of 2,721 video clips collected from broadcast videos of professional table tennis matches at the World Table Tennis Championships and Olympiads.
We formulate two sets of action detection problems: action localization and action recognition.
The results confirm that P2ANet is still challenging and can serve as a dedicated benchmark for dense action detection from videos.
arXiv Detail & Related papers (2022-07-26T08:34:17Z) - Ball 3D localization from a single calibrated image [1.2891210250935146]
We propose to address the task from a single image by estimating the ball diameter in pixels and using the known real ball diameter in meters.
This approach is suitable for any game situation where the ball is (even partly) visible.
Validation on three basketball datasets reveals that our model gives remarkable predictions for ball 3D localization.
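The geometry behind estimating depth from an apparent diameter is the standard pinhole-camera relation: an object of known physical size D appearing d pixels wide under a focal length f (in pixels) lies at distance Z = f * D / d. A minimal sketch (the exact calibration handling in the paper may differ):

```python
def ball_depth(focal_px, real_diameter_m, pixel_diameter):
    """Pinhole-camera depth estimate from an object of known size.

    focal_px:        camera focal length expressed in pixels.
    real_diameter_m: true ball diameter in meters.
    pixel_diameter:  apparent ball diameter in the image, in pixels.

    Returns the distance from the camera to the ball in meters,
    via Z = f * D / d.
    """
    return focal_px * real_diameter_m / pixel_diameter

# With a 1000 px focal length, a 0.24 m basketball imaged 40 px wide
# lies 6 m from the camera.
print(ball_depth(1000, 0.24, 40))
```

Combined with the ball's pixel coordinates and the camera intrinsics, this single depth value is enough to back-project to a full 3D position, which is why the approach works whenever the ball is even partly visible.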
arXiv Detail & Related papers (2022-03-30T19:38:14Z) - ArTIST: Autoregressive Trajectory Inpainting and Scoring for Tracking [80.02322563402758]
One of the core components in online multiple object tracking (MOT) frameworks is associating new detections with existing tracklets.
We introduce a probabilistic autoregressive generative model to score tracklet proposals by directly measuring the likelihood that a tracklet represents natural motion.
arXiv Detail & Related papers (2020-04-16T06:43:11Z)