A New Perspective for Shuttlecock Hitting Event Detection
- URL: http://arxiv.org/abs/2306.10293v1
- Date: Sat, 17 Jun 2023 08:34:53 GMT
- Title: A New Perspective for Shuttlecock Hitting Event Detection
- Authors: Yu-Hsi Chen
- Abstract summary: This article introduces a novel approach to shuttlecock hitting event detection.
Instead of depending on generic methods, we capture the hitting action of players by reasoning over a sequence of images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article introduces a novel approach to shuttlecock hitting event
detection. Instead of depending on generic methods, we capture the hitting
action of players by reasoning over a sequence of images. To learn the features
of hitting events in a video clip, we specifically utilize a deep learning
model known as SwingNet. This model is designed to capture the relevant
characteristics and patterns associated with the act of hitting in badminton.
By training SwingNet on the provided video clips, we aim to enable the model to
accurately recognize and identify the instances of hitting events based on
their distinctive features. Furthermore, we apply a specific video-processing
technique to extract prior features from the video, which significantly reduces
the learning difficulty for the model. The proposed method not only
provides an intuitive and user-friendly approach but also presents a fresh
perspective on the task of detecting badminton hitting events. The source code
will be available at
https://github.com/TW-yuhsi/A-New-Perspective-for-Shuttlecock-Hitting-Event-Detection.
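For readers who want a concrete starting point, below is a minimal sketch of a SwingNet-style frame-level detector: a lightweight CNN backbone feeding a bidirectional LSTM that emits per-frame hit/no-hit logits. The class name and hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Minimal SwingNet-style sketch (assumption: MobileNetV2 + BiLSTM, as in the
# original SwingNet for golf swing sequencing; not the authors' release).
import torch
import torch.nn as nn
import torchvision.models as models

class SwingNetSketch(nn.Module):
    def __init__(self, hidden_size=256, num_classes=2):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)
        backbone.classifier = nn.Identity()   # keep the 1280-dim pooled feature
        self.backbone = backbone
        self.lstm = nn.LSTM(1280, hidden_size, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, num_classes)  # hit / no-hit

    def forward(self, clip):                  # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).view(b, t, -1)
        ctx, _ = self.lstm(feats)             # temporal context across frames
        return self.head(ctx)                 # per-frame logits: (b, t, classes)

logits = SwingNetSketch()(torch.randn(1, 16, 3, 160, 160))  # -> (1, 16, 2)
```

The abstract does not spell out its video-processing step; one common way to expose motion priors for a fast-moving shuttlecock is simple frame differencing, shown here purely as an assumption:

```python
# Hypothetical prior-feature step: frame differencing highlights fast motion.
import cv2

def motion_prior(frames):
    """frames: list of BGR images; returns per-frame motion-magnitude maps."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    priors = []
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        priors.append(cv2.absdiff(gray, prev))
        prev = gray
    return priors
```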
Related papers
- Unifying Global and Local Scene Entities Modelling for Precise Action Spotting [5.474440128682843]
We introduce a novel approach that analyzes and models scene entities using an adaptive attention mechanism.
Our model has demonstrated outstanding performance, securing 1st place in the SoccerNet-v2 Action Spotting, FineDiving, and FineGym challenges.
arXiv Detail & Related papers (2024-04-15T17:24:57Z) - An All Deep System for Badminton Game Analysis [0.0874967598360817]
The CoachAI Badminton 2023 Track 1 initiative aims to automatically detect events within badminton match videos.
We've implemented various deep learning methods to tackle the problems arising from noisy detected data.
Our system garnered a score of 0.78 out of 1.0 in the challenge.
arXiv Detail & Related papers (2023-08-24T08:41:40Z) - A Graph-Based Method for Soccer Action Spotting Using Unsupervised Player Classification [75.93186954061943]
Action spotting involves understanding the dynamics of the game, the complexity of events, and the variation of video sequences.
In this work, we focus on the former by (a) identifying and representing the players, referees, and goalkeepers as nodes in a graph, and by (b) modeling their temporal interactions as sequences of graphs.
For the player identification task, our method obtains an overall performance of 57.83% average-mAP by combining it with other modalities.
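As a toy illustration of the graph idea in this entry (names and details are hypothetical, not the paper's code), each detected person becomes a node and one distance-weighted message-passing step mixes features between nearby players:

```python
# Toy per-frame player graph (hypothetical sketch, not the paper's method):
# nodes are detected people; closer players exchange more information.
import torch

def message_pass(node_feats, positions):
    """node_feats: (N, d) per-player features; positions: (N, 2) pitch coords."""
    dist = torch.cdist(positions, positions)   # (N, N) pairwise distances
    adj = torch.softmax(-dist, dim=-1)         # nearer neighbours -> larger weight
    return adj @ node_feats                    # one round of neighbour mixing

mixed = message_pass(torch.randn(22, 64), torch.rand(22, 2))  # 22 players
```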
arXiv Detail & Related papers (2022-11-22T15:23:53Z) - REST: REtrieve & Self-Train for generative action recognition [54.90704746573636]
We propose to adapt a pre-trained generative Vision & Language (V&L) Foundation Model for video/action recognition.
We show that direct fine-tuning of a generative model to produce action classes suffers from severe overfitting.
We introduce REST, a training framework consisting of two key components.
arXiv Detail & Related papers (2022-09-29T17:57:01Z) - PILED: An Identify-and-Localize Framework for Few-Shot Event Detection [79.66042333016478]
In our study, we employ cloze prompts to elicit event-related knowledge from pretrained language models.
We minimize the number of type-specific parameters, enabling our model to quickly adapt to event detection tasks for new types.
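A minimal illustration of the cloze-prompt idea (the model and prompt here are assumptions; PILED's actual templates differ):

```python
# Cloze prompting with a masked LM (illustrative; not PILED's templates).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = "The troops moved into the city overnight. This is a [MASK] event."
for cand in fill(prompt, top_k=3):
    print(cand["token_str"], round(cand["score"], 3))  # candidate event words
```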
arXiv Detail & Related papers (2022-02-15T18:01:39Z) - ActionCLIP: A New Paradigm for Video Action Recognition [14.961103794667341]
We provide a new perspective on action recognition by attaching importance to the semantic information of label texts.
We propose a new paradigm based on this multimodal learning framework for action recognition, which we dub "pre-train, prompt and fine-tune".
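The gist of treating label texts as supervision can be sketched as video-text matching (a hedged illustration, not the released ActionCLIP code):

```python
# Video-text matching sketch (assumption: CLIP-style cosine similarity).
import torch
import torch.nn.functional as F

video_emb = F.normalize(torch.randn(1, 512), dim=-1)     # from a video encoder
label_embs = F.normalize(torch.randn(400, 512), dim=-1)  # one per class label text
logits = 100.0 * video_emb @ label_embs.T                # scaled cosine similarity
pred = logits.argmax(dim=-1)                             # closest label text wins
```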
arXiv Detail & Related papers (2021-09-17T11:21:34Z) - Unsupervised Visual Representation Learning by Tracking Patches in Video [88.56860674483752]
We propose to use tracking as a proxy task for a computer vision system to learn the visual representations.
Modelled on the Catch game played by children, we design a Catch-the-Patch (CtP) game for a 3D-CNN model to learn visual representations.
arXiv Detail & Related papers (2021-05-06T09:46:42Z) - Extensively Matching for Few-shot Learning Event Detection [66.31312496170139]
Event detection models under supervised learning settings fail to transfer to new event types.
Few-shot learning has not been explored in event detection.
We propose two novel loss factors that match examples in the support set to provide more training signals to the model.
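A prototypical-style sketch of support-set matching (the paper's two specific loss factors are not reproduced here; this is an assumption-labelled baseline):

```python
# Support-set matching for few-shot event detection (hedged sketch only).
import torch

def proto_logits(query, support, support_labels, n_types):
    """query: (Q, d); support: (S, d); support_labels: (S,) event-type ids."""
    protos = torch.stack([support[support_labels == t].mean(0)
                          for t in range(n_types)])  # per-type prototypes
    return -torch.cdist(query, protos)               # nearer prototype -> higher score

scores = proto_logits(torch.randn(5, 64), torch.randn(20, 64),
                      torch.randint(0, 4, (20,)), 4)
```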
arXiv Detail & Related papers (2020-06-17T18:30:30Z) - Revisiting Few-shot Activity Detection with Class Similarity Control [107.79338380065286]
We present a framework for few-shot temporal activity detection based on proposal regression.
Our model is end-to-end trainable, takes into account the frame rate differences between few-shot activities and untrimmed test videos, and can benefit from additional few-shot examples.
arXiv Detail & Related papers (2020-03-31T22:02:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.