First International StepUP Competition for Biometric Footstep Recognition: Methods, Results and Remaining Challenges
- URL: http://arxiv.org/abs/2602.11086v1
- Date: Wed, 11 Feb 2026 17:53:46 GMT
- Title: First International StepUP Competition for Biometric Footstep Recognition: Methods, Results and Remaining Challenges
- Authors: Robyn Larracy, Eve MacDonald, Angkoon Phinyomark, Saeid Rezaei, Mahdi Laghaei, Ali Hajighasem, Aaron Tabor, Erik Scheme
- Abstract summary: The First International StepUP Competition for Biometric Footstep Recognition was launched. Competitors were tasked with developing robust recognition models using the StepUP-P150 dataset. The top-performing team, Saeid_UCC, achieved the best equal error rate (EER) of 10.77%. Overall, the competition showcased strong solutions, but persistent challenges in generalizing to unfamiliar footwear highlight a critical area for future work.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biometric footstep recognition, based on a person's unique pressure patterns under their feet during walking, is an emerging field with growing applications in security and safety. However, progress in this area has been limited by the lack of large, diverse datasets necessary to address critical challenges such as generalization to new users and robustness to shifts in factors like footwear or walking speed. The recent release of the UNB StepUP-P150 dataset, the largest and most comprehensive collection of high-resolution footstep pressure recordings to date, opens new opportunities for addressing these challenges through deep learning. To mark this milestone, the First International StepUP Competition for Biometric Footstep Recognition was launched. Competitors were tasked with developing robust recognition models using the StepUP-P150 dataset that were then evaluated on a separate, dedicated test set designed to assess verification performance under challenging variations, given limited and relatively homogeneous reference data. The competition attracted global participation, with 23 registered teams from academia and industry. The top-performing team, Saeid_UCC, achieved the best equal error rate (EER) of 10.77% using a generative reward machine (GRM) optimization strategy. Overall, the competition showcased strong solutions, but persistent challenges in generalizing to unfamiliar footwear highlight a critical area for future work.
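The headline metric above is the equal error rate (EER): the operating point of a verification system at which the false accept rate (impostors wrongly accepted) equals the false reject rate (genuine users wrongly rejected). The sketch below is a minimal, hedged illustration of how an EER can be estimated from similarity scores; it is not the competition's evaluation code, and the score distributions are invented for the example.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: sweep candidate thresholds and return the
    point where the false accept rate (FAR) and false reject rate
    (FRR) are closest, reporting their average there."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
    best_gap, best_eer = 2.0, 0.0
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostors scoring at/above threshold
        frr = np.mean(genuine < t)    # genuine users scoring below threshold
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer

# Toy example: genuine pairs score higher than impostor pairs on average.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
print(f"EER ~ {equal_error_rate(genuine, impostor):.3f}")
```

A lower EER indicates better separation between genuine and impostor score distributions; the 10.77% reported above means that, at the balanced threshold, roughly one in nine verification attempts of each error type fails.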
Related papers
- Human Identification at a Distance: Challenges, Methods and Results on the Competition HID 2025 [70.29305328364755]
The International Competition on Human Identification at a Distance (HID) has been organized annually since 2020. The best-performing method reached 94.2% accuracy, setting a new benchmark on this dataset. We analyze key technical trends and outline potential directions for future research in gait recognition.
arXiv Detail & Related papers (2026-02-07T14:22:17Z) - SVC 2025: the First Multimodal Deception Detection Challenge [16.070848946361696]
The SVC 2025 Multimodal Deception Detection Challenge is a new benchmark designed to evaluate cross-domain generalization in audio-visual deception detection. We aim to foster the development of more adaptable, explainable, and practically deployable deception detection systems.
arXiv Detail & Related papers (2025-08-06T06:56:39Z) - Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks [229.73714829399802]
This survey probes the core challenges that the rise of Large Language Models poses for evaluation. We identify and analyze two pivotal transitions: (i) from task-specific to capability-based evaluation, which reorganizes benchmarks around core competencies such as knowledge, reasoning, instruction following, multi-modal understanding, and safety. We will dissect this issue, along with the core challenges of the above two transitions, from the perspectives of methods, datasets, evaluators, and metrics.
arXiv Detail & Related papers (2025-04-26T07:48:52Z) - SynFacePAD 2023: Competition on Face Presentation Attack Detection Based on Privacy-aware Synthetic Training Data [51.42380508231581]
The paper presents a summary of the Competition on Face Presentation Attack Detection Based on Privacy-aware Synthetic Training Data (SynFacePAD 2023), held at the 2023 International Joint Conference on Biometrics (IJCB 2023).
The competition aimed to motivate and attract solutions for detecting face presentation attacks while training on synthetic data, motivated by the privacy, legal, and ethical concerns associated with personal data.
The submitted solutions presented innovations and novel approaches that led to outperforming the considered baseline in the investigated benchmarks.
arXiv Detail & Related papers (2023-11-09T13:02:04Z) - Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO [50.58083807719749]
We present the results of the second Neural MMO challenge, hosted at IJCAI 2022, which received 1600+ submissions.
This competition targets robustness and generalization in multi-agent systems.
We will open-source our benchmark including the environment wrapper, baselines, a visualization tool, and selected policies for further research.
arXiv Detail & Related papers (2023-08-30T07:16:11Z) - The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors [62.9301667732188]
We propose a second iteration of the MineRL Competition.
The primary goal of the competition is to foster the development of algorithms which can efficiently leverage human demonstrations.
The competition is structured into two rounds in which competitors are provided several paired versions of the dataset and environment.
At the end of each round, competitors submit containerized versions of their learning algorithms to the AIcrowd platform.
arXiv Detail & Related papers (2021-01-26T20:32:30Z) - CVPR 2020 Continual Learning in Computer Vision Competition: Approaches, Results, Current Challenges and Future Directions [25.791936837340877]
The first Continual Learning in Computer Vision challenge held at CVPR in 2020 has been one of the first opportunities to evaluate different continual learning algorithms.
We report the main results of the competition, which attracted more than 79 registered teams, 11 finalists, and $2,300 in prizes.
arXiv Detail & Related papers (2020-09-14T08:53:05Z) - Analysing Affective Behavior in the First ABAW 2020 Competition [49.90617840789334]
The Affective Behavior Analysis in-the-wild (ABAW) 2020 Competition is the first Competition aiming at automatic analysis of the three main behavior tasks.
We describe this Competition, to be held in conjunction with the IEEE Conference on Face and Gesture Recognition, May 2020, in Buenos Aires, Argentina.
We outline the evaluation metrics, present both the baseline system and the top-3 performing teams' methodologies per Challenge and finally present their obtained results.
arXiv Detail & Related papers (2020-01-30T15:41:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.