Mobile Exergames: Activity Recognition Based on Smartphone Sensors
- URL: http://arxiv.org/abs/2602.00809v1
- Date: Sat, 31 Jan 2026 16:36:33 GMT
- Title: Mobile Exergames: Activity Recognition Based on Smartphone Sensors
- Authors: David Craveiro, Hugo Silva
- Abstract summary: We propose a proof-of-concept 2D endless game called Duck Catch & Fit. It implements a detailed activity recognition system that uses a smartphone's accelerometer, gyroscope, and magnetometer sensors. The results show that machine learning techniques can recognize human activity with high recognition rates.
- Score: 0.2776043688957992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smartphone sensors can be extremely useful in providing information about people's activities and behaviors. Human activity recognition is increasingly used in gaming, medical, and surveillance applications. In this paper, we propose a proof-of-concept 2D endless game called Duck Catch & Fit, which implements a detailed activity recognition system that uses a smartphone's accelerometer, gyroscope, and magnetometer sensors. The system applies feature extraction and learning mechanisms to detect human activities such as staying in place, side movements, and fake side movements. In addition, a voice recognition system is integrated to recognize the spoken word "fire" and raise the game's complexity. The results show that machine learning techniques can recognize human activity with high recognition rates. Moreover, the combination of movement-based and voice-based interaction contributes to more immersive gameplay.
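The paper itself includes no code, but the pipeline it describes (windowing the raw accelerometer, gyroscope, and magnetometer streams, extracting features, and classifying with a learned model) can be illustrated with a minimal sketch. The window length, feature set, and RandomForestClassifier below are illustrative assumptions, not the authors' actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 128  # samples per window (assumed; e.g. ~2.5 s at 50 Hz)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Per-channel statistics for one window of shape (WINDOW, 9):
    3 axes each for accelerometer, gyroscope, and magnetometer."""
    return np.concatenate([
        window.mean(axis=0),         # average level per channel
        window.std(axis=0),          # variability per channel
        np.abs(window).max(axis=0),  # peak magnitude per channel
    ])

def sliding_windows(signal: np.ndarray, step: int = WINDOW // 2):
    """Half-overlapping windows over a (n_samples, 9) sensor stream."""
    for start in range(0, len(signal) - WINDOW + 1, step):
        yield signal[start:start + WINDOW]

# Toy stand-ins for labeled recordings of the three game movements.
rng = np.random.default_rng(0)
recordings = {
    "staying": rng.normal(size=(1024, 9)),
    "side_move": rng.normal(size=(1024, 9)) + 1.0,
    "fake_side_move": rng.normal(size=(1024, 9)) - 1.0,
}

X = [extract_features(w) for sig in recordings.values()
     for w in sliding_windows(sig)]
y = [label for label, sig in recordings.items()
     for _ in sliding_windows(sig)]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([extract_features(recordings["staying"][:WINDOW])]))
```

In the game loop, each predicted label would drive the on-screen avatar, while a separate speech recognizer listens for the keyword "fire".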
Related papers
- Ego4o: Egocentric Human Motion Capture and Understanding from Multi-Modal Input [62.51283548975632]
This work focuses on tracking and understanding human motion using consumer wearable devices, such as VR/AR headsets, smart glasses, cellphones, and smartwatches. We present Ego4o (o for omni), a new framework for simultaneous human motion capture and understanding from multi-modal egocentric inputs.
arXiv Detail & Related papers (2025-04-11T11:18:57Z)
- Moto: Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos [101.26467307473638]
We introduce Moto, which converts video content into latent Motion Token sequences by a Latent Motion Tokenizer. We pre-train Moto-GPT through motion token autoregression, enabling it to capture diverse visual motion knowledge. To transfer learned motion priors to real robot actions, we implement a co-fine-tuning strategy that seamlessly bridges latent motion token prediction and real robot control.
arXiv Detail & Related papers (2024-12-05T18:57:04Z)
- Human Activity Recognition using Smartphones [0.0]
We have created an Android application that recognizes daily human activities and calculates the calories burnt in real time.
Calories are estimated with a formula based on the Metabolic Equivalent (MET); a minimal sketch of such an estimate follows this entry.
arXiv Detail & Related papers (2024-04-03T17:05:41Z)
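The summary does not give the application's exact formula; a commonly used MET-based estimate is kcal = MET × body weight (kg) × duration (h). The MET values in this sketch are illustrative placeholders, not the application's actual table:

```python
# Assumed MET values per activity (illustrative placeholders;
# 1 MET corresponds to the resting metabolic rate).
MET = {"sitting": 1.3, "walking": 3.5, "running": 8.0}

def calories_burnt(activity: str, weight_kg: float, minutes: float) -> float:
    """Common MET-based estimate: kcal = MET * weight (kg) * duration (h)."""
    return MET[activity] * weight_kg * (minutes / 60.0)

print(f"{calories_burnt('walking', 70, 30):.1f} kcal")  # 122.5 kcal
```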
- Your Day in Your Pocket: Complex Activity Recognition from Smartphone Accelerometers [7.335712499936904]
This paper investigates the recognition of complex activities exclusively using smartphone accelerometer data.
We used a large smartphone sensing dataset collected from over 600 users in five countries during the pandemic.
Deep learning-based binary classification of eight complex activities achieves AUROC scores of up to 0.76 with partially personalized models.
arXiv Detail & Related papers (2023-01-17T16:22:30Z)
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
arXiv Detail & Related papers (2022-12-07T18:55:53Z)
- Learning Effect of Lay People in Gesture-Based Locomotion in Virtual Reality [81.5101473684021]
Some of the most promising methods are gesture-based and do not require additional handheld hardware.
Recent work focused mostly on user preference and performance of the different locomotion techniques.
This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR.
arXiv Detail & Related papers (2022-06-16T10:44:16Z)
- Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning [62.83590925557013]
We learn a set of challenging partially-observed manipulation tasks from visual and audio inputs.
Our proposed system learns these tasks by combining offline imitation learning from tele-operated demonstrations and online finetuning.
In a set of simulated tasks, we find that our system benefits from using audio, and that by using online interventions we are able to improve the success rate of offline imitation learning by 20%.
arXiv Detail & Related papers (2022-05-30T04:52:58Z)
- Physical Activity Recognition by Utilising Smartphone Sensor Signals [0.0]
This study collected human activity data from 60 participants over two different days, covering six activities recorded by the gyroscope and accelerometer sensors of a modern smartphone.
The proposed approach achieved a classification accuracy of 98 percent in identifying four different activities.
arXiv Detail & Related papers (2022-01-20T09:58:52Z)
- Classifying Human Activities with Inertial Sensors: A Machine Learning Approach [0.0]
Human Activity Recognition (HAR) is an ongoing research topic.
It has applications in medical support, sports, fitness, social networking, human-computer interfaces, senior care, entertainment, and surveillance, among other areas.
We examined and analyzed different Machine Learning and Deep Learning approaches for Human Activity Recognition using inertial sensor data from smartphones.
arXiv Detail & Related papers (2021-11-09T08:17:33Z)
- Incremental Learning Techniques for Online Human Activity Recognition [0.0]
We propose a human activity recognition (HAR) approach for the online prediction of physical movements.
We develop a HAR system containing monitoring software and a mobile application that collects accelerometer and gyroscope data.
Six incremental learning algorithms are employed and evaluated in this work and compared with several batch learning algorithms commonly used for developing offline HAR systems; a sketch of the online-update pattern follows this entry.
arXiv Detail & Related papers (2021-09-20T11:33:09Z)
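The six incremental algorithms are not named in the summary. As an illustration of the online-update pattern such a system relies on, the sketch below uses scikit-learn's SGDClassifier, one common incremental learner; the feature dimensionality, labels, and simulated stream are placeholders:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Simulated stream of (features, label) pairs standing in for
# windows of accelerometer/gyroscope features arriving one at a time.
rng = np.random.default_rng(0)
classes = np.array([0, 1, 2])  # e.g. walking, sitting, standing
model = SGDClassifier(random_state=0)

for step in range(100):
    x = rng.normal(size=(1, 6))        # one incoming feature window
    y = np.array([classes[step % 3]])  # its (simulated) label
    # partial_fit updates the model in a single pass; the full set of
    # classes must be declared no later than the first call.
    model.partial_fit(x, y, classes=classes)

print(model.predict(rng.normal(size=(1, 6))))
```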
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
The prompt recognition of movement carefulness from observing the partner's action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)