FedFitTech: A Baseline in Federated Learning for Fitness Tracking
- URL: http://arxiv.org/abs/2506.16840v1
- Date: Fri, 20 Jun 2025 08:43:39 GMT
- Title: FedFitTech: A Baseline in Federated Learning for Fitness Tracking
- Authors: Zeyneddin Oz, Shreyas Korde, Marius Bock, Kristof Van Laerhoven
- Abstract summary: We present the FedFitTech baseline, built on the publicly available Flower framework, which is widely used by both industry and academic researchers. This paper presents a case study that implements a system based on the FedFitTech baseline, incorporating a client-side early stopping strategy. Results show that this reduces overall redundant communication by 13 percent while maintaining overall recognition performance at a negligible cost of 1 percent.
- Score: 6.3897633754578536
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid evolution of sensors and resource-efficient machine learning models has spurred the widespread adoption of wearable fitness tracking devices. Equipped with inertial sensors, such devices can continuously capture physical movements for fitness technology (FitTech), enabling applications from sports optimization to preventive healthcare. Traditional centralized learning approaches to detect fitness activities struggle with privacy concerns, regulatory constraints, and communication inefficiencies. In contrast, Federated Learning (FL) enables decentralized model training by communicating model updates rather than private wearable sensor data. Applying FL to FitTech presents unique challenges, such as data imbalance, lack of labelled data, heterogeneous user activity patterns, and trade-offs between personalization and generalization. To simplify research on FitTech in FL, we present the FedFitTech baseline, built on the Flower framework, which is publicly available and widely used by both industry and academic researchers. Additionally, to illustrate its usage, this paper presents a case study that implements a system based on the FedFitTech baseline, incorporating a client-side early stopping strategy, and compares the results. For instance, this system allows wearable devices to optimize the trade-off between capturing common fitness activity patterns and preserving individuals' nuances, thereby enhancing both the scalability and efficiency of privacy-aware fitness tracking applications. Results show that this reduces overall redundant communication by 13 percent, while maintaining overall recognition performance at a negligible cost of 1 percent. Thus, the FedFitTech baseline creates a foundation for a wide range of new research and development opportunities in FitTech, and it is available as open source at: https://github.com/adap/flower/tree/main/baselines/fedfittech
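The client-side early stopping idea above can be summarized in a short sketch: each client keeps training locally while its validation loss improves and flags itself as converged once it has stalled for a few rounds, so the server can skip redundant updates from it. The snippet below is a minimal illustration on top of Flower's public NumPyClient interface; the injected train_fn/eval_fn callables, the patience value, and the "stopped" metric key are illustrative assumptions, not the exact FedFitTech implementation.

```python
# Minimal sketch of a client-side early stopping strategy using Flower's
# NumPyClient API. train_fn/eval_fn and the "stopped" metric key are
# hypothetical placeholders, not the FedFitTech implementation.
from typing import Callable, Dict, List, Tuple

import numpy as np
import flwr as fl

Weights = List[np.ndarray]


class EarlyStoppingClient(fl.client.NumPyClient):
    def __init__(
        self,
        weights: Weights,
        num_examples: int,
        train_fn: Callable[[Weights], Tuple[Weights, float]],  # -> (new weights, val loss)
        eval_fn: Callable[[Weights], float],                    # -> val loss
        patience: int = 3,                                      # tolerated rounds without improvement
    ):
        self.weights = weights
        self.num_examples = num_examples
        self.train_fn = train_fn
        self.eval_fn = eval_fn
        self.patience = patience
        self.best_val_loss = float("inf")
        self.stalled_rounds = 0
        self.stopped = False  # once True, the client skips further local training

    def get_parameters(self, config) -> Weights:
        return self.weights

    def fit(self, parameters: Weights, config: Dict):
        self.weights = parameters
        if not self.stopped:
            self.weights, val_loss = self.train_fn(self.weights)  # one local round
            if val_loss < self.best_val_loss:
                self.best_val_loss, self.stalled_rounds = val_loss, 0
            else:
                self.stalled_rounds += 1
                self.stopped = self.stalled_rounds >= self.patience
        # Report the flag as a metric so a server-side strategy can stop
        # selecting this client and save the redundant communication rounds.
        return self.weights, self.num_examples, {"stopped": self.stopped}

    def evaluate(self, parameters: Weights, config: Dict):
        loss = self.eval_fn(parameters)
        return float(loss), self.num_examples, {"stopped": self.stopped}
```

A FedAvg-style strategy on the server could, for example, read the "stopped" metric from fit results and exclude those clients from future sampling, which is one plausible way to realize the communication savings reported above.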
Related papers
- Learn More by Using Less: Distributed Learning with Energy-Constrained Devices [3.730504020733928]
Federated Learning (FL) has emerged as a solution for distributed model training across decentralized, privacy-preserving devices. We propose LeanFed, an energy-aware FL framework designed to optimize client selection and training workloads on battery-constrained devices.
arXiv Detail & Related papers (2024-12-03T09:06:57Z) - Personalized Wireless Federated Learning for Large Language Models [75.22457544349668]
Large language models (LLMs) have driven profound transformations in wireless networks. Within wireless environments, the training of LLMs faces significant challenges related to security and privacy. This paper presents a systematic analysis of the training stages of LLMs in wireless networks, including pre-training, instruction tuning, and alignment tuning.
arXiv Detail & Related papers (2024-04-20T02:30:21Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Generalization of Fitness Exercise Recognition from Doppler Measurements
by Domain-adaption and Few-Shot Learning [12.238586191793997]
In previous works, a mobile application was developed using an unmodified commercial off-the-shelf smartphone to recognize whole-body exercises.
Applying such a lab-environment-trained model to realistic application variations causes a significant drop in performance.
This paper presents a database with controlled and uncontrolled subsets of fitness exercises.
arXiv Detail & Related papers (2023-11-20T16:40:48Z) - FS-Real: Towards Real-World Cross-Device Federated Learning [60.91678132132229]
Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients without uploading their local data.
There is still a considerable gap between the flourishing FL research and real-world scenarios, mainly caused by the characteristics of heterogeneous devices and their scale.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
arXiv Detail & Related papers (2023-03-23T15:37:17Z) - Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - Efficient Personalized Learning for Wearable Health Applications using
HyperDimensional Computing [10.89988703152759]
Hyperdimensional computing (HDC) offers a well-suited on-device learning solution for resource-constrained devices.
Our system improves the energy efficiency of training by up to $45.8\times$ compared with the state-of-the-art Deep Neural Network (DNN) algorithms.
arXiv Detail & Related papers (2022-08-01T18:49:15Z) - Privacy-Preserving Personalized Fitness Recommender System (P3FitRec): A
Multi-level Deep Learning Approach [6.647564421295215]
We propose a novel privacy-aware personalized fitness recommender system.
We introduce a multi-level deep learning framework that learns important features from a large-scale real fitness dataset.
Our approach achieves personalization by inferring the fitness characteristics of users from sensory data.
arXiv Detail & Related papers (2022-03-23T05:27:35Z) - An adaptable cognitive microcontroller node for fitness activity
recognition [0.0]
Wobble boards are low-cost pieces of equipment that can be used for sensorimotor training to prevent ankle injuries or as part of the rehabilitation process after an injury.
In this work, we present a portable and battery-powered microcontroller-based device applicable to a wobble board.
To reduce power consumption, we add an adaptivity layer that dynamically manages the device's hardware and software configuration to adapt it to the required operating mode at runtime.
arXiv Detail & Related papers (2022-01-13T18:06:38Z) - To Talk or to Work: Flexible Communication Compression for Energy
Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression; a generic sketch of this kind of compression appears after this list.
arXiv Detail & Related papers (2020-12-22T02:54:18Z) - Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
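For the flexible communication compression mentioned in the "To Talk or to Work" entry above, top-k sparsification of model updates is one common way to trade update fidelity for upload size. The sketch below illustrates that generic technique under an assumed 10 percent compression ratio; it is not the convergence-guaranteed algorithm proposed in that paper.

```python
# Generic top-k sparsification sketch for compressing model updates before
# upload. The 10 percent ratio is an illustrative assumption, not a value
# taken from any of the papers listed above.
import numpy as np


def topk_sparsify(update: np.ndarray, ratio: float = 0.1):
    """Keep only the `ratio` fraction of entries with the largest magnitude."""
    flat = update.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k entries
    return idx.astype(np.int32), flat[idx]         # transmit indices + values only


def topk_densify(shape, idx, values) -> np.ndarray:
    """Reconstruct a dense update on the server from the sparse message."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)


# Example: compress a fake 1000-parameter update to roughly 10% of its size.
rng = np.random.default_rng(0)
update = rng.normal(size=(1000,))
idx, vals = topk_sparsify(update, ratio=0.1)
restored = topk_densify(update.shape, idx, vals)
```

Sending only the (index, value) pairs for the largest-magnitude entries shrinks each upload roughly in proportion to the chosen ratio; more elaborate schemes layer quantization or error feedback on top of this basic idea.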