Federated Remote Physiological Measurement with Imperfect Data
- URL: http://arxiv.org/abs/2203.05759v1
- Date: Fri, 11 Mar 2022 05:26:46 GMT
- Title: Federated Remote Physiological Measurement with Imperfect Data
- Authors: Xin Liu, Mingchuan Zhang, Ziheng Jiang, Shwetak Patel, Daniel McDuff
- Abstract summary: Growing need for technology that supports remote healthcare is being highlighted by an aging population and the COVID-19 pandemic.
In health-related machine learning applications the ability to learn predictive models without data leaving a private device is attractive.
Camera-based remote physiological sensing facilitates scalable and low-cost measurement.
- Score: 10.989271258156883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing need for technology that supports remote healthcare is being
acutely highlighted by an aging population and the COVID-19 pandemic. In
health-related machine learning applications the ability to learn predictive
models without data leaving a private device is attractive, especially when
these data might contain features (e.g., photographs or videos of the body)
that make identifying a subject trivial and/or the training data volume is
large (e.g., uncompressed video). Camera-based remote physiological sensing
facilitates scalable and low-cost measurement, but is a prime example of a task
that involves analysing high bit-rate videos containing identifiable images and
sensitive health information. Federated learning enables privacy-preserving
decentralized training which has several properties beneficial for camera-based
sensing. We develop the first mobile federated learning camera-based sensing
system and show that it can perform competitively with traditional
state-of-the-art supervised approaches. However, in the presence of corrupted
data (e.g., video or label noise) from a few devices the performance of weight
averaging quickly degrades. To address this, we leverage knowledge about the
expected noise profile within the video to intelligently adjust how the model
weights are averaged on the server. Our results show that this significantly
improves upon the robustness of models even when the signal-to-noise ratio is
low.
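To make the noise-aware aggregation idea concrete, here is a minimal sketch of server-side weight averaging that down-weights clients with noisier data. This is an illustration under assumptions, not the paper's exact scheme: the softmax-style conversion of noise scores into aggregation weights and all names (noise_aware_average, client_weights, noise_scores, temperature) are hypothetical.

```python
import numpy as np

def noise_aware_average(client_weights, noise_scores, temperature=1.0):
    """Sketch of noise-aware federated averaging on the server.

    client_weights: list of dicts mapping layer name -> np.ndarray
    noise_scores:   list of floats, one per client; higher means noisier data
    """
    # Turn noise estimates into aggregation weights: clients with cleaner
    # data (lower noise) receive a larger share via a softmax over -noise.
    scores = np.asarray(noise_scores, dtype=np.float64)
    logits = -scores / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Weighted average of each layer across clients (plain FedAvg would use
    # uniform or data-size-proportional weights here instead).
    averaged = {}
    for layer in client_weights[0]:
        stacked = np.stack([cw[layer] for cw in client_weights], axis=0)
        averaged[layer] = np.tensordot(weights, stacked, axes=1)
    return averaged
```

With uniform noise scores this reduces to standard weight averaging; as a few clients' scores grow (e.g., heavy video or label noise), their contribution to the global model shrinks smoothly.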
Related papers
- XAI-based gait analysis of patients walking with Knee-Ankle-Foot orthosis using video cameras [1.8749305679160366]
This paper presents a novel gait-analysis system that is robust to camera movements and provides explanations for its output.
The proposed system employs super-resolution and pose estimation during pre-processing.
It then identifies seven gait features: Stride Length, Step Length and Duration of Single Support for the orthotic and non-orthotic leg, Cadence, and Speed.
arXiv Detail & Related papers (2024-02-25T19:05:10Z)
- Using Motion Cues to Supervise Single-Frame Body Pose and Shape Estimation in Low Data Regimes [93.69730589828532]
When enough annotated training data is available, supervised deep-learning algorithms excel at estimating human body pose and shape using a single camera.
We show that, when annotated data is scarce, easy-to-obtain unannotated videos can be used instead to provide the required supervisory signals.
arXiv Detail & Related papers (2024-02-05T05:37:48Z)
- Remote Bio-Sensing: Open Source Benchmark Framework for Fair Evaluation of rPPG [2.82697733014759]
rPPG (remote photoplethysmography) is a technology that measures and analyzes BVP (Blood Volume Pulse) using the light absorption characteristics of hemoglobin captured through a camera.
This study provides a framework to evaluate various rPPG techniques across a wide range of datasets for fair evaluation and comparison; a minimal illustrative pipeline is sketched after this list.
arXiv Detail & Related papers (2023-07-24T09:35:47Z)
- Video object detection for privacy-preserving patient monitoring in intensive care [0.0]
We propose a new method for exploiting information in the temporal succession of video frames.
Our method outperforms a standard YOLOv5 baseline model by +1.7% mAP@.5 while also training over ten times faster on our proprietary dataset.
arXiv Detail & Related papers (2023-06-26T11:52:22Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Harnessing the Power of Text-image Contrastive Models for Automatic Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning in the domain of misinformation identification.
Our model shows superior performance in detecting non-matched image-text pairs when the training data is insufficient.
arXiv Detail & Related papers (2023-04-19T02:53:59Z)
- Motion Matters: Neural Motion Transfer for Better Camera Physiological Measurement [25.27559386977351]
Body motion is one of the most significant sources of noise when attempting to recover the subtle cardiac pulse from a video.
We adapt a neural video synthesis approach to augment videos for the task of remote photoplethysmography (rPPG).
We demonstrate a 47% improvement over existing inter-dataset results using various state-of-the-art methods.
arXiv Detail & Related papers (2023-03-21T17:51:23Z)
- Fast and Robust Video-Based Exercise Classification via Body Pose Tracking and Scalable Multivariate Time Series Classifiers [13.561233730881279]
We present an application for classifying strength and conditioning (S&C) exercises using video.
We propose an approach named BodyMTS to turn video into time series by employing body pose tracking.
We show that BodyMTS achieves an average accuracy of 87%, which is significantly higher than the accuracy of human domain experts.
arXiv Detail & Related papers (2022-10-02T13:03:38Z)
- Differentiable Frequency-based Disentanglement for Aerial Video Action Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z)
- Robustar: Interactive Toolbox Supporting Precise Data Annotation for Robust Vision Learning [53.900911121695536]
We introduce the initial release of our software Robustar.
It aims to improve the robustness of vision classification machine learning models through a data-driven perspective.
arXiv Detail & Related papers (2022-07-18T21:12:28Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models pretrained on ImageNet show a significant increase in performance, generalization and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
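As referenced in the Remote Bio-Sensing (rPPG) entry above, the following is a minimal illustrative rPPG baseline: spatially average the green channel over a face region, band-pass filter to the plausible heart-rate band, and take the dominant frequency. This is a textbook-style sketch under assumptions (a fixed face box, scipy filtering, the hypothetical function estimate_heart_rate_bpm), not the benchmark framework or any method from the listed papers.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate_bpm(frames, fps, face_box):
    """Estimate pulse rate (beats per minute) from RGB video frames.

    frames:   iterable of HxWx3 RGB frames
    fps:      frame rate of the video in Hz
    face_box: (top, bottom, left, right) pixel coordinates of a fixed face region
    """
    top, bottom, left, right = face_box

    # 1. Spatially average the green channel inside the face region per frame;
    #    hemoglobin absorption makes the green channel most pulse-sensitive.
    trace = np.array([f[top:bottom, left:right, 1].mean() for f in frames])
    trace = trace - trace.mean()

    # 2. Band-pass filter to the plausible heart-rate band
    #    (0.7-4.0 Hz, i.e. roughly 42-240 beats per minute).
    low_hz, high_hz = 0.7, 4.0
    b, a = butter(3, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, trace)

    # 3. Take the dominant frequency in the band as the pulse rate estimate.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(filtered))
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return freqs[band][np.argmax(spectrum[band])] * 60.0
```

Learning-based rPPG methods, including the supervised models discussed in the main paper, replace this hand-crafted pipeline with trained networks; the sketch is only a baseline for intuition about what the video signal contains.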
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.