More to Less (M2L): Enhanced Health Recognition in the Wild with Reduced
Modality of Wearable Sensors
- URL: http://arxiv.org/abs/2202.08267v1
- Date: Wed, 16 Feb 2022 18:23:29 GMT
- Title: More to Less (M2L): Enhanced Health Recognition in the Wild with Reduced
Modality of Wearable Sensors
- Authors: Huiyuan Yang, Han Yu, Kusha Sridhar, Thomas Vaessen, Inez Myin-Germeys
and Akane Sano
- Abstract summary: Fusing multiple sensors is common in many applications, but it may not always be feasible in real-world settings.
We propose an effective more to less (M2L) learning framework that improves test-time performance with a reduced set of sensors.
- Score: 18.947172818861773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately recognizing health-related conditions from wearable data is
crucial for improved healthcare outcomes. To improve the recognition accuracy,
various approaches have focused on how to effectively fuse information from
multiple sensors. Fusing multiple sensors is common in many applications, but
it may not always be feasible in real-world settings. For example, although
combining bio-signals from multiple sensors (i.e., a chest pad sensor and a
wrist-worn sensor) has been shown to improve performance, wearing multiple
devices might be impractical in a free-living context. To address this
challenge, we propose an effective more to less (M2L) learning framework that
improves test-time performance with reduced sensors by leveraging the
complementary information of multiple modalities during training. More
specifically, different sensors may carry different but complementary
information, and our model is designed to enforce collaboration among the
modalities: positive knowledge transfer is encouraged and negative knowledge
transfer is suppressed, so that better representations are learned for the
individual modalities. Our experimental results show that our framework
achieves performance comparable to that of the full set of modalities. Our
code and results will be available at
https://github.com/compwell-org/More2Less.git.
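The abstract describes the training recipe only at a high level. The sketch below is a minimal, hedged reading of that idea in PyTorch, assuming a per-sample gate that allows knowledge transfer only from the modality that is currently correct (positive transfer) and masks it out otherwise (negative transfer suppressed); the encoder sizes, loss weights, temperature, and the KL-based transfer term are illustrative placeholders, not the paper's exact formulation.

```python
# Illustrative M2L-style sketch: both modalities are available during training,
# but only one sensor branch is kept at test time. The gating rule and loss
# form below are assumptions made for illustration, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Tiny per-modality encoder; a real model would be modality-specific."""

    def __init__(self, in_dim: int, hidden: int = 64, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))


def m2l_loss(logits_a, logits_b, labels, temperature=2.0, alpha=0.5):
    """Per-modality cross-entropy plus a gated knowledge-transfer term.

    Transfer is encouraged only from a modality that is correct on a sample
    toward the other one; when the would-be teacher is wrong, the term is
    masked out, suppressing negative transfer.
    """
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)

    def gated_kl(student, teacher):
        teacher_ok = (teacher.argmax(dim=1) == labels).float()  # per-sample gate
        kl = F.kl_div(
            F.log_softmax(student / temperature, dim=1),
            F.softmax(teacher.detach() / temperature, dim=1),
            reduction="none",
        ).sum(dim=1)
        return (teacher_ok * kl).mean()

    transfer = gated_kl(logits_a, logits_b) + gated_kl(logits_b, logits_a)
    return ce + alpha * transfer


# Toy usage: modality A (e.g., a chest sensor) is only used during training;
# modality B (e.g., a wrist sensor) is the one retained at deployment.
enc_a, enc_b = Encoder(in_dim=12), Encoder(in_dim=6)
opt = torch.optim.Adam(list(enc_a.parameters()) + list(enc_b.parameters()), lr=1e-3)

x_a, x_b = torch.randn(32, 12), torch.randn(32, 6)
y = torch.randint(0, 2, (32,))

loss = m2l_loss(enc_a(x_a), enc_b(x_b), y)
opt.zero_grad()
loss.backward()
opt.step()

# Inference with the reduced sensor set: only the retained branch is evaluated.
with torch.no_grad():
    pred = enc_b(x_b).argmax(dim=1)
```

In this sketch, "testing with reduced sensors" simply means that only the retained branch (enc_b above) is evaluated at deployment, while the transfer term shapes its representation during training.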
Related papers
- Condition-Aware Multimodal Fusion for Robust Semantic Perception of Driving Scenes [56.52618054240197]
We propose a novel, condition-aware multimodal fusion approach for robust semantic perception of driving scenes.
Our method, CAFuser, uses an RGB camera input to classify environmental conditions and generate a Condition Token that guides the fusion of multiple sensor modalities.
We set the new state of the art with CAFuser on the MUSES dataset with 59.7 PQ for multimodal panoptic segmentation and 78.2 mIoU for semantic segmentation, ranking first on the public benchmarks.
arXiv Detail & Related papers (2024-10-14T17:56:20Z)
- Virtual Fusion with Contrastive Learning for Single Sensor-based Activity Recognition [5.225544155289783]
Various types of sensors can be used for Human Activity Recognition (HAR).
Sometimes a single sensor cannot fully observe the user's motions from its perspective, which causes wrong predictions.
We propose Virtual Fusion - a new method that takes advantage of unlabeled data from multiple time-synchronized sensors during training, but only needs one sensor for inference.
arXiv Detail & Related papers (2023-12-01T17:03:27Z)
- Contrastive Left-Right Wearable Sensors (IMUs) Consistency Matching for HAR [0.0]
We show how real data can be used for self-supervised learning without any transformations.
Our approach involves contrastive matching of two different sensors.
We test our approach on the Opportunity and MM-Fit datasets.
arXiv Detail & Related papers (2023-11-21T15:31:16Z)
- Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone Periocular Recognition [52.15994166413364]
We employ fusion of several comparators to improve periocular performance when images from different smartphones are compared.
We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios.
Our framework also provides an elegant and simple solution to handle signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain.
arXiv Detail & Related papers (2023-11-02T13:43:44Z)
- Multi-unit soft sensing permits few-shot learning [0.0]
A performance gain is generally attained when knowledge is transferred among strongly related soft sensor learning tasks.
A particularly relevant case for transferability is when developing soft sensors of the same type for similar, but physically different processes or units.
Applying methods that exploit transferability in this setting leads to what we call multi-unit soft sensing.
arXiv Detail & Related papers (2023-09-27T17:50:05Z)
- Machine Learning Based Compensation for Inconsistencies in Knitted Force Sensors [1.0742675209112622]
Knitted sensors frequently suffer from inconsistencies due to innate effects such as offset, relaxation, and drift.
In this paper, we demonstrate a method for counteracting this by processing the readings with a minimal artificial neural network (ANN).
By training a three-layer ANN with a total of 8 neurons, we manage to significantly improve the mapping between sensor reading and actuation force.
arXiv Detail & Related papers (2023-06-21T09:19:33Z)
- Unsupervised Statistical Feature-Guided Diffusion Model for Sensor-based Human Activity Recognition [3.2319909486685354]
A key problem holding up progress in wearable sensor-based human activity recognition is the unavailability of diverse and labeled training data.
We propose an unsupervised statistical feature-guided diffusion model specifically optimized for wearable sensor-based human activity recognition.
By conditioning the diffusion model on statistical information such as mean, standard deviation, Z-score, and skewness, we generate diverse and representative synthetic sensor data.
arXiv Detail & Related papers (2023-05-30T15:12:59Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
- Bandit Quickest Changepoint Detection [55.855465482260165]
Continuous monitoring of every sensor can be expensive due to resource constraints.
We derive an information-theoretic lower bound on the detection delay for a general class of finitely parameterized probability distributions.
We propose a computationally efficient online sensing scheme, which seamlessly balances the need for exploration of different sensing options with exploitation of querying informative actions.
arXiv Detail & Related papers (2021-07-22T07:25:35Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.