Multi-level Contrast Network for Wearables-based Joint Activity
Segmentation and Recognition
- URL: http://arxiv.org/abs/2208.07547v1
- Date: Tue, 16 Aug 2022 05:39:02 GMT
- Authors: Songpengcheng Xia, Lei Chu, Ling Pei, Wenxian Yu, Robert C. Qiu
- Abstract summary: Human activity recognition (HAR) with wearables is promising research that can be widely adopted in many smart healthcare applications.
Most HAR algorithms are susceptible to the multi-class windows problem, which is essential yet rarely explored.
We introduce the segmentation technology into HAR, yielding joint activity segmentation and recognition.
- Score: 10.828099015828693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human activity recognition (HAR) with wearables is promising research that
can be widely adopted in many smart healthcare applications. In recent years,
the deep learning-based HAR models have achieved impressive recognition
performance. However, most HAR algorithms are susceptible to the multi-class
windows problem, which is essential yet rarely explored. In this paper, we
propose to relieve this challenging problem by introducing the segmentation
technology into HAR, yielding joint activity segmentation and recognition.
Specifically, we introduce the Multi-Stage Temporal Convolutional Network
(MS-TCN) architecture for sample-level activity prediction, jointly segmenting
and recognizing the activity sequence. Furthermore, to enhance the robustness of HAR
against the inter-class similarity and intra-class heterogeneity, a multi-level
contrastive loss, containing the sample-level and segment-level contrast, has
been proposed to learn a well-structured embedding space for better activity
segmentation and recognition performance. Finally, with comprehensive
experiments, we verify the effectiveness of the proposed method on two public
HAR datasets, achieving significant improvements across various evaluation
metrics.
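The sample-level contrast described above can be sketched as a supervised contrastive loss over per-time-step embeddings: samples sharing an activity label are pulled together in the embedding space, while samples of different activities are pushed apart. The following NumPy sketch is illustrative only, not the paper's implementation; the function name, single-sequence batching, and temperature value are assumptions.

```python
import numpy as np

def sample_level_contrast(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss at the sample (time-step) level.

    embeddings: (T, D) array of per-sample feature vectors.
    labels:     (T,) array of integer activity labels.
    Samples with the same label act as positives for each other;
    all remaining samples act as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = emb @ emb.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs from the softmax

    # Row-wise log-softmax over all other samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    loss, count = 0.0, 0
    for i in range(len(labels)):
        pos = (labels == labels[i]) & (np.arange(len(labels)) != i)
        if pos.any():
            loss -= log_prob[i, pos].mean()  # pull positives together
            count += 1
    return loss / max(count, 1)
```

The segment-level contrast in the paper operates analogously but on pooled segment representations rather than individual time steps; the same loss form applies once segment embeddings replace the per-sample rows.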
Related papers
- Wearable-based behaviour interpolation for semi-supervised human activity recognition [27.895342617584085]
We introduce a deep semi-supervised Human Activity Recognition (HAR) approach, MixHAR, which concurrently uses labelled and unlabelled activities.
Our results demonstrate that MixHAR significantly improves performance, underscoring the potential of deep semi-supervised techniques in HAR.
arXiv Detail & Related papers (2024-05-24T22:21:24Z)
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation [8.46894039954642]
We propose a novel multi-scale token adaptation algorithm for interactive segmentation.
By performing top-k operations across multi-scale tokens, the computational complexity is greatly simplified.
We also propose a token learning algorithm based on contrastive loss.
arXiv Detail & Related papers (2024-01-09T07:59:42Z)
- Timestamp-supervised Wearable-based Activity Segmentation and Recognition with Contrastive Learning and Order-Preserving Optimal Transport [11.837401473598288]
We propose a novel method for joint activity segmentation and recognition with timestamp supervision.
The prototypes are estimated by class-activation maps to form a sample-to-prototype contrast module.
Comprehensive experiments on four public HAR datasets demonstrate that our model trained with timestamp supervision is superior to the state-of-the-art weakly-supervised methods.
arXiv Detail & Related papers (2023-10-13T14:00:49Z)
- Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal Human Activity Recognition [1.869225486385596]
We explore the hypothesis that leveraging multiple modalities can lead to better recognition.
We extend a number of recent contrastive self-supervised approaches for the task of Human Activity Recognition.
We propose a flexible, general-purpose framework for performing multimodal self-supervised learning.
arXiv Detail & Related papers (2022-05-20T10:39:16Z)
- Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z)
- Few-Shot Fine-Grained Action Recognition via Bidirectional Attention and Contrastive Meta-Learning [51.03781020616402]
Fine-grained action recognition is attracting increasing attention due to the emerging demand of specific action understanding in real-world applications.
We propose a few-shot fine-grained action recognition problem, aiming to recognize novel fine-grained actions with only few samples given for each class.
Although progress has been made in coarse-grained actions, existing few-shot recognition methods encounter two issues handling fine-grained actions.
arXiv Detail & Related papers (2021-08-15T02:21:01Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Attend And Discriminate: Beyond the State-of-the-Art for Human Activity Recognition using Wearable Sensors [22.786406177997172]
Wearables are fundamental to improving our understanding of human activities.
We rigorously explore new opportunities to learn enriched and highly discriminating activity representations.
Our contributions achieve new state-of-the-art performance on four diverse activity recognition benchmarks.
arXiv Detail & Related papers (2020-07-14T16:44:16Z)
- Spectrum-Guided Adversarial Disparity Learning [52.293230153385124]
We propose a novel end-to-end knowledge directed adversarial learning framework.
It portrays the class-conditioned intraclass disparity using two competitive encoding distributions and learns the purified latent codes by denoising learned disparity.
The experiments on four HAR benchmark datasets demonstrate the robustness and generalization of our proposed methods over a set of state-of-the-art baselines.
arXiv Detail & Related papers (2020-07-14T05:46:27Z)
- Heterogeneous Network Representation Learning: A Unified Framework with Survey and Benchmark [57.10850350508929]
We aim to provide a unified framework to summarize and evaluate existing research on heterogeneous network embedding (HNE).
As the first contribution, we provide a generic paradigm for the systematic categorization and analysis over the merits of various existing HNE algorithms.
As the second contribution, we create four benchmark datasets with various properties regarding scale, structure, attribute/label availability, etc., from different sources.
As the third contribution, we create friendly interfaces for 13 popular HNE algorithms, and provide all-around comparisons among them over multiple tasks and experimental settings.
arXiv Detail & Related papers (2020-04-01T03:42:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.