Attend And Discriminate: Beyond the State-of-the-Art for Human Activity
Recognition using Wearable Sensors
- URL: http://arxiv.org/abs/2007.07172v1
- Date: Tue, 14 Jul 2020 16:44:16 GMT
- Title: Attend And Discriminate: Beyond the State-of-the-Art for Human Activity
Recognition using Wearable Sensors
- Authors: Alireza Abedin, Mahsa Ehsanpour, Qinfeng Shi, Hamid Rezatofighi,
Damith C. Ranasinghe
- Abstract summary: Wearables are fundamental to improving our understanding of human activities.
We rigorously explore new opportunities to learn enriched and highly discriminating activity representations.
Our contributions achieve new state-of-the-art performance on four diverse activity recognition benchmarks.
- Score: 22.786406177997172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wearables are fundamental to improving our understanding of human activities,
especially for an increasing number of healthcare applications from
rehabilitation to fine-grained gait analysis. Although our collective know-how
to solve Human Activity Recognition (HAR) problems with wearables has
progressed immensely with end-to-end deep learning paradigms, several
fundamental opportunities remain overlooked. We rigorously explore these new
opportunities to learn enriched and highly discriminating activity
representations. We propose: i) learning to exploit the latent relationships
between multi-channel sensor modalities and specific activities; ii)
investigating the effectiveness of data-agnostic augmentation for multi-modal
sensor data streams to regularize deep HAR models; and iii) incorporating a
classification loss criterion to encourage minimal intra-class representation
differences whilst maximising inter-class differences to achieve more
discriminative features. Our contributions achieve new state-of-the-art
performance on four diverse activity recognition benchmarks by large margins --
with up to 6% relative improvement. We validate our design concepts through
extensive experiments, including activity misalignment measures, ablation
studies, and insights shared through both quantitative and qualitative studies.
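The abstract does not spell out the exact augmentation or loss used, but contributions (ii) and (iii) admit simple, well-known instantiations. As a hedged sketch only: a mixup-style data-agnostic augmentation and a center-loss-style criterion, written in plain Python; all function names here are illustrative and not taken from the paper.

```python
def mixup_pair(x_a, x_b, lam):
    """Convex combination of two flattened sensor windows.

    A data-agnostic augmentation: it needs no knowledge of which
    channels belong to which sensor modality, matching the spirit
    of contribution (ii).
    """
    return [lam * a + (1 - lam) * b for a, b in zip(x_a, x_b)]


def center_loss(features, labels, centers):
    """Mean squared distance of each embedding to its class center.

    Minimizing this shrinks intra-class representation differences;
    paired with the usual cross-entropy term (which pushes classes
    apart), it yields the discriminative effect described in
    contribution (iii).
    """
    total = 0.0
    for f, y in zip(features, labels):
        total += sum((fi - ci) ** 2 for fi, ci in zip(f, centers[y]))
    return total / (2 * len(features))
```

In practice the class centers would be learned jointly with the network and the two loss terms traded off with a weighting coefficient; the sketch above only shows the shape of the computation.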
Related papers
- Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts
in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs)
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z)
- A Matter of Annotation: An Empirical Study on In Situ and Self-Recall
Activity Annotations from Wearable Sensors [56.554277096170246]
We present an empirical study that evaluates and contrasts four commonly employed annotation methods in user studies focused on in-the-wild data collection.
For both the user-driven, in situ annotations, where participants annotate their activities during the actual recording process, and the recall methods, where participants retrospectively annotate their data at the end of each day, the participants had the flexibility to select their own set of activity classes and corresponding labels.
arXiv Detail & Related papers (2023-05-15T16:02:56Z)
- TASKED: Transformer-based Adversarial learning for human activity
recognition using wearable sensors via Self-KnowledgE Distillation [6.458496335718508]
We propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED)
In the proposed method, we adopt the teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition.
arXiv Detail & Related papers (2022-09-14T11:08:48Z)
- Multi-level Contrast Network for Wearables-based Joint Activity
Segmentation and Recognition [10.828099015828693]
Human activity recognition (HAR) with wearables is promising research that can be widely adopted in many smart healthcare applications.
Most HAR algorithms are susceptible to the multi-class windows problem, an issue that is essential yet rarely explored.
We introduce the segmentation technology into HAR, yielding joint activity segmentation and recognition.
arXiv Detail & Related papers (2022-08-16T05:39:02Z)
- Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal
Human Activity Recognition [1.869225486385596]
We explore the hypothesis that leveraging multiple modalities can lead to better recognition.
We extend a number of recent contrastive self-supervised approaches for the task of Human Activity Recognition.
We propose a flexible, general-purpose framework for performing multimodal self-supervised learning.
arXiv Detail & Related papers (2022-05-20T10:39:16Z)
- ACP++: Action Co-occurrence Priors for Human-Object Interaction
Detection [102.9428507180728]
A common problem in the task of human-object interaction (HOI) detection is that numerous HOI classes have only a small number of labeled examples.
We observe that there exist natural correlations and anti-correlations among human-object interactions.
We present techniques to learn these priors and leverage them for more effective training, especially on rare classes.
arXiv Detail & Related papers (2021-09-09T06:02:50Z)
- Few-Shot Fine-Grained Action Recognition via Bidirectional Attention and
Contrastive Meta-Learning [51.03781020616402]
Fine-grained action recognition is attracting increasing attention due to the emerging demand of specific action understanding in real-world applications.
We propose a few-shot fine-grained action recognition problem, aiming to recognize novel fine-grained actions with only few samples given for each class.
Although progress has been made in coarse-grained actions, existing few-shot recognition methods encounter two issues handling fine-grained actions.
arXiv Detail & Related papers (2021-08-15T02:21:01Z)
- Weakly-supervised Multi-task Learning for Multimodal Affect Recognition [33.7929682119287]
We propose to leverage datasets using weakly-supervised multi-task learning to improve generalization performance.
Specifically, we explore three multimodal affect recognition tasks: 1) emotion recognition; 2) sentiment analysis; and 3) sarcasm recognition.
Our experimental results show that multi-tasking can benefit all these tasks, achieving improvements of up to 2.9% in accuracy and 3.3% in F1-score.
arXiv Detail & Related papers (2021-04-23T12:36:19Z)
- Detecting Human-Object Interactions with Action Co-occurrence Priors [108.31956827512376]
A common problem in human-object interaction (HOI) detection task is that numerous HOI classes have only a small number of labeled examples.
We observe that there exist natural correlations and anti-correlations among human-object interactions.
We present techniques to learn these priors and leverage them for more effective training, especially in rare classes.
arXiv Detail & Related papers (2020-07-17T02:47:45Z)
- Spectrum-Guided Adversarial Disparity Learning [52.293230153385124]
We propose a novel end-to-end knowledge directed adversarial learning framework.
It portrays the class-conditioned intraclass disparity using two competitive encoding distributions and learns the purified latent codes by denoising learned disparity.
The experiments on four HAR benchmark datasets demonstrate the robustness and generalization of our proposed methods over a set of state-of-the-art baselines.
arXiv Detail & Related papers (2020-07-14T05:46:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all generated content) and is not responsible for any consequences of its use.