Guided-GAN: Adversarial Representation Learning for Activity Recognition with Wearables
- URL: http://arxiv.org/abs/2110.05732v1
- Date: Tue, 12 Oct 2021 04:29:21 GMT
- Title: Guided-GAN: Adversarial Representation Learning for Activity Recognition with Wearables
- Authors: Alireza Abedin, Hamid Rezatofighi, Damith C. Ranasinghe
- Abstract summary: We explore generative adversarial network (GAN) paradigms to learn unsupervised feature representations from wearable sensor data, and design a new framework, the Geometrically-Guided GAN (Guided-GAN), for the task.
Result: Guided-GAN outperforms existing unsupervised approaches whilst closely approaching the performance of fully supervised representations.
- Score: 9.399840807973545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human activity recognition (HAR) is an important research field in ubiquitous
computing where the acquisition of large-scale labeled sensor data is tedious,
labor-intensive and time-consuming. State-of-the-art unsupervised remedies
investigated to alleviate the burden of data annotation in HAR mainly explore
training autoencoder frameworks. In this paper, we explore generative
adversarial network (GAN) paradigms to learn unsupervised feature
representations from wearable sensor data; and design a new GAN
framework-Geometrically-Guided GAN or Guided-GAN-for the task. To demonstrate
the effectiveness of our formulation, we evaluate the features learned by
Guided-GAN in an unsupervised manner on three downstream classification
benchmarks. Our results demonstrate that Guided-GAN outperforms existing
unsupervised approaches whilst closely approaching the performance of fully
supervised representations. The proposed approach paves the way to
bridge the gap between unsupervised and supervised human activity recognition
whilst helping to reduce the cost of human data annotation tasks.
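As a rough illustration of the recipe the abstract describes (a sketch under stated assumptions, not the authors' implementation), the PyTorch snippet below trains a plain GAN on unlabeled sensor windows and reuses the discriminator's convolutional features as the unsupervised representation. The window length, channel count, layer sizes, and the vanilla GAN objective are all illustrative assumptions; the paper's geometric guidance is not reproduced here.

```python
import torch
import torch.nn as nn

WIN, CH, Z = 128, 6, 64  # assumed: 128-sample windows, 6 IMU channels, latent size

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z, 256), nn.ReLU(),
            nn.Linear(256, WIN * CH), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, CH, WIN)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(  # reused later as the feature extractor
            nn.Conv1d(CH, 32, 9, stride=2, padding=4), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, 9, stride=2, padding=4), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(64, 1)  # real/fake logit

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):  # real: (B, CH, WIN) batch of unlabeled sensor windows
    b = real.size(0)
    fake = G(torch.randn(b, Z))
    # discriminator update: separate real from generated windows
    d_loss = bce(D(real)[0], torch.ones(b, 1)) + \
             bce(D(fake.detach())[0], torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator update: fool the discriminator
    g_loss = bce(D(fake)[0], torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

After unsupervised training, `D.features(x)` would serve as the learned embedding and feed, e.g., a linear classifier on the labeled downstream benchmarks.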
Related papers
- Towards Unsupervised Representation Learning: Learning, Evaluating and Transferring Visual Representations [1.8130068086063336]
We contribute to the field of unsupervised (visual) representation learning from three perspectives.
We design unsupervised, backpropagation-free Convolutional Self-Organizing Neural Networks (CSNNs).
We build upon the widely used (non-)linear evaluation protocol to define pretext- and target-objective-independent metrics.
We contribute CARLANE, the first 3-way sim-to-real domain adaptation benchmark for 2D lane detection, and a method based on self-supervised learning.
arXiv Detail & Related papers (2023-11-30T15:57:55Z)
- Sequential Action-Induced Invariant Representation for Reinforcement Learning [1.2046159151610263]
How to accurately learn task-relevant state representations from high-dimensional observations with visual distractions is a challenging problem in visual reinforcement learning.
We propose a Sequential Action-induced invariant Representation (SAR) method, in which the encoder is optimized by an auxiliary learner to only preserve the components that follow the control signals of sequential actions.
arXiv Detail & Related papers (2023-09-22T05:31:55Z)
- Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning [66.00600682711995]
Human object interaction (HOI) detection plays a crucial role in human-centric scene understanding and serves as a fundamental building-block for many vision tasks.
One generalizable and scalable strategy for HOI detection is to use weak supervision, learning from image-level annotations only.
This is inherently challenging due to ambiguous human-object associations, the large search space of detecting HOIs, and highly noisy training signals.
We develop a CLIP-guided HOI representation capable of incorporating the prior knowledge at both image level and HOI instance level, and adopt a self-taught mechanism to prune incorrect human-object associations.
arXiv Detail & Related papers (2023-03-02T14:41:31Z)
- Reinforcement Learning with Prototypical Representations [114.35801511501639]
Proto-RL is a self-supervised framework that ties representation learning with exploration through prototypical representations.
These prototypes simultaneously serve as a summarization of the exploratory experience of an agent as well as a basis for representing observations.
This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
arXiv Detail & Related papers (2021-02-22T18:56:34Z)
- Disambiguation of weak supervision with exponential convergence rates [88.99819200562784]
In weakly supervised learning, data are annotated with incomplete yet discriminative information.
In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets.
We propose an empirical disambiguation algorithm to recover full supervision from weak supervision.
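For intuition, the snippet below sketches one generic disambiguation heuristic (an assumption for illustration; it is not the paper's algorithm and carries none of its convergence guarantees): alternate between fitting a classifier on the current label guesses and re-selecting, per example, the candidate label the classifier currently favors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def disambiguate(X, candidate_sets, n_rounds=10):
    # X: (n, d) features; candidate_sets: one list of candidate labels per example
    y = np.array([np.random.choice(c) for c in candidate_sets])  # random init
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(X, y)                       # fit on current guesses
        proba = clf.predict_proba(X)
        classes = list(clf.classes_)
        for i, cands in enumerate(candidate_sets):
            seen = [c for c in cands if c in classes]
            if seen:                        # keep the candidate the model favors
                y[i] = max(seen, key=lambda c: proba[i, classes.index(c)])
    return clf, y
```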
arXiv Detail & Related papers (2021-02-04T18:14:32Z)
- Contrastive Predictive Coding for Human Activity Recognition [5.766384728949437]
We introduce the Contrastive Predictive Coding framework to human activity recognition, which captures the long-term temporal structure of sensor data streams.
CPC-based pre-training is self-supervised, and the resulting learned representations can be integrated into standard activity recognition chains.
It leads to significantly improved recognition performance when only small amounts of labeled training data are available.
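A compressed sketch of the idea follows, with assumed shapes and a single-step prediction head (the paper's encoder and multi-step prediction setup differ): short blocks of the sensor stream are encoded, a GRU summarizes the past, and an InfoNCE loss scores the true next latent against in-batch negatives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CH, HID = 6, 64  # assumed: 6 sensor channels, 64-dim latents

encoder = nn.Sequential(nn.Conv1d(CH, HID, 8, stride=4), nn.ReLU(),
                        nn.AdaptiveAvgPool1d(1), nn.Flatten())  # block -> z_t
context = nn.GRU(HID, HID, batch_first=True)                    # z_1..z_t -> c_t
predict = nn.Linear(HID, HID)                                   # c_t -> guess of z_{t+1}

def cpc_loss(blocks):
    # blocks: (B, T, CH, L) — B streams, each split into T short blocks (L >= 8)
    B, T = blocks.shape[:2]
    z = encoder(blocks.flatten(0, 1)).view(B, T, HID)  # per-block latents
    c, _ = context(z[:, :-1])                          # summarize blocks 1..T-1
    pred = predict(c[:, -1])                           # predict the final latent
    logits = pred @ z[:, -1].t()                       # (B, B) similarity scores
    return F.cross_entropy(logits, torch.arange(B))    # positives on the diagonal
```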
arXiv Detail & Related papers (2020-12-09T21:44:36Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Towards Deep Clustering of Human Activities from Wearables [21.198881633580797]
We develop an unsupervised end-to-end learning strategy for the fundamental problem of human activity recognition from wearables.
We show the effectiveness of our approach to jointly learn unsupervised representations for sensory data and generate cluster assignments with strong semantic correspondence to distinct human activities.
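As a toy stand-in for this pipeline (the abstract does not specify the architecture, and the snippet below deliberately swaps in a PCA embedding plus k-means for brevity, whereas the paper learns the representation and cluster assignments jointly, end-to-end):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# placeholder data: 1000 flattened sensor windows of 128 samples x 6 channels
windows = np.random.randn(1000, 128 * 6)

emb = PCA(n_components=16).fit_transform(windows)          # unsupervised embedding
labels = KMeans(n_clusters=6, n_init=10).fit_predict(emb)  # 6 assumed activities
```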
arXiv Detail & Related papers (2020-08-02T13:55:24Z)
- Spectrum-Guided Adversarial Disparity Learning [52.293230153385124]
We propose a novel end-to-end knowledge-directed adversarial learning framework.
It portrays the class-conditioned intraclass disparity using two competitive encoding distributions and learns the purified latent codes by denoising learned disparity.
The experiments on four HAR benchmark datasets demonstrate the robustness and generalization of our proposed methods over a set of state-of-the-art baselines.
arXiv Detail & Related papers (2020-07-14T05:46:27Z)
- Domain-Guided Task Decomposition with Self-Training for Detecting Personal Events in Social Media [11.638298634523945]
Mining social media for tasks such as detecting personal experiences or events suffers from lexical sparsity, insufficient training data, and inventive lexicons.
To reduce the burden of creating extensive labeled data, we propose to perform these tasks in two steps: 1. decomposing the task into domain-specific sub-tasks by identifying key concepts, thus utilizing human domain understanding; and 2. combining the results of learners for each key concept using co-training to reduce the requirements for labeled training data.
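Step 2 can be pictured with a generic two-view co-training loop (the sketch below is an assumption for illustration: Gaussian naive Bayes learners and confidence-based selection stand in for the paper's per-concept learners):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, U1, U2, rounds=5, k=10):
    # X1/X2: two concept-specific views of the labeled data; U1/U2: unlabeled views
    X, U = [X1, X2], [U1, U2]
    for _ in range(rounds):
        for a in (0, 1):                       # each learner labels data in turn
            if len(U[a]) == 0:
                break
            clf = GaussianNB().fit(X[a], y)
            proba = clf.predict_proba(U[a])
            pick = np.argsort(proba.max(axis=1))[-k:]          # most confident
            y = np.concatenate([y, clf.classes_[proba[pick].argmax(axis=1)]])
            for v in (0, 1):                   # grow both views in lockstep
                X[v] = np.vstack([X[v], U[v][pick]])
                U[v] = np.delete(U[v], pick, axis=0)
    return GaussianNB().fit(X[0], y), GaussianNB().fit(X[1], y)
```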
arXiv Detail & Related papers (2020-04-21T14:50:31Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
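For flavor, here is a minimal mutual-information-maximization step in the Deep-InfoMax style, assumed for illustration (a JSD-type bound with shuffled negatives; the paper's GMI measure and graph-neural encoder are not reproduced, and the MLP below ignores graph structure entirely):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D_IN, D_REP = 32, 16  # assumed input-feature and representation sizes

encoder = nn.Sequential(nn.Linear(D_IN, 64), nn.ReLU(), nn.Linear(64, D_REP))
critic = nn.Sequential(nn.Linear(D_IN + D_REP, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(encoder.parameters()) + list(critic.parameters()), lr=1e-3)

def mi_step(x):  # x: (B, D_IN) input features
    h = encoder(x)
    pos = critic(torch.cat([x, h], dim=1))                          # matched pairs
    neg = critic(torch.cat([x, h[torch.randperm(len(x))]], dim=1))  # shuffled pairs
    # minimizing this tightens a JSD-style lower bound on I(x; h)
    loss = F.softplus(-pos).mean() + F.softplus(neg).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```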
arXiv Detail & Related papers (2020-02-04T08:33:49Z)