Disentangled Adversarial Transfer Learning for Physiological Biosignals
- URL: http://arxiv.org/abs/2004.08289v1
- Date: Wed, 15 Apr 2020 01:56:56 GMT
- Title: Disentangled Adversarial Transfer Learning for Physiological Biosignals
- Authors: Mo Han, Ozan Ozdenizci, Ye Wang, Toshiaki Koike-Akino, Deniz Erdogmus
- Abstract summary: We propose an adversarial inference approach for transfer learning to extract disentangled nuisance-robust representations from physiological biosignal data.
Results on cross-subject transfer evaluations demonstrate the benefits of the proposed adversarial framework.
- Score: 24.02384472840036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in wearable sensors demonstrate promising results for monitoring physiological status in effective and comfortable ways. One major challenge of physiological status assessment is the problem of transfer learning caused by the domain inconsistency of biosignals across users or across different recording sessions from the same user. We propose an adversarial inference approach for transfer learning to extract disentangled nuisance-robust representations from physiological biosignal data for stress status level assessment. We exploit the trade-off between task-related features and person-discriminative information by using both an adversary network and a nuisance network to jointly manipulate and disentangle the latent representations learned by the encoder, which are then fed to a discriminative classifier. Results on cross-subject transfer evaluations demonstrate the benefits of the proposed adversarial framework and show its capability to adapt to a broader range of subjects. Finally, we highlight that the proposed adversarial transfer learning approach is also applicable to other deep feature learning frameworks.
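To make the described architecture concrete, below is a minimal PyTorch sketch of this kind of adversarial disentanglement setup: an encoder splits its latent code into a task-related part and a nuisance part, an adversary is trained to predict subject identity from the task-related part (and the encoder is penalized when it succeeds), while a nuisance network keeps subject information in the nuisance part before a discriminative classifier predicts the stress label. The latent split, layer sizes, loss weight `lam`, and the alternating update scheme are illustrative assumptions, not the authors' exact architecture or objective.

```python
# Hypothetical sketch of adversarial disentanglement for biosignal features.
# Assumptions (not from the paper): feature/latent dimensions, layer widths,
# number of subjects/classes, loss weight, and alternating optimization.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 64  # total latent size; first half task-related, second half nuisance

class Encoder(nn.Module):
    """Maps a precomputed biosignal feature vector to a split latent code."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, LATENT))

    def forward(self, x):
        z = self.net(x)
        return z[:, :LATENT // 2], z[:, LATENT // 2:]  # (task part, nuisance part)

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

enc = Encoder(in_dim=32)          # 32-dim biosignal features (illustrative)
task_clf = mlp(LATENT // 2, 3)    # e.g. 3 stress-status levels
adversary = mlp(LATENT // 2, 10)  # predicts subject ID from the task part
nuisance = mlp(LATENT // 2, 10)   # predicts subject ID from the nuisance part

opt_main = torch.optim.Adam(
    [*enc.parameters(), *task_clf.parameters(), *nuisance.parameters()], lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
lam = 0.1  # weight of the adversarial term (illustrative)

def train_step(x, y_task, y_subj):
    # (1) Train the adversary to recover subject identity from the task code.
    z_task, _ = enc(x)
    opt_adv.zero_grad()
    F.cross_entropy(adversary(z_task.detach()), y_subj).backward()
    opt_adv.step()

    # (2) Train encoder + classifiers: predict the task label well, keep subject
    #     information in the nuisance code, and fool the adversary on the task code.
    z_task, z_nui = enc(x)
    loss = (F.cross_entropy(task_clf(z_task), y_task)
            + F.cross_entropy(nuisance(z_nui), y_subj)
            - lam * F.cross_entropy(adversary(z_task), y_subj))
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
    return loss.item()
```

Here `x` is a batch of biosignal feature vectors and `y_task`/`y_subj` are stress and subject labels; the alternating adversary update is used for clarity, and a gradient-reversal layer is a common equivalent way to implement the same min-max trade-off.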
Related papers
- Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning
  Error backpropagation has faced criticism for its lack of biological plausibility.
  We propose counter-current learning (CCL), a biologically plausible framework for credit assignment in neural networks.
  Our work presents a direction for biologically inspired and plausible learning algorithms, offering an alternative mechanism of learning and adaptation in neural networks.
  arXiv Detail & Related papers (2024-09-30T00:47:13Z)
- Evaluating the structure of cognitive tasks with transfer learning
  This study investigates the transferability of deep learning representations between different EEG decoding tasks.
  We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
  arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits
  This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
  Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
  arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation
  This paper introduces several building blocks that use representation learning to handle the heterogeneous feature spaces.
  We show how these building blocks can be used to recover transfer learning equivalents of the standard CATE learners.
  arXiv Detail & Related papers (2022-10-08T16:41:02Z)
- Discriminative Attribution from Counterfactuals
  We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
  We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
  arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention
  We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
  A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
  arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Discriminative Singular Spectrum Classifier with Applications on Bioacoustic Signal Recognition
  We present a bioacoustic signal classifier equipped with a discriminative mechanism to extract useful features for analysis and classification efficiently.
  Unlike current bioacoustic recognition methods, which are task-oriented, the proposed model relies on transforming the input signals into vector subspaces.
  The validity of the proposed method is verified using three challenging bioacoustic datasets containing anuran, bee, and mosquito species.
  arXiv Detail & Related papers (2021-03-18T11:01:21Z)
- Universal Physiological Representation Learning with Soft-Disentangled Rateless Autoencoders
  We propose a method of adversarial feature encoding with the concept of a Rateless Autoencoder (RAE).
  We achieve a good trade-off between user-specific and task-relevant features by adopting additional adversarial networks.
  Results on cross-subject transfer evaluations show the advantages of the proposed framework, with up to an 11.6% improvement in the average subject-transfer classification accuracy.
  arXiv Detail & Related papers (2020-09-28T16:25:12Z)
- Disentangled Adversarial Autoencoder for Subject-Invariant Physiological Feature Extraction
  We propose an adversarial feature extractor for transfer learning to exploit disentangled universal representations.
  Results on cross-subject transfer evaluations exhibit the benefits of the proposed framework, with up to 8.8% improvement in average classification accuracy.
  arXiv Detail & Related papers (2020-08-26T07:45:24Z)
- Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning
  We consider the theoretical problem of designing an optimal adversarial attack on a decision system.
  We present derivations of the optimal adversarial attacks for discrete and continuous signals of interest.
  We show that it is much harder to achieve adversarial attacks for minimizing mutual information when multiple redundant copies of the input signal are available.
  arXiv Detail & Related papers (2020-07-28T07:45:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.