My Health Sensor, my Classifier: Adapting a Trained Classifier to Unlabeled End-User Data
- URL: http://arxiv.org/abs/2009.10799v1
- Date: Tue, 22 Sep 2020 20:27:35 GMT
- Title: My Health Sensor, my Classifier: Adapting a Trained Classifier to Unlabeled End-User Data
- Authors: Konstantinos Nikolaidis, Stein Kristiansen, Thomas Plagemann, Vera
Goebel, Knut Liestøl, Mohan Kankanhalli, Gunn Marit Traaen, Britt
Øverland, Harriet Akre, Lars Aakerøy, Sigurd Steinshamn
- Abstract summary: In this work, we present an approach for unsupervised domain adaptation (DA) under the constraint that the labeled source data are not directly available.
Our solution iteratively labels only high-confidence sub-regions of the target data distribution, based on the belief of the classifier.
The goal is to apply the proposed approach to DA for the task of sleep apnea detection and achieve personalization based on the needs of the patient.
- Score: 0.5091527753265949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present an approach for unsupervised domain adaptation (DA)
under the constraint that the labeled source data are not directly available;
instead, only access to a classifier trained on the source data is provided.
Our solution iteratively labels only high-confidence sub-regions of the target
data distribution, based on the belief of the classifier, and then iteratively
learns new classifiers from the expanding high-confidence dataset. The goal is
to apply the proposed approach to DA for the task of sleep apnea detection and
achieve personalization based on the needs of the patient. In a series of
experiments with both open and closed sleep monitoring datasets, the proposed
approach is applied to data from different sensors, for DA between the
different datasets. In all experiments, the proposed approach outperforms the
classifier trained in the source domain, with an improvement of the kappa
coefficient that varies from 0.012 to 0.242. Additionally, our solution is
applied to digit classification DA between three well-established digit
datasets, to investigate the generalizability of the approach and to allow for
comparison with related work. Even without direct access to the source data, it
achieves good results and outperforms several well-established unsupervised DA
methods.
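The iterative scheme described in the abstract can be sketched in a few lines: predict on the target data with the source-trained classifier, keep only the high-confidence sub-region, and retrain on that expanding pseudo-labeled set. This is a minimal illustration, not the paper's implementation; the nearest-centroid classifier, the 0.9 confidence threshold, and the synthetic two-Gaussian domains are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_fit(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def centroid_predict_conf(X, centroids):
    """Return predicted labels and a softmax-style confidence from distances."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    return p.argmax(axis=1), p.max(axis=1)

# Source domain: two Gaussian classes; target domain: same classes, shifted.
Xs = np.concatenate([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
ys = np.array([0] * 200 + [1] * 200)
Xt = np.concatenate([rng.normal(1, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
yt = np.array([0] * 200 + [1] * 200)  # held out, used only for evaluation

model = centroid_fit(Xs, ys)              # the "classifier trained on source data"
for _ in range(5):                        # iterative self-labeling rounds
    pred, conf = centroid_predict_conf(Xt, model)
    mask = conf > 0.9                     # keep only the high-confidence sub-region
    if np.unique(pred[mask]).size < 2:    # need both classes to refit
        break
    model = centroid_fit(Xt[mask], pred[mask])  # retrain on the confident subset

pred, _ = centroid_predict_conf(Xt, model)
acc = (pred == yt).mean()
```

On this toy shift, the refitted centroids migrate toward the target clusters, so the adapted model recovers high target accuracy without ever seeing source labels again.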
Related papers
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Self-Trained One-class Classification for Unsupervised Anomaly Detection [56.35424872736276]
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms the state-of-the-art one-class classification method by 6.3 AUC and 12.5 average precision.
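The data-refinement idea summarized above can be illustrated with a minimal sketch: repeatedly fit a simple model to the kept samples and drop the ones it scores as most anomalous, so the "normal" training set gets cleaner each round. The center-distance scorer, the 10% refinement ratio, and the round count are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (300, 2))
anomalies = rng.normal(6, 1, (30, 2))           # contamination in the training set
X = np.concatenate([normal, anomalies])

# Iterative data refinement: refit a center, drop the farthest 10%, repeat.
kept = X
for _ in range(3):
    center = kept.mean(axis=0)
    dist = np.linalg.norm(kept - center, axis=1)
    kept = kept[dist < np.quantile(dist, 0.9)]  # discard likely anomalies

# One-class scoring with the refined "normal" set: distance to its center.
center = kept.mean(axis=0)
scores = np.linalg.norm(X - center, axis=1)     # higher score = more anomalous
flagged = scores > np.quantile(scores, 0.9)
```

Because the contaminated points lie far from the bulk of the data, the refinement rounds remove them from the training set, and the final detector flags them while leaving most normal samples unflagged.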
arXiv Detail & Related papers (2021-06-11T01:36:08Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Domain Impression: A Source Data Free Domain Adaptation Method [27.19677042654432]
Unsupervised domain adaptation methods solve the adaptation problem for an unlabeled target set, assuming that the source dataset is available with all labels.
This paper proposes a domain adaptation technique that does not need any source data.
Instead of the source data, we are only provided with a classifier that is trained on the source data.
arXiv Detail & Related papers (2021-02-17T19:50:49Z)
- Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics [85.75352990739154]
We propose a novel domain adaptation method for the source-free setting.
We use batch normalization statistics stored in the pretrained model to approximate the distribution of unobserved source data.
Our method achieves competitive performance with state-of-the-art domain adaptation methods.
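The alignment idea summarized above can be sketched by treating each feature's stored BN running mean and variance as a Gaussian and penalizing the divergence of the target batch statistics from it. This is a hedged illustration of the general principle; the Gaussian-KL loss form and all names here are assumptions, not the paper's exact objective.

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Per-feature KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def bn_alignment_loss(features, running_mean, running_var, eps=1e-5):
    """Sum of per-feature KLs between target batch stats and stored BN stats."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + eps
    return gaussian_kl(mu, var, running_mean, running_var + eps).sum()

rng = np.random.default_rng(0)
stored_mean, stored_var = np.zeros(4), np.ones(4)  # stats kept in the pretrained model
aligned = rng.normal(0.0, 1.0, (256, 4))           # target batch matching those stats
shifted = rng.normal(2.0, 1.0, (256, 4))           # domain-shifted target batch
loss_aligned = bn_alignment_loss(aligned, stored_mean, stored_var)
loss_shifted = bn_alignment_loss(shifted, stored_mean, stored_var)
```

Minimizing such a loss over the target feature extractor would pull target activations toward the distribution the pretrained model saw, without ever touching the source data itself.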
arXiv Detail & Related papers (2021-01-19T14:22:33Z)
- Deep Adversarial Domain Adaptation Based on Multi-layer Joint Kernelized Distance [30.452492118887182]
Domain adaptation refers to the learning scenario in which a model learned from the source data is applied to the target data.
The distribution discrepancy between source data and target data can substantially affect the adaptation performance.
A deep adversarial domain adaptation model based on a multi-layer joint kernelized distance metric is proposed.
arXiv Detail & Related papers (2020-10-09T02:32:48Z)
- Deep Active Learning for Biased Datasets via Fisher Kernel Self-Supervision [5.352699766206807]
Active learning (AL) aims to minimize labeling efforts for data-demanding deep neural networks (DNNs).
We propose a low-complexity method for feature density matching using a self-supervised Fisher kernel (FK).
Our method outperforms state-of-the-art methods on MNIST, SVHN, and ImageNet classification while requiring only 1/10th of the processing.
arXiv Detail & Related papers (2020-03-01T03:56:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.