Multi-Source Deep Domain Adaptation with Weak Supervision for
Time-Series Sensor Data
- URL: http://arxiv.org/abs/2005.10996v1
- Date: Fri, 22 May 2020 04:16:58 GMT
- Title: Multi-Source Deep Domain Adaptation with Weak Supervision for
Time-Series Sensor Data
- Authors: Garrett Wilson, Janardhan Rao Doppa, Diane J. Cook
- Abstract summary: First, we propose a novel Convolutional deep Domain Adaptation model for Time Series data (CoDATS). Second, we propose a novel Domain Adaptation with Weak Supervision (DA-WS) method that utilizes weak supervision in the form of target-domain label distributions. Third, we perform comprehensive experiments on diverse real-world datasets to evaluate the effectiveness of our domain adaptation and weak supervision methods.
- Score: 31.43183992755392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) offers a valuable means to reuse data and models for
new problem domains. However, robust techniques have not yet been considered
for time series data with varying amounts of data availability. In this paper,
we make three main contributions to fill this gap. First, we propose a novel
Convolutional deep Domain Adaptation model for Time Series data (CoDATS) that
significantly improves accuracy and training time over state-of-the-art DA
strategies on real-world sensor data benchmarks. By utilizing data from
multiple source domains, we increase the usefulness of CoDATS to further
improve accuracy over prior single-source methods, particularly on complex time
series datasets that have high variability between domains. Second, we propose
a novel Domain Adaptation with Weak Supervision (DA-WS) method by utilizing
weak supervision in the form of target-domain label distributions, which may be
easier to collect than additional data labels. Third, we perform comprehensive
experiments on diverse real-world datasets to evaluate the effectiveness of our
domain adaptation and weak supervision methods. Results show that CoDATS for
single-source DA significantly improves over the state-of-the-art methods, and
we achieve additional improvements in accuracy using data from multiple source
domains and weakly supervised signals. Code is available at:
https://github.com/floft/codats
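The DA-WS idea described above, matching the model's aggregate predictions on unlabeled target data to a known target-domain label distribution, can be sketched as a simple auxiliary loss. The following NumPy sketch is an illustration under assumptions, not the paper's exact formulation (the function name, KL direction, batch size, and class proportions are hypothetical; the authors' implementation is in the linked repository):

```python
import numpy as np

def daws_loss(target_logits, known_label_dist, eps=1e-8):
    """Weak-supervision loss sketch: KL divergence between a known
    target-domain label distribution and the batch-averaged predicted
    class distribution on unlabeled target examples."""
    # Numerically stable softmax over classes for each target example.
    z = target_logits - target_logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Average predictions over the batch -> estimated label distribution.
    est_dist = probs.mean(axis=0)
    # KL(known || estimated): pushes aggregate predictions toward the
    # weakly supervised label proportions.
    return float(np.sum(known_label_dist *
                        np.log((known_label_dist + eps) / (est_dist + eps))))

# Hypothetical example: 32 unlabeled target windows, 3 activity classes,
# with known (weakly supervised) target label proportions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 3))
known = np.array([0.5, 0.3, 0.2])
loss = daws_loss(logits, known)
```

In practice such a term would be added to the adversarial domain-adaptation objective with a weighting coefficient; collecting class proportions is typically far cheaper than labeling individual time-series windows, which is the motivation the abstract gives.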
Related papers
- Gradual Source Domain Expansion for Unsupervised Domain Adaptation [45.207132297204424]
Unsupervised domain adaptation (UDA) tries to overcome the need for a large labeled dataset by transferring knowledge from a source dataset to a target dataset.
We propose a gradual source domain expansion (GSDE) algorithm to overcome this problem.
GSDE trains the UDA task several times from scratch, each time reinitializing the network weights, but each time expands the source dataset with target data.
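The GSDE loop summarized above (retraining from scratch each round while folding an increasing share of confidently pseudo-labeled target data into the source set) can be sketched with a toy nearest-centroid classifier. This is a minimal illustration of the expansion schedule only; the classifier, round count, and confidence heuristic are stand-ins, as the actual method uses deep networks:

```python
import numpy as np

def nearest_centroid_fit(X, y, n_classes):
    # "Training from scratch": recompute class centroids from the data.
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def nearest_centroid_predict(centroids, X):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    # Return predicted labels and a confidence score (negative distance).
    return d.argmin(axis=1), -d.min(axis=1)

def gsde(Xs, ys, Xt, n_classes, rounds=3):
    """Gradual source domain expansion sketch: each round reinitializes
    and retrains on source data plus a growing fraction of the most
    confidently pseudo-labeled target samples."""
    centroids = nearest_centroid_fit(Xs, ys, n_classes)
    for r in range(1, rounds + 1):
        pseudo, conf = nearest_centroid_predict(centroids, Xt)
        # Keep the most confident fraction r/rounds of target samples.
        k = int(len(Xt) * r / rounds)
        top = np.argsort(-conf)[:k]
        X_aug = np.concatenate([Xs, Xt[top]])
        y_aug = np.concatenate([ys, pseudo[top]])
        # Reinitialize and retrain on the expanded source set.
        centroids = nearest_centroid_fit(X_aug, y_aug, n_classes)
    return centroids
```

On a toy shifted-Gaussian target domain, each expansion round anchors the next retraining on target samples the previous model was most sure about, which is the mechanism the blurb describes.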
arXiv Detail & Related papers (2023-11-16T06:18:35Z)
- T-UDA: Temporal Unsupervised Domain Adaptation in Sequential Point Clouds [2.5291108878852864]
Unsupervised domain adaptation (UDA) methods adapt models trained on one (source) domain with annotations available to another (target) domain for which only unannotated data are available.
We introduce a novel domain adaptation method that leverages the best of both trends. Dubbed T-UDA for "temporal UDA", such a combination yields massive performance gains for the task of 3D semantic segmentation of driving scenes.
arXiv Detail & Related papers (2023-09-15T10:47:12Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data in a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- Domain Alignment Meets Fully Test-Time Adaptation [24.546705919244936]
A foundational requirement of a deployed ML model is to generalize to data drawn from a testing distribution that is different from training.
In this paper, we focus on a challenging variant of this problem, where access to the original source data is restricted.
We propose a new approach, CATTAn, that bridges UDA and FTTA, by relaxing the need to access entire source data.
arXiv Detail & Related papers (2022-07-09T03:17:19Z)
- Back to the Source: Diffusion-Driven Test-Time Adaptation [77.4229736436935]
Test-time adaptation harnesses test inputs to improve accuracy of a model trained on source data when tested on shifted target data.
We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model.
arXiv Detail & Related papers (2022-07-07T17:14:10Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Online Meta-Learning for Multi-Source and Semi-Supervised Domain Adaptation [4.1799778475823315]
We propose a framework to enhance performance by meta-learning the initial conditions of existing DA algorithms.
We present variants for both multi-source unsupervised domain adaptation (MSDA) and semi-supervised domain adaptation (SSDA).
We achieve state of the art results on several DA benchmarks including the largest scale DomainNet.
arXiv Detail & Related papers (2020-04-09T07:48:22Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.