Representation Learning for Tablet and Paper Domain Adaptation in Favor
of Online Handwriting Recognition
- URL: http://arxiv.org/abs/2301.06293v1
- Date: Mon, 16 Jan 2023 07:48:37 GMT
- Title: Representation Learning for Tablet and Paper Domain Adaptation in Favor
of Online Handwriting Recognition
- Authors: Felix Ott and David Rügamer and Lucas Heublein and Bernd Bischl and
Christopher Mutschler
- Abstract summary: The performance of a machine learning model degrades when it is applied to data from a similar but different domain than the data it has initially been trained on.
This paper proposes a supervised domain adaptation (DA) approach to enhance learning for online handwriting (OnHW) recognition between tablet and paper data.
- Score: 3.071136270246468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of a machine learning model degrades when it is applied to
data from a similar but different domain than the data it has initially been
trained on. The goal of domain adaptation (DA) is to mitigate this domain shift
problem by searching for an optimal feature transformation to learn a
domain-invariant representation. Such a domain shift can appear in handwriting
recognition (HWR) applications where the motion pattern of the hand and with
that the motion pattern of the pen is different for writing on paper and on
tablet. This becomes visible in the sensor data for online handwriting (OnHW)
from pens with integrated inertial measurement units. This paper proposes a
supervised DA approach to enhance learning for OnHW recognition between tablet
and paper data. Our method exploits loss functions such as maximum mean
discrepancy and correlation alignment to learn a domain-invariant feature
representation (i.e., similar covariances between tablet and paper features).
We use a triplet loss that takes negative samples from the auxiliary domain
(i.e., paper samples) to increase the number of samples in the tablet dataset.
We conduct an evaluation on novel sequence-based OnHW datasets (i.e., words)
and show an improvement on the paper domain with an early fusion strategy by
using pairwise learning.
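The loss functions named in the abstract can be sketched concretely. The following is a minimal NumPy illustration of a linear-kernel maximum mean discrepancy, a CORAL-style correlation alignment term, and a triplet loss with paper samples as negatives; it is a generic sketch of these standard losses, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of the three losses the abstract
# mentions, operating on batches of feature vectors from each domain.
import numpy as np

def mmd_linear(Xs, Xt):
    """Linear-kernel MMD: squared distance between domain feature means."""
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(delta @ delta)

def coral(Xs, Xt):
    """Correlation alignment: distance between domain feature covariances."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return float(np.sum((Cs - Ct) ** 2)) / (4.0 * d * d)

def triplet(anchor, positive, negative, margin=1.0):
    """Triplet loss; here negatives would come from the auxiliary
    (paper) domain while anchors/positives come from tablet data."""
    d_ap = np.linalg.norm(anchor - positive, axis=1)
    d_an = np.linalg.norm(anchor - negative, axis=1)
    return float(np.mean(np.maximum(d_ap - d_an + margin, 0.0)))

# Stand-in feature batches for the two domains.
rng = np.random.default_rng(0)
tablet = rng.normal(size=(32, 16))
paper = rng.normal(size=(32, 16))
print(mmd_linear(tablet, paper), coral(tablet, paper))
```

In the paper's setting these terms would be added to the recognition loss so that tablet and paper features acquire similar means and covariances.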
Related papers
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA: a new data-driven domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- Cross-Domain Policy Adaptation by Capturing Representation Mismatch [53.087413751430255]
It is vital to learn effective policies that can be transferred to different domains with dynamics discrepancies in reinforcement learning (RL).
In this paper, we consider dynamics adaptation settings where there exists dynamics mismatch between the source domain and the target domain.
We perform representation learning only in the target domain and measure the representation deviations on the transitions from the source domain.
arXiv Detail & Related papers (2024-05-24T09:06:12Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift [3.071136270246468]
This paper proposes a novel supervised domain adaptation based on two steps.
First, we search for an optimal class-dependent transformation from the source to the target domain from a few samples.
Second, we use embedding similarity techniques to select the corresponding transformation at inference.
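The two-step idea above can be illustrated with a toy sketch: learn one transformation per class from a few source/target samples (here a simple mean shift), then at inference pick the transformation whose class prototype is most similar to the input embedding. All names and the mean-shift transformation are hypothetical simplifications, not that paper's actual method.

```python
# Hypothetical illustration of the two-step scheme: per-class source->target
# transformations (step 1) selected by embedding similarity (step 2).
import numpy as np

def fit_class_shift(src, tgt):
    """Step 1: a minimal class-dependent transformation, modeled here
    as the shift between source and target feature means."""
    return tgt.mean(axis=0) - src.mean(axis=0)

def select_shift(x, prototypes, shifts):
    """Step 2: choose the transformation belonging to the class
    prototype with the highest cosine similarity to embedding x."""
    sims = prototypes @ x / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(x) + 1e-9)
    return shifts[int(np.argmax(sims))]
```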
arXiv Detail & Related papers (2022-04-07T10:27:14Z)
- Con$^{2}$DA: Simplifying Semi-supervised Domain Adaptation by Learning Consistent and Contrastive Feature Representations [1.2891210250935146]
Con$^{2}$DA is a framework that extends recent advances in semi-supervised learning to the semi-supervised domain adaptation problem.
Our framework generates pairs of associated samples by performing data transformations to a given input.
We use different loss functions to enforce consistency between the feature representations of associated data pairs of samples.
arXiv Detail & Related papers (2022-04-04T15:05:45Z)
- DANNTe: a case study of a turbo-machinery sensor virtualization under domain shift [0.0]
We propose an adversarial learning method to tackle a Domain Adaptation (DA) time series regression task (DANNTe).
The regression aims at building a virtual copy of a sensor installed on a gas turbine, to be used in place of the physical sensor which can be missing in certain situations.
We report a significant improvement in regression performance, compared to the baseline model trained on the source domain only.
arXiv Detail & Related papers (2022-01-11T09:24:33Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much a few labeled target samples can help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Supervised Domain Adaptation: A Graph Embedding Perspective and a Rectified Experimental Protocol [87.76993857713217]
We show that Domain Adaptation methods using pair-wise relationships between source and target domain data can be formulated as a Graph Embedding.
Specifically, we analyse the loss functions of three existing state-of-the-art Supervised Domain Adaptation methods and demonstrate that they perform Graph Embedding.
arXiv Detail & Related papers (2020-04-23T15:46:20Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.