Generalizing through Forgetting -- Domain Generalization for Symptom
Event Extraction in Clinical Notes
- URL: http://arxiv.org/abs/2209.09485v1
- Date: Tue, 20 Sep 2022 05:53:22 GMT
- Title: Generalizing through Forgetting -- Domain Generalization for Symptom
Event Extraction in Clinical Notes
- Authors: Sitong Zhou, Kevin Lybarger, Meliha Yetisgen, Mari Ostendorf
- Abstract summary: We present domain generalization for symptom extraction using pretraining and fine-tuning data.
We propose a domain generalization method that dynamically masks frequent symptom words in the source domain.
Our experiments indicate that masking and adaptive pretraining methods can significantly improve performance when the source domain is more distant from the target domain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symptom information is primarily documented in free-text clinical notes and
is not directly accessible for downstream applications. To address this
challenge, information extraction approaches that can handle clinical language
variation across different institutions and specialties are needed. In this
paper, we present domain generalization for symptom extraction using
pretraining and fine-tuning data that differs from the target domain in terms
of institution and/or specialty and patient population. We extract symptom
events using a transformer-based joint entity and relation extraction method.
To reduce reliance on domain-specific features, we propose a domain
generalization method that dynamically masks frequent symptom words in the
source domain. Additionally, we pretrain the transformer language model (LM) on
task-related unlabeled texts for better representation. Our experiments
indicate that masking and adaptive pretraining methods can significantly
improve performance when the source domain is more distant from the target
domain.
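The abstract does not include an implementation, but the dynamic masking idea is easy to sketch. Below is a minimal Python illustration of what masking frequent source-domain symptom words might look like; the top-k vocabulary cutoff, the mask probability, and the helper names are assumptions for illustration, not the authors' exact settings.
```python
import random
from collections import Counter

MASK_TOKEN = "[MASK]"  # assumes a BERT-style wordpiece vocabulary

def build_frequent_symptom_vocab(annotated_docs, top_k=50):
    """Count words inside gold symptom spans in the source domain and
    keep the top_k most frequent (hypothetical cutoff; the paper's
    exact frequency criterion may differ)."""
    counts = Counter()
    for tokens, symptom_spans in annotated_docs:
        for start, end in symptom_spans:
            counts.update(w.lower() for w in tokens[start:end])
    return {word for word, _ in counts.most_common(top_k)}

def dynamically_mask(tokens, frequent_symptoms, p_mask=0.5, rng=random):
    """Replace frequent symptom words with [MASK] with probability
    p_mask, re-sampled on every call, so the masking varies across
    epochs and the model cannot memorize a domain-specific lexicon."""
    return [MASK_TOKEN if t.lower() in frequent_symptoms and rng.random() < p_mask
            else t
            for t in tokens]

# Toy usage: one source-domain sentence with a gold symptom span (2, 5).
docs = [(["Patient", "reports", "severe", "chest", "pain", "."], [(2, 5)])]
vocab = build_frequent_symptom_vocab(docs, top_k=10)
print(dynamically_mask(docs[0][0], vocab, p_mask=1.0))
# ['Patient', 'reports', '[MASK]', '[MASK]', '[MASK]', '.']
```
Because the masking is re-sampled at each pass ("dynamic"), the model sees varying lexical surface forms of the same training example and is pushed to rely on context rather than a memorized symptom lexicon. The adaptive pretraining step in the paper is the complementary piece: continued language-model pretraining on task-related unlabeled text before fine-tuning.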
Related papers
- Source-Free Domain Adaptation for Medical Image Segmentation via
Prototype-Anchored Feature Alignment and Contrastive Learning [57.43322536718131]
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce a bi-directional transport to align the target features with the class prototypes by minimizing the expected transport cost.
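As a rough illustration of the prototype-anchored idea (a sketch, not the paper's exact bi-directional transport objective), the frozen source classifier's weight rows can serve as class prototypes, and target features can be softly assigned to them and pulled toward their expected prototype:
```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(feats, classifier_weight, temperature=0.1):
    """feats: (N, D) target-domain features; classifier_weight: (C, D)
    rows of the frozen source classifier, reused as class prototypes.
    Softly assign each feature to the prototypes and penalize the
    expected cosine distance under that assignment (a simplified
    stand-in for the paper's bi-directional transport objective)."""
    protos = F.normalize(classifier_weight, dim=1)  # (C, D) unit prototypes
    z = F.normalize(feats, dim=1)                   # (N, D) unit features
    cos = z @ protos.t()                            # (N, C) cosine similarity
    assign = (cos / temperature).softmax(dim=1)     # soft pseudo-assignments
    return (assign * (1.0 - cos)).sum(dim=1).mean() # expected transport cost

# Toy usage: 8 random features, a 3-class classifier over 16-dim features.
loss = prototype_alignment_loss(torch.randn(8, 16), torch.randn(3, 16))
print(loss.item())
```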
arXiv Detail & Related papers (2023-07-19T06:07:12Z)
- SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for
Classification in Low-Resource Domains [14.096170976149521]
SwitchPrompt is a novel and lightweight prompting methodology for adapting language models trained on general-domain datasets to diverse low-resource domains.
Our few-shot experiments on three text classification benchmarks demonstrate the efficacy of general-domain pre-trained language models when used with SwitchPrompt.
They often even outperform their domain-specific counterparts trained with baseline state-of-the-art prompting methods, with accuracy gains of up to 10.7%.
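A sketch of the gated soft-prompt idea (module and parameter names are assumptions; SwitchPrompt's exact gating may differ): a learnable per-position gate interpolates between a general-domain and a domain-specific soft prompt, which is then prepended to the token embeddings of a frozen LM.
```python
import torch
import torch.nn as nn

class GatedSoftPrompt(nn.Module):
    """Interpolates a general-domain and a domain-specific soft prompt
    with a learnable per-position gate (a simplified reading of the
    gated soft-prompt idea; the published architecture may differ)."""
    def __init__(self, prompt_len=10, hidden=768):
        super().__init__()
        self.general = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        self.domain = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        self.gate = nn.Parameter(torch.zeros(prompt_len, 1))  # one gate per position

    def forward(self, input_embeds):
        # input_embeds: (B, T, H) token embeddings from a frozen LM
        g = torch.sigmoid(self.gate)                       # (L, 1), in (0, 1)
        prompt = g * self.domain + (1 - g) * self.general  # (L, H) mixed prompt
        prompt = prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)    # prepend to the input

# Toy usage: batch of 2, sequence length 5, hidden size 768.
out = GatedSoftPrompt()(torch.randn(2, 5, 768))
print(out.shape)  # torch.Size([2, 15, 768])
```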
arXiv Detail & Related papers (2023-02-14T07:14:08Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time
Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to this problem in medical image segmentation, which extracts and integrates semantic shape priors for segmentation that are invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- DILBERT: Customized Pre-Training for Domain Adaptation with Category
Shift, with an Application to Aspect Extraction [25.075552473110676]
A generic pre-training procedure can naturally be sub-optimal in some cases.
This paper presents a new fine-tuning scheme for BERT, which aims to address the above challenges.
We name this scheme DILBERT: Domain Invariant Learning with BERT, and customize it for aspect extraction in the unsupervised domain adaptation setting.
arXiv Detail & Related papers (2021-09-01T18:49:44Z)
- Self-Rule to Adapt: Generalized Multi-source Feature Learning Using
Unsupervised Domain Adaptation for Colorectal Cancer Tissue Detection [9.074125289002911]
Supervised learning is constrained by the availability of labeled data.
We propose SRA, which takes advantage of self-supervised learning to perform domain adaptation.
arXiv Detail & Related papers (2021-08-20T13:52:33Z)
- Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers source-specific features.
We create counterfactual features that distinguish the domain-specific parts from the domain-sharable ones.
arXiv Detail & Related papers (2020-11-07T09:53:13Z)
- Domain Adversarial Fine-Tuning as an Effective Regularizer [80.14528207465412]
In Natural Language Processing (NLP), pretrained language models (LMs) that are transferred to downstream tasks have been recently shown to achieve state-of-the-art results.
Standard fine-tuning can degrade the general-domain representations captured during pretraining.
We introduce a new regularization technique, AFTER (domain Adversarial Fine-Tuning as an Effective Regularizer).
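The adversarial regularizer can be illustrated with the standard gradient-reversal setup (a generic DANN-style sketch under the assumption of a binary domain label; AFTER's exact objective may differ): a domain discriminator is trained on pooled LM features while the reversed gradient pushes the encoder toward domain-invariant representations.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lam in
    the backward pass (the standard gradient-reversal trick)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def adversarial_regularizer(features, domain_labels, discriminator, lam=0.1):
    """features: (B, H) pooled LM representations; domain_labels: (B,)
    with 0 = general domain, 1 = task domain (assumed labeling).
    The discriminator learns to tell domains apart, while the reversed
    gradient drives the encoder toward domain-invariant features."""
    logits = discriminator(GradReverse.apply(features, lam))
    return F.cross_entropy(logits, domain_labels)

# Toy usage: 4 examples, 768-dim features, a linear 2-way discriminator.
disc = nn.Linear(768, 2)
loss = adversarial_regularizer(torch.randn(4, 768, requires_grad=True),
                               torch.tensor([0, 1, 0, 1]), disc)
loss.backward()
print(loss.item())
```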
arXiv Detail & Related papers (2020-09-28T14:35:06Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.