How to Encode Domain Information in Relation Classification
- URL: http://arxiv.org/abs/2404.13760v1
- Date: Sun, 21 Apr 2024 20:16:35 GMT
- Title: How to Encode Domain Information in Relation Classification
- Authors: Elisa Bassignana, Viggo Unmack Gascou, Frida Nøhr Laustsen, Gustav Kristensen, Marie Haahr Petersen, Rob van der Goot, Barbara Plank
- Abstract summary: Current language models require a lot of training data to obtain high performance.
For Relation Classification (RC), many datasets are domain-specific.
We explore a multi-domain training setup for RC, and attempt to improve performance by encoding domain information.
- Score: 28.006694890849374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current language models require a lot of training data to obtain high performance. For Relation Classification (RC), many datasets are domain-specific, so combining datasets to obtain better performance is non-trivial. We explore a multi-domain training setup for RC and attempt to improve performance by encoding domain information. Our proposed models improve by more than 2 Macro-F1 points over the baseline setup, and our analysis reveals that not all labels benefit equally: classes which occupy a similar space across domains (i.e., whose interpretation is close across domains, for example "physical") benefit the least, while domain-dependent relations (e.g., "part-of") improve the most when domain information is encoded.
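The abstract does not specify how the domain information is injected, so the sketch below is only a minimal illustration of one common option: prepending a special domain token to the input of a transformer-based relation classifier. The model checkpoint, domain inventory, entity markers, and classification head are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch: encode the domain by prefixing a special domain token.
# All names below (checkpoint, domains, markers) are illustrative assumptions.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

DOMAINS = ["news", "literature", "music"]                      # assumed inventory
SPECIAL = [f"[DOM-{d}]" for d in DOMAINS] + ["[E1]", "[/E1]", "[E2]", "[/E2]"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenizer.add_special_tokens({"additional_special_tokens": SPECIAL})

encoder = AutoModel.from_pretrained("bert-base-cased")
encoder.resize_token_embeddings(len(tokenizer))                # account for new tokens

class DomainAwareRC(nn.Module):
    """Shared encoder; domain information enters only through the input tokens."""
    def __init__(self, encoder, num_relations):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(encoder.config.hidden_size, num_relations)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.classifier(out.last_hidden_state[:, 0])    # [CLS] representation

def encode(sentence, domain):
    # The domain token is simply prefixed to the entity-marked sentence.
    return tokenizer(f"[DOM-{domain}] {sentence}", return_tensors="pt")

model = DomainAwareRC(encoder, num_relations=10)
batch = encode("The [E1] wheel [/E1] is part of the [E2] car [/E2].", "news")
logits = model(batch["input_ids"], batch["attention_mask"])
```

Other plausible variants (e.g., adding a learned domain embedding to the sentence representation) follow the same idea: the classifier stays shared while the input carries an explicit domain signal.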
Related papers
- Task Oriented In-Domain Data Augmentation [38.525017729123114]
Large Language Models (LLMs) have shown superior performance in various applications and fields.
To achieve better performance on specialized domains such as law and advertisement, LLMs are often continually pre-trained on in-domain data.
We propose TRAIT, a task-oriented in-domain data augmentation framework.
arXiv Detail & Related papers (2024-06-24T14:58:11Z)
- Using Language to Extend to Unseen Domains [81.37175826824625]
It is expensive to collect training data for every possible domain that a vision model may encounter when deployed.
We consider how simply verbalizing the training domain, as well as domains we want to extend to but do not have data for, can improve robustness.
Using a multimodal model with a joint image and language embedding space, our method LADS learns a transformation of the image embeddings from the training domain to each unseen test domain.
arXiv Detail & Related papers (2022-10-18T01:14:02Z)
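As a rough illustration of the language-guided idea in the LADS entry above, the sketch below shifts a CLIP-style image embedding along the direction between a source-domain and a target-domain text description. This is a simplification under stated assumptions: LADS itself learns a transformation network with additional class-consistency constraints, and the checkpoint, prompts, and random stand-in image here are illustrative only.

```python
# Simplified sketch of a text-guided domain shift in a joint image-text space.
# LADS learns a transformation network; here we only move embeddings along a
# direction defined by two (made-up) domain descriptions.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo", "a pencil sketch"]                 # source vs. unseen domain
text_in = processor(text=prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    text_feats = model.get_text_features(**text_in)
text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
domain_shift = text_feats[1] - text_feats[0]             # source -> target direction

pixel_values = torch.randn(1, 3, 224, 224)               # stand-in for a real image
with torch.no_grad():
    img_feat = model.get_image_features(pixel_values=pixel_values)
img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

extended = img_feat + domain_shift                       # shift toward the unseen domain
extended = extended / extended.norm(dim=-1, keepdim=True)
```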
- M2D2: A Massively Multi-domain Language Modeling Dataset [76.13062203588089]
We present M2D2, a fine-grained, massively multi-domain corpus for studying domain adaptation of language models (LMs).
Using categories derived from Wikipedia and ArXiv, we organize the domains in each data source into 22 groups.
We show the benefits of adapting the LM along a domain hierarchy; adapting to smaller amounts of fine-grained domain-specific data can lead to larger in-domain performance gains.
arXiv Detail & Related papers (2022-10-13T21:34:52Z)
- Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
arXiv Detail & Related papers (2021-12-16T11:09:29Z)
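The adapter approach mentioned in the entry above can be pictured with a plain residual bottleneck adapter, one per domain, so that only a small number of parameters is trained per domain while the base model stays frozen. The hidden sizes and domain names below are made up, and the paper additionally organizes such adapters along a domain hierarchy.

```python
# Hedged sketch of a per-domain bottleneck adapter (illustrative sizes/names).
import torch
from torch import nn

class Adapter(nn.Module):
    """Small residual bottleneck inserted after a transformer sub-layer."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# One lightweight adapter per domain; the frozen base model is shared.
domains = ["news", "reviews", "papers"]
adapters = nn.ModuleDict({d: Adapter() for d in domains})

hidden = torch.randn(2, 16, 768)          # (batch, sequence length, hidden size)
adapted = adapters["news"](hidden)
```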
- Data Augmentation for Cross-Domain Named Entity Recognition [22.66649873447105]
We study cross-domain data augmentation for the named entity recognition task.
We propose a novel neural architecture to transform the data representation from a high-resource to a low-resource domain.
We show that transforming the data to the low-resource domain representation achieves significant improvements over only using data from high-resource domains.
arXiv Detail & Related papers (2021-09-04T00:50:55Z)
- Semantic Segmentation on Multiple Visual Domains [0.0]
Training models on multiple existing domains is desired to increase the output label-space.
In this paper, such a method is proposed for the Cityscapes, SUIM, and SUN RGB-D datasets by creating a label space that spans all classes of the three datasets.
Results show that the multi-domain model achieves higher accuracy than all baseline models together when hardware performance is equalized.
arXiv Detail & Related papers (2021-07-09T09:34:51Z)
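As a toy illustration of the unified label space described in the entry above, the sketch below merges simplified class lists into one shared label space and builds per-dataset remapping tables. The class names are placeholders, not the real Cityscapes, SUIM, or SUN RGB-D label sets.

```python
# Toy sketch: build one label space spanning several datasets and remap
# each dataset's local class ids into it (placeholder class names).
CITYSCAPES = ["road", "car", "person"]
SUIM = ["fish", "diver", "reef"]
SUN_RGBD = ["wall", "chair", "person"]

# Unified label space: the union of all class names, one shared id per name.
unified = sorted(set(CITYSCAPES) | set(SUIM) | set(SUN_RGBD))
to_unified = {name: i for i, name in enumerate(unified)}

def remap_table(local_classes):
    """Map a dataset's local class ids to the shared ids."""
    return {i: to_unified[name] for i, name in enumerate(local_classes)}

cityscapes_map = remap_table(CITYSCAPES)   # local "person" id maps to the shared id
sun_rgbd_map = remap_table(SUN_RGBD)       # "person" gets the same shared id here
print(unified, cityscapes_map, sun_rgbd_map)
```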
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
arXiv Detail & Related papers (2020-04-30T15:15:40Z)
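As a hedged illustration of the domain- and task-aware parameterization in the entry above, the sketch below pairs a shared encoder with separate per-domain heads for an utterance-level intent task and a token-level slot task. The exact factorization of parameters in the paper may differ; all names and sizes here are illustrative.

```python
# Sketch: shared encoder parameters plus domain-specific, task-specific heads.
import torch
from torch import nn

class MultiDomainSLU(nn.Module):
    def __init__(self, vocab_size, hidden, domains, num_intents, num_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)   # shared across domains
        # One head per domain and per task (intent classification, slot filling).
        self.intent_heads = nn.ModuleDict({d: nn.Linear(hidden, num_intents) for d in domains})
        self.slot_heads = nn.ModuleDict({d: nn.Linear(hidden, num_slots) for d in domains})

    def forward(self, token_ids, domain):
        states, _ = self.encoder(self.embed(token_ids))
        intent_logits = self.intent_heads[domain](states[:, -1])   # utterance-level task
        slot_logits = self.slot_heads[domain](states)              # token-level task
        return intent_logits, slot_logits

model = MultiDomainSLU(vocab_size=1000, hidden=128,
                       domains=["weather", "music"], num_intents=5, num_slots=10)
intents, slots = model(torch.randint(0, 1000, (2, 12)), domain="weather")
```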
This list is automatically generated from the titles and abstracts of the papers on this site.