Self-training through Classifier Disagreement for Cross-Domain Opinion
Target Extraction
- URL: http://arxiv.org/abs/2302.14719v1
- Date: Tue, 28 Feb 2023 16:31:17 GMT
- Title: Self-training through Classifier Disagreement for Cross-Domain Opinion
Target Extraction
- Authors: Kai Sun, Richong Zhang, Samuel Mensah, Nikolaos Aletras, Yongyi Mao,
Xudong Liu
- Abstract summary: Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental task in opinion mining.
Recent work focuses on cross-domain OTE, which is typically encountered in real-world scenarios.
We propose a new SSL approach that selects unlabelled target samples on which the outputs of a domain-specific teacher and student network disagree.
- Score: 62.41511766918932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental
task in opinion mining that aims to extract the targets (or aspects) on which
opinions have been expressed. Recent work focuses on cross-domain OTE, which is
typically encountered in real-world scenarios, where the testing and training
distributions differ. Most methods use domain adversarial neural networks that
aim to reduce the domain gap between the labelled source and unlabelled target
domains to improve target domain performance. However, this approach only
aligns feature distributions and does not account for class-wise feature
alignment, leading to suboptimal results. Semi-supervised learning (SSL) has
been explored as a solution, but is limited by the quality of pseudo-labels
generated by the model. Inspired by the theoretical foundations in domain
adaptation [2], we propose a new SSL approach that selects unlabelled target
samples on which the outputs of a domain-specific teacher and student network
disagree, in an effort to boost the target-domain
performance. Extensive experiments on benchmark cross-domain OTE datasets show
that this approach is effective and performs consistently well in settings with
large domain shifts.
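The selection rule described in the abstract, keeping only the unlabelled target samples on which the teacher and student predictions disagree, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names and toy probabilities are invented for the example.

```python
# Illustrative sketch (not the authors' code): select unlabelled target
# samples on which a domain-specific teacher and a student network disagree.

def argmax(probs):
    """Index of the largest probability in a list."""
    return max(range(len(probs)), key=probs.__getitem__)

def select_disagreement_samples(teacher_probs, student_probs):
    """Return indices of samples whose teacher- and student-predicted
    classes differ; these are the candidates used for self-training."""
    return [i for i, (t, s) in enumerate(zip(teacher_probs, student_probs))
            if argmax(t) != argmax(s)]

# Toy example: 4 unlabelled target samples, 3 classes.
teacher = [[0.9, 0.05, 0.05],
           [0.1, 0.80, 0.10],
           [0.2, 0.30, 0.50],
           [0.6, 0.20, 0.20]]
student = [[0.8, 0.10, 0.10],   # agrees: class 0
           [0.7, 0.20, 0.10],   # disagrees: class 0 vs teacher's class 1
           [0.1, 0.10, 0.80],   # agrees: class 2
           [0.2, 0.70, 0.10]]   # disagrees: class 1 vs teacher's class 0

print(select_disagreement_samples(teacher, student))  # -> [1, 3]
```

In the paper's setting the selected samples would then be fed back into training; how the pseudo-labels for those samples are derived is specific to the method and not shown here.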
Related papers
- MADAv2: Advanced Multi-Anchor Based Active Domain Adaptation
Segmentation [98.09845149258972]
We introduce active sample selection to assist domain adaptation regarding the semantic segmentation task.
With only a little workload to manually annotate these samples, the distortion of the target-domain distribution can be effectively alleviated.
A powerful semi-supervised domain adaptation strategy is proposed to alleviate the long-tail distribution problem.
arXiv Detail & Related papers (2023-01-18T07:55:22Z) - Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for the face recognition task, in which the source and target domains do not share any classes.
Our method effectively learns discriminative target features by aligning the feature domain globally while, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z) - Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z) - Physically-Constrained Transfer Learning through Shared Abundance Space
for Hyperspectral Image Classification [14.840925517957258]
We propose a new transfer learning scheme to bridge the gap between the source and target domains.
The proposed method is referred to as physically-constrained transfer learning through shared abundance space.
arXiv Detail & Related papers (2020-08-19T17:41:37Z) - Domain Adaptation by Class Centroid Matching and Local Manifold
Self-Learning [8.316259570013813]
We propose a novel domain adaptation approach, which can thoroughly explore the data distribution structure of target domain.
We regard the samples within the same cluster in the target domain as a whole rather than as individuals, and assign pseudo-labels to each target cluster by class centroid matching.
An efficient iterative optimization algorithm is designed to solve the objective function of our proposal with theoretical convergence guarantee.
arXiv Detail & Related papers (2020-03-20T16:59:27Z) - Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) targets at adapting a model trained over the well-labeled source domain to the unlabeled target domain lying in different distributions.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.