Hyperbolic Disentangled Representation for Fine-Grained Aspect
Extraction
- URL: http://arxiv.org/abs/2112.09215v1
- Date: Thu, 16 Dec 2021 21:47:28 GMT
- Title: Hyperbolic Disentangled Representation for Fine-Grained Aspect
Extraction
- Authors: Chang-You Tai, Ming-Yao Li, Lun-Wei Ku
- Abstract summary: HDAE is a hyperbolic disentangled aspect extractor for user reviews.
It achieves average F1 performance gains of 18.2% and 24.1% on Amazon product review and restaurant review datasets.
- Score: 5.545062009366532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic identification of salient aspects from user reviews is especially
useful for opinion analysis. There has been significant progress in utilizing
weakly supervised approaches, which require only a small set of seed words for
training aspect classifiers. However, two issues remain.
First, no weakly supervised approaches fully utilize latent hierarchies between
words. Second, each seed word's representation should have different latent
semantics and be distinct when it represents a different aspect. In this paper,
we propose HDAE, a hyperbolic disentangled aspect extractor in which a
hyperbolic aspect classifier captures words' latent hierarchies, and an
aspect-disentangled representation models the distinct latent semantics of each
seed word. Compared to previous baselines, HDAE achieves average F1 performance
gains of 18.2% and 24.1% on Amazon product review and restaurant review
datasets, respectively. In addition, embedding visualization experiments
demonstrate that HDAE is a more effective approach to leveraging seed words.
An ablation study and a case study further attest to the effectiveness of the
proposed components.
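To make the idea concrete, the sketch below illustrates, under our own assumptions rather than the authors' released code, how a hyperbolic aspect classifier can score a word against aspect seed words: embeddings live inside the Poincaré ball, and a word is assigned to the aspect whose seed embeddings are closest under the hyperbolic geodesic distance. The seed words, embedding dimensionality, and helper names (`poincare_distance`, `aspect_scores`) are illustrative placeholders.

```python
# Minimal, hypothetical sketch of a hyperbolic (Poincare-ball) aspect classifier.
# Not the authors' implementation; all names, seeds, and dimensions are assumptions.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray, eps: float = 1e-9) -> float:
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq_diff / max(denom, eps)))

def aspect_scores(word_vec: np.ndarray, seed_vecs_per_aspect: dict) -> dict:
    """Score each aspect by the negated mean hyperbolic distance to its seed words."""
    raw = {aspect: -np.mean([poincare_distance(word_vec, s) for s in seed_vecs])
           for aspect, seed_vecs in seed_vecs_per_aspect.items()}
    # Softmax over aspects so the scores form a distribution.
    exp = {a: np.exp(r) for a, r in raw.items()}
    z = sum(exp.values())
    return {a: e / z for a, e in exp.items()}

# Toy usage with 2-D embeddings well inside the unit ball (hypothetical values).
seeds = {
    "battery": [np.array([0.30, 0.05]), np.array([0.28, 0.10])],
    "screen":  [np.array([-0.25, 0.20]), np.array([-0.30, 0.15])],
}
word = np.array([0.27, 0.08])   # a word expected to be close to the "battery" seeds
print(aspect_scores(word, seeds))
```

In Poincaré embeddings, more general words tend to sit near the origin and more specific words near the boundary, which is the kind of latent word hierarchy a hyperbolic classifier can exploit.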
Related papers
- A Unified Label-Aware Contrastive Learning Framework for Few-Shot Named Entity Recognition [6.468625143772815]
We propose a unified label-aware token-level contrastive learning framework.
Our approach enriches the context by utilizing label semantics as suffix prompts.
It simultaneously optimizes context-native and context-label contrastive learning objectives.
arXiv Detail & Related papers (2024-04-26T06:19:21Z)
- LEAF: Unveiling Two Sides of the Same Coin in Semi-supervised Facial Expression Recognition [56.22672276092373]
Semi-supervised learning has emerged as a promising approach to tackle the challenge of label scarcity in facial expression recognition.
We propose a unified framework termed hierarchicaL dEcoupling And Fusing (LEAF) to coordinate expression-relevant representations and pseudo-labels.
We show that LEAF outperforms state-of-the-art semi-supervised FER methods, effectively leveraging both labeled and unlabeled data.
arXiv Detail & Related papers (2024-04-23T13:43:33Z)
- A Self-enhancement Multitask Framework for Unsupervised Aspect Category Detection [0.24578723416255754]
This work addresses the problem of unsupervised Aspect Category Detection using a small set of seed words.
We propose a framework that automatically enhances the quality of initial seed words and selects high-quality sentences for training.
In addition, we jointly train Aspect Category Detection with Aspect Term Extraction and Aspect Term Polarity to further enhance performance.
arXiv Detail & Related papers (2023-11-16T09:35:24Z)
- Reflection Invariance Learning for Few-shot Semantic Segmentation [53.20466630330429]
Few-shot semantic segmentation (FSS) aims to segment objects of unseen classes in query images with only a few annotated support images.
This paper proposes a fresh few-shot segmentation framework to mine the reflection invariance in a multi-view matching manner.
Experiments on both PASCAL-$5^i$ and COCO-$20^i$ datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-01T15:14:58Z)
- BERM: Training the Balanced and Extractable Representation for Matching to Improve Generalization Ability of Dense Retrieval [54.66399120084227]
We propose a novel method, called BERM, to improve the generalization of dense retrieval by capturing the matching signal.
Dense retrieval has shown promise in the first-stage retrieval process when trained on in-domain labeled datasets.
arXiv Detail & Related papers (2023-05-18T15:43:09Z)
- Semantic Prompt for Few-Shot Image Recognition [76.68959583129335]
We propose a novel Semantic Prompt (SP) approach for few-shot learning.
The proposed approach achieves promising results, improving the 1-shot learning accuracy by 3.67% on average.
arXiv Detail & Related papers (2023-03-24T16:32:19Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Self-Supervised Detection of Contextual Synonyms in a Multi-Class Setting: Phenotype Annotation Use Case [11.912581294872767]
Contextualised word embeddings are a powerful tool for detecting contextual synonyms.
We propose a self-supervised pre-training approach that detects contextual synonyms of concepts by training on data created by shallow matching.
arXiv Detail & Related papers (2021-09-04T21:35:01Z)
- Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z)
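As a rough illustration of the last entry's idea, and a hypothetical sketch rather than that paper's implementation, each <sentiment, aspect> pair can be represented by a topic vector built from seed-word embeddings, and a word is then scored against every pair by cosine similarity in the same embedding space. The seed words, toy embeddings, and variable names below are all assumptions.

```python
# Hypothetical sketch of <sentiment, aspect> joint topic embeddings in a word
# embedding space. Seeds, vectors, and dimensions are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
# Toy word-embedding table (in practice, pretrained embeddings would be used).
vocab = ["battery", "lasts", "dies", "screen", "bright", "dim"]
emb = {w: rng.normal(size=dim) for w in vocab}

# Hypothetical seed words for each <aspect, sentiment> topic.
seeds = {
    ("battery", "positive"): ["battery", "lasts"],
    ("battery", "negative"): ["battery", "dies"],
    ("screen", "positive"): ["screen", "bright"],
    ("screen", "negative"): ["screen", "dim"],
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Joint topic embedding = mean of its seed-word vectors.
topics = {pair: np.mean([emb[w] for w in words], axis=0) for pair, words in seeds.items()}

# Score a word against every <aspect, sentiment> topic and pick the closest one.
word = "lasts"
scores = {pair: cosine(emb[word], vec) for pair, vec in topics.items()}
print(max(scores, key=scores.get))
```

In a full weakly-supervised pipeline, such word-level scores would then be fed to a neural model to generalize the discriminative information beyond the seed words, as the summary above indicates.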