A Simple and Effective Self-Supervised Contrastive Learning Framework
for Aspect Detection
- URL: http://arxiv.org/abs/2009.09107v2
- Date: Thu, 31 Dec 2020 04:57:28 GMT
- Title: A Simple and Effective Self-Supervised Contrastive Learning Framework
for Aspect Detection
- Authors: Tian Shi and Liuqing Li and Ping Wang and Chandan K. Reddy
- Abstract summary: We propose a self-supervised contrastive learning framework and an attention-based model equipped with a novel smooth self-attention (SSA) module for the UAD task.
Our methods outperform several recent unsupervised and weakly supervised approaches on publicly available benchmark user review datasets.
- Score: 15.36713547251997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised aspect detection (UAD) aims at automatically extracting
interpretable aspects and identifying aspect-specific segments (such as
sentences) from online reviews. However, recent deep learning-based topic
models, specifically aspect-based autoencoder, suffer from several problems,
such as extracting noisy aspects and poorly mapping aspects discovered by
models to the aspects of interest. To tackle these challenges, in this paper,
we first propose a self-supervised contrastive learning framework and an
attention-based model equipped with a novel smooth self-attention (SSA) module
for the UAD task in order to learn better representations for aspects and
review segments. Secondly, we introduce a high-resolution selective mapping
(HRSMap) method to efficiently assign aspects discovered by the model to
aspects of interest. We also propose using a knowledge distilling technique to
further improve the aspect detection performance. Our methods outperform
several recent unsupervised and weakly supervised approaches on publicly
available benchmark user review datasets. Aspect interpretation results show
that extracted aspects are meaningful, have good coverage, and can be easily
mapped to aspects of interest. Ablation studies and attention weight
visualization also demonstrate the effectiveness of SSA and the knowledge
distilling method.
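The abstract only names the building blocks (an attention-based segment encoder, a contrastive objective, and aspect-to-interest mapping), so the sketch below is a rough, hypothetical illustration of what such a pipeline could look like in PyTorch. The module layout, dimensions, and the plain softmax attention used here in place of the paper's smooth self-attention (SSA) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming precomputed word embeddings for each review segment.
# Plain softmax attention stands in for the paper's SSA module; the in-batch
# contrastive loss pairs each segment with its own aspect-based reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionSegmentEncoder(nn.Module):
    """Pools a segment's word embeddings into one vector and reconstructs it
    from a learned aspect embedding matrix."""
    def __init__(self, emb_dim: int, n_aspects: int):
        super().__init__()
        self.attn = nn.Linear(emb_dim, 1)                   # scores each token
        self.aspect_emb = nn.Embedding(n_aspects, emb_dim)   # aspect matrix

    def forward(self, word_emb, mask):
        # word_emb: (batch, seq_len, emb_dim); mask: (batch, seq_len), 1 = real token
        scores = self.attn(word_emb).squeeze(-1)              # (batch, seq_len)
        scores = scores.masked_fill(mask == 0, -1e9)
        alpha = torch.softmax(scores, dim=-1)                 # attention weights
        z = torch.bmm(alpha.unsqueeze(1), word_emb).squeeze(1)        # segment vector
        p = torch.softmax(z @ self.aspect_emb.weight.T, dim=-1)       # aspect distribution
        r = p @ self.aspect_emb.weight                        # aspect-based reconstruction
        return z, r, p

def contrastive_loss(z, r, temperature=0.1):
    """In-batch contrastive objective: each segment should match its own
    reconstruction and repel the reconstructions of the other segments."""
    z = F.normalize(z, dim=-1)
    r = F.normalize(r, dim=-1)
    logits = z @ r.T / temperature                            # (batch, batch)
    targets = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, targets)
```

Under these assumptions, the learned aspect distribution p would then be mapped to the aspects of interest (the role HRSMap plays in the paper), and a teacher model's predictions could supervise a student via knowledge distillation.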
Related papers
- Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future [119.88454942558485]
Underwater object detection (UOD) aims to identify and localise objects in underwater images or videos.
In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD.
arXiv Detail & Related papers (2024-10-08T00:25:33Z) - Cross-Target Stance Detection: A Survey of Techniques, Datasets, and Challenges [7.242609314791262]
Cross-target stance detection is the task of determining the viewpoint expressed in a text towards a given target.
With the increasing need to analyze and mine viewpoints and opinions online, the task has recently seen a significant surge in interest.
This review paper examines the advancements in cross-target stance detection over the last decade.
arXiv Detail & Related papers (2024-09-20T15:49:14Z) - Deep Learning for Video Anomaly Detection: A Review [52.74513211976795]
Video anomaly detection (VAD) aims to discover behaviors or events deviating from normality in videos.
In the era of deep learning, a great variety of deep learning-based methods are constantly emerging for the VAD task.
This review covers the spectrum of five different categories, namely, semi-supervised, weakly supervised, fully supervised, unsupervised and open-set supervised VAD.
arXiv Detail & Related papers (2024-09-09T07:31:16Z) - Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition [1.4374467687356276]
This paper presents an innovative approach integrating the MAE-Face self-supervised learning (SSL) method and multi-view Fusion Attention mechanism for expression classification.
We suggest easy-to-implement, training-free frameworks aimed at highlighting key facial features, to determine whether such features can serve as guides for the model.
The efficacy of this method is validated by improvements in model performance on the Aff-wild2 dataset.
arXiv Detail & Related papers (2024-03-19T16:21:47Z) - Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions like clicks and reviews to learn their representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z) - A Self-Distillation Embedded Supervised Affinity Attention Model for
Few-Shot Segmentation [18.417460995287257]
We propose a self-distillation embedded supervised affinity attention model to improve the performance of the few-shot segmentation task.
Our model significantly improves the performance compared to existing methods.
On the COCO-20i dataset, we achieve new state-of-the-art results.
arXiv Detail & Related papers (2021-08-14T18:16:12Z) - Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model from manual annotations on only a few training images.
We name this task as the few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z) - SparseBERT: Rethinking the Importance Analysis in Self-attention [107.68072039537311]
Transformer-based models are popular for natural language processing (NLP) tasks due to their powerful capacity.
Attention map visualization of a pre-trained model is one direct method for understanding the self-attention mechanism.
We propose a Differentiable Attention Mask (DAM) algorithm, which can be also applied in guidance of SparseBERT design.
arXiv Detail & Related papers (2021-02-25T14:13:44Z) - Understanding Failures of Deep Networks via Robust Feature Extraction [44.204907883776045]
We introduce and study a method aimed at characterizing and explaining failures by identifying visual attributes whose presence or absence results in poor performance.
We leverage the representation of a separate robust model to extract interpretable features and then harness these features to identify failure modes.
arXiv Detail & Related papers (2020-12-03T08:33:29Z) - Can Semantic Labels Assist Self-Supervised Visual Representation
Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z) - Toward Tag-free Aspect Based Sentiment Analysis: A Multiple Attention
Network Approach [12.100371588940256]
The Multiple-Attention Network (MAN) is capable of extracting both aspect-level and overall sentiments from text reviews.
We carry out extensive experiments to demonstrate the strong performance of MAN compared to other state-of-the-art ABSA approaches.
arXiv Detail & Related papers (2020-03-22T20:18:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.