Improving Event Representation via Simultaneous Weakly Supervised
Contrastive Learning and Clustering
- URL: http://arxiv.org/abs/2203.07633v1
- Date: Tue, 15 Mar 2022 04:12:00 GMT
- Title: Improving Event Representation via Simultaneous Weakly Supervised
Contrastive Learning and Clustering
- Authors: Jun Gao, Wei Wang, Changlong Yu, Huan Zhao, Wilfred Ng, Ruifeng Xu
- Abstract summary: We present SWCC: a Simultaneous Weakly supervised Contrastive learning and Clustering framework for event representation learning.
For model training, SWCC learns representations by simultaneously performing weakly supervised contrastive learning and prototype-based clustering.
- Score: 31.841780703374955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representations of events described in text are important for various tasks.
In this work, we present SWCC: a Simultaneous Weakly supervised Contrastive
learning and Clustering framework for event representation learning. SWCC
learns event representations by making better use of co-occurrence information
of events. Specifically, we introduce a weakly supervised contrastive learning
method that allows us to consider multiple positives and multiple negatives,
and a prototype-based clustering method that avoids semantically related events
being pulled apart. For model training, SWCC learns representations by
simultaneously performing weakly supervised contrastive learning and
prototype-based clustering. Experimental results show that SWCC outperforms
other baselines on Hard Similarity and Transitive Sentence Similarity tasks. In
addition, a thorough analysis of the prototype-based clustering method
demonstrates that the learned prototype vectors are able to implicitly capture
various relations between events.
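The core of the weakly supervised contrastive objective above is allowing each anchor to have multiple positives and multiple negatives in one batch. A minimal sketch of such a loss follows a supervised-contrastive (SupCon-style) formulation; the tensor names, the temperature value, and the use of co-occurrence-derived pseudo-labels here are illustrative assumptions, not SWCC's exact formulation.

```python
import torch
import torch.nn.functional as F

def weakly_supervised_contrastive_loss(z, weak_labels, temperature=0.3):
    """SupCon-style loss with multiple positives per anchor.

    z: (N, d) event embeddings.
    weak_labels: (N,) pseudo-labels (e.g. derived from co-occurrence);
    events sharing a label are treated as positives for each other.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                        # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (weak_labels.unsqueeze(0) == weak_labels.unsqueeze(1)) & ~self_mask

    # exclude self-similarity, then take log-softmax over all other pairs
    sim = sim.masked_fill(self_mask, -1e9)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average the log-probability over each anchor's positives
    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(1) / pos_counts
    return loss[pos_mask.any(1)].mean()                # skip anchors with no positive
```

Because the positive set is a mask rather than a single index, adding more positives per anchor requires no change to the loss itself, only to how `weak_labels` is built.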
Related papers
- Contrastive Learning Subspace for Text Clustering [4.065026352441705]
We propose a novel text clustering approach called Subspace Contrastive Learning (SCL).
The proposed SCL consists of two main modules: (1) a self-expressive module that constructs virtual positive samples and (2) a contrastive learning module that further learns a discriminative subspace to capture task-specific cluster-wise relationships among texts.
Experimental results show that the proposed SCL method not only achieves superior results on multiple text clustering datasets but also requires less complexity in positive sample construction.
arXiv Detail & Related papers (2024-08-26T09:08:26Z)
- In-context Contrastive Learning for Event Causality Identification [26.132189768472067]
Event Causality Identification (ECI) aims to determine whether a causal relation holds between two events.
Recent prompt learning-based approaches have shown promising improvements on the ECI task.
This paper proposes an In-Context Contrastive Learning model that utilizes contrastive learning to enhance the effectiveness of both positive and negative demonstrations.
arXiv Detail & Related papers (2024-05-17T03:32:15Z)
- Consistency Enhancement-Based Deep Multiview Clustering via Contrastive Learning [16.142448870120027]
We propose a consistency enhancement-based deep multi-view clustering (MVC) method via contrastive learning (CCEC).
Specifically, semantic connection blocks are incorporated into a feature representation to preserve the consistent information among multiple views.
Experiments conducted on five datasets demonstrate the effectiveness and superiority of our method in comparison with the state-of-the-art (SOTA) methods.
arXiv Detail & Related papers (2024-01-23T10:56:01Z) - Robust Representation Learning by Clustering with Bisimulation Metrics
for Visual Reinforcement Learning with Distractions [9.088460902782547]
Clustering with Bisimulation Metrics (CBM) learns robust representations by grouping visual observations in the latent space.
CBM alternates between two steps: (1) grouping observations by measuring their bisimulation distances to the learned prototypes; (2) learning a set of prototypes according to the current cluster assignments.
Experiments demonstrate that CBM significantly improves the sample efficiency of popular visual RL algorithms.
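The two-step alternation described for CBM has the same shape as a k-means-style loop: assign observations to the nearest prototype, then refit the prototypes from the assignments. The sketch below illustrates only that alternation; plain Euclidean distance and a farthest-first initialization stand in for CBM's bisimulation metric and learned latent space, which are assumptions for illustration.

```python
import numpy as np

def alternate_clustering(obs, n_prototypes=4, n_iters=10):
    """K-means-style alternation between grouping and prototype refitting.

    obs: (N, d) array of observation features (a stand-in for CBM's
    latent representations; distances here are Euclidean, not bisimulation).
    """
    # farthest-first initialization keeps the sketch deterministic
    protos = [obs[0]]
    for _ in range(n_prototypes - 1):
        d = np.min([np.linalg.norm(obs - p, axis=1) for p in protos], axis=0)
        protos.append(obs[d.argmax()])
    protos = np.stack(protos)

    for _ in range(n_iters):
        # step 1: group observations by distance to the current prototypes
        d = np.linalg.norm(obs[:, None] - protos[None], axis=-1)  # (N, K)
        assign = d.argmin(axis=1)
        # step 2: refit each prototype from its current cluster
        for k in range(n_prototypes):
            if (assign == k).any():
                protos[k] = obs[assign == k].mean(axis=0)
    return protos, assign
```

In CBM the representation is trained jointly, so the prototypes shape the latent space rather than merely summarizing fixed features as they do here.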
arXiv Detail & Related papers (2023-02-12T13:27:34Z) - Adversarial Contrastive Learning by Permuting Cluster Assignments [0.8862707047517914]
We propose SwARo, an adversarial contrastive framework that incorporates cluster assignment permutations to generate representative adversarial samples.
We evaluate SwARo on multiple benchmark datasets and against various white-box and black-box attacks, obtaining consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-21T17:49:52Z) - Weak Augmentation Guided Relational Self-Supervised Learning [80.0680103295137]
We introduce a novel relational self-supervised learning (ReSSL) framework that learns representations by modeling the relationship between different instances.
Our proposed method employs a sharpened distribution of pairwise similarities among different instances as the relation metric.
Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures.
arXiv Detail & Related papers (2022-03-16T16:14:19Z) - ACTIVE:Augmentation-Free Graph Contrastive Learning for Partial
Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z) - Learning Constraints and Descriptive Segmentation for Subevent Detection [74.48201657623218]
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints to a regularization term in the loss function of the neural model.
arXiv Detail & Related papers (2021-09-13T20:50:37Z) - ReSSL: Relational Self-Supervised Learning with Weak Augmentation [68.47096022526927]
Self-supervised learning has achieved great success in learning visual representations without data annotations.
We introduce a novel relational SSL paradigm that learns representations by modeling the relationship between different instances.
Our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency.
arXiv Detail & Related papers (2021-07-20T06:53:07Z) - You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps.
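Reparametrizing a categorical assignment variable so that gradients flow through it end-to-end is commonly done with the Gumbel-softmax trick; the sketch below shows that general technique, and TCC's exact scheme may differ.

```python
import torch
import torch.nn.functional as F

def soft_cluster_assignment(logits, tau=0.5):
    """Differentiable sample from a categorical assignment distribution.

    logits: (N, K) unnormalized cluster-assignment scores. Adding Gumbel
    noise and applying a temperature-scaled softmax yields a soft one-hot
    assignment that gradients can pass through (PyTorch also ships this
    as torch.nn.functional.gumbel_softmax).
    """
    u = torch.rand_like(logits).clamp_(1e-9, 1 - 1e-9)
    gumbel = -torch.log(-torch.log(u))                 # Gumbel(0, 1) noise
    return F.softmax((logits + gumbel) / tau, dim=-1)  # rows sum to 1
```

Lowering `tau` pushes the samples toward hard one-hot assignments at the cost of higher-variance gradients.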
arXiv Detail & Related papers (2021-06-03T14:59:59Z) - Graph Contrastive Clustering [131.67881457114316]
We propose a novel graph contrastive learning framework, which we apply to the clustering task to obtain the Graph Contrastive Clustering (GCC) method.
Specifically, on the one hand, the graph Laplacian based contrastive loss is proposed to learn more discriminative and clustering-friendly features.
On the other hand, a novel graph-based contrastive learning strategy is proposed to learn more compact clustering assignments.
arXiv Detail & Related papers (2021-04-03T15:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.