Generative Entity-to-Entity Stance Detection with Knowledge Graph Augmentation
- URL: http://arxiv.org/abs/2211.01467v1
- Date: Wed, 2 Nov 2022 20:16:42 GMT
- Title: Generative Entity-to-Entity Stance Detection with Knowledge Graph Augmentation
- Authors: Xinliang Frederick Zhang, Nick Beauchamp, Lu Wang
- Abstract summary: Stance detection is typically framed as predicting the sentiment in a text towards a target entity.
In this paper, we emphasize the need for studying interactions among entities when inferring stances.
We first introduce a new task, entity-to-entity (E2E) stance detection, which primes models to identify entities in their canonical names and discern stances jointly.
- Score: 7.857310305816312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stance detection is typically framed as predicting the sentiment in a given
text towards a target entity. However, this setup overlooks the importance of
the source entity, i.e., who is expressing the opinion. In this paper, we
emphasize the need for studying interactions among entities when inferring
stances. We first introduce a new task, entity-to-entity (E2E) stance
detection, which primes models to identify entities in their canonical names
and discern stances jointly. To support this study, we curate a new dataset
with 10,619 annotations labeled at the sentence level from news articles of
different ideological leanings. We present a novel generative framework to
allow the generation of canonical names for entities as well as stances among
them. We further enhance the model with a graph encoder to summarize entity
activities and external knowledge surrounding the entities. Experiments show
that our model outperforms strong comparisons by large margins. Further
analyses demonstrate the usefulness of E2E stance detection for understanding
media quotation and stance landscape, as well as inferring entity ideology.
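Below is a minimal, illustrative sketch of how the generative framing described above could be set up as sequence-to-sequence fine-tuning: the model reads a sentence and generates the canonical source entity, the stance, and the canonical target entity as one linearized string. The BART backbone, the separator strings, the example sentence, and the entity names are all assumptions made for illustration, not details from the paper, and the graph-augmented encoder is omitted.

```python
# Minimal sketch (not the authors' released code) of casting entity-to-entity
# stance detection as conditional text generation with a BART-style seq2seq model.
# The checkpoint, the linearized target format, and the separator strings
# "<stance>" / "<to>" are illustrative assumptions.
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-base"  # assumed backbone; the paper's setup may differ
tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

# Hypothetical news sentence and a hypothetical linearized target that names the
# source entity, the stance label, and the target entity in canonical form.
sentence = "Senator Smith blasted the governor's new budget proposal on Tuesday."
target = "John Smith <stance> negative <to> Jane Doe"

# One seq2seq training step on this single example.
inputs = tokenizer(sentence, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()  # an optimizer step would follow in real training

# At inference time, canonical entity names and the stance are generated jointly.
generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

A faithful reproduction would additionally fuse a graph encoder's summary of entity activities and external knowledge into the encoder states before decoding, as the abstract describes.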
Related papers
- Entity Disambiguation via Fusion Entity Decoding [68.77265315142296]
We propose an encoder-decoder model to disambiguate entities with more detailed entity descriptions.
We observe +1.5% improvements in end-to-end entity linking in the GERBIL benchmark compared with EntQA.
arXiv Detail & Related papers (2024-04-02T04:27:54Z)
- A Generative Approach for Wikipedia-Scale Visual Entity Recognition [56.55633052479446]
We address the task of mapping a given query image to one of the 6 million existing entities in Wikipedia.
We introduce a novel Generative Entity Recognition framework, which learns to auto-regressively decode a semantic and discriminative "code" identifying the target entity.
arXiv Detail & Related papers (2024-03-04T13:47:30Z)
- Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences [55.185456382328674]
We investigate the applicability of transfer learning for enhancing a named entity recognition model.
Our model consists of two stages: 1) entity grouping in the source domain, which incorporates knowledge from annotated events to establish relations between entities, and 2) entity discrimination in the target domain, which relies on pseudo labeling and contrastive learning to enhance discrimination between the entities in the two domains.
arXiv Detail & Related papers (2024-01-19T03:49:28Z)
- Unified Visual Relationship Detection with Vision and Language Models [89.77838890788638]
This work focuses on training a single visual relationship detector predicting over the union of label spaces from multiple datasets.
We propose UniVRD, a novel bottom-up method for Unified Visual Relationship Detection by leveraging vision and language models.
Empirical results on both human-object interaction detection and scene-graph generation demonstrate the competitive performance of our model.
arXiv Detail & Related papers (2023-03-16T00:06:28Z)
- Exploiting Unlabeled Data with Vision and Language Models for Object Detection [64.94365501586118]
Building robust and generic object detection frameworks requires scaling to larger label spaces and bigger training datasets.
We propose a novel method that leverages the rich semantics available in recent vision and language models to localize and classify objects in unlabeled images.
We demonstrate the value of the generated pseudo labels in two specific tasks, open-vocabulary detection and semi-supervised object detection.
arXiv Detail & Related papers (2022-07-18T21:47:15Z)
- Learning Attention-based Representations from Multiple Patterns for Relation Prediction in Knowledge Graphs [2.4028383570062606]
AEMP is a novel model for learning contextualized representations by acquiring entities' context information.
AEMP either outperforms or competes with state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-07T10:53:35Z)
- Knowledge-Rich Self-Supervised Entity Linking [58.838404666183656]
Knowledge-RIch Self-Supervision (KRISSBERT) is a universal entity linker for four million UMLS entities.
Our approach subsumes zero-shot and few-shot methods, and can easily incorporate entity descriptions and gold mention labels if available.
Without using any labeled information, our method produces KRISSBERT, a universal entity linker for four million UMLS entities.
arXiv Detail & Related papers (2021-12-15T05:05:12Z)
- Unsupervised Belief Representation Learning in Polarized Networks with Information-Theoretic Variational Graph Auto-Encoders [26.640917190618612]
We develop an unsupervised algorithm for belief representation learning in polarized networks.
It learns to project both users and content items (e.g., posts that represent user views) into an appropriate disentangled latent space.
The latent representation of users and content can then be used to quantify their ideological leaning and detect/predict their stances on issues.
arXiv Detail & Related papers (2021-10-01T04:35:01Z)
- KGSynNet: A Novel Entity Synonyms Discovery Framework with Knowledge Graph [23.053995137917994]
We propose a novel entity synonyms discovery framework, named KGSynNet.
Specifically, we pre-train subword embeddings for mentions and entities using a large-scale domain-specific corpus.
We employ a specifically designed fusion gate to adaptively absorb the entities' knowledge information into their semantic features.
arXiv Detail & Related papers (2021-03-16T07:32:33Z)
- XREF: Entity Linking for Chinese News Comments with Supplementary Article Reference [19.811371589597382]
We study the problem of entity linking for Chinese news comments given mentions' spans.
We propose a novel model, XREF, that leverages attention mechanisms to pinpoint relevant context.
We develop a weakly supervised training scheme to utilize the large-scale unlabeled corpus.
arXiv Detail & Related papers (2020-06-24T19:42:54Z)