GCRE-GPT: A Generative Model for Comparative Relation Extraction
- URL: http://arxiv.org/abs/2303.08601v2
- Date: Thu, 27 Jun 2024 06:34:11 GMT
- Title: GCRE-GPT: A Generative Model for Comparative Relation Extraction
- Authors: Yequan Wang, Hengran Zhang, Aixin Sun, Xuying Meng
- Abstract summary: Given comparative text, comparative relation extraction aims to extract the two targets under comparison and the aspect on which they are compared.
Existing solutions formulate this task as sequence labeling to extract targets and aspects.
We show that comparative relations can be directly extracted with high accuracy by a generative model.
- Score: 47.69464882382656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given comparative text, comparative relation extraction aims to extract two targets (e.g., two cameras) in comparison and the aspect they are compared on (e.g., image quality). The extracted comparative relations form the basis of further opinion analysis. Existing solutions formulate this task as a sequence labeling task to extract targets and aspects. However, they cannot directly extract comparative relation(s) from text. In this paper, we show that comparative relations can be directly extracted with high accuracy by a generative model. Based on GPT-2, we propose a Generation-based Comparative Relation Extractor (GCRE-GPT). Experimental results show that GCRE-GPT achieves state-of-the-art accuracy on two datasets.
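The abstract frames extraction as generation: the model reads the comparative sentence and emits the relation directly. A minimal sketch of that framing with Hugging Face Transformers follows; the "Relation:" prompt and the "<sep>"-delimited output template are assumptions (the abstract does not give the exact linearization), and a base GPT-2 would need fine-tuning on such pairs before it emits anything useful.

```python
# Sketch of generation-based comparative relation extraction in the spirit
# of GCRE-GPT. The prompt format and "<sep>" output template are assumed,
# not taken from the paper; base GPT-2 must be fine-tuned to follow them.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "The X100 has better image quality than the Z50."
prompt = f"{text} Relation:"  # extraction framed as text continuation
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    num_beams=4,
    pad_token_id=tokenizer.eos_token_id,
)
# Keep only the generated continuation, dropping the prompt tokens.
relation = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(relation)  # after fine-tuning: e.g. "X100 <sep> Z50 <sep> image quality"
```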
Related papers
- Modeling Comparative Logical Relation with Contrastive Learning for Text Generation [43.814189025925096]
We introduce a new data-to-text (D2T) task named Comparative Logical Relation Generation (CLRG).
We propose a Comparative Logic (CoLo) based text generation method that uses contrastive learning to generate texts following specific comparative logical relations.
Our method achieves impressive performance in both automatic and human evaluations.
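The summary credits contrastive learning for enforcing the comparative relation. As a rough illustration of that ingredient, here is a generic InfoNCE-style contrastive loss in PyTorch; the temperature, cosine similarity, and the idea of contrasting a correct-relation sentence embedding against flipped-relation negatives are my assumptions, not CoLo's published objective.

```python
# Generic InfoNCE-style contrastive loss; hyperparameters and the choice of
# sentence embeddings as inputs are assumptions for illustration only.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (d,) embeddings; negatives: (n, d) embeddings."""
    pos = F.cosine_similarity(anchor, positive, dim=0) / temperature
    neg = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1) / temperature
    logits = torch.cat([pos.unsqueeze(0), neg])  # index 0 is the positive
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

anchor = torch.randn(128)        # generated comparative sentence
positive = torch.randn(128)      # sentence with the correct comparative relation
negatives = torch.randn(8, 128)  # sentences with flipped/incorrect relations
print(info_nce(anchor, positive, negatives))
```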
arXiv Detail & Related papers (2024-06-13T13:25:50Z)
- Image Similarity using An Ensemble of Context-Sensitive Models [2.9490616593440317]
We present a more intuitive approach to building and comparing image similarity models based on labelled data.
We address the challenges of sparse sampling in the image space (R, A, B) and biases in the models trained with context-based data.
Our testing results show that the constructed ensemble model performs 5% better than the best individual context-sensitive model.
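As a toy illustration of the ensembling idea on (R, A, B) triplets, the sketch below aggregates per-model votes on whether A is more similar to the reference R than B; the voting rule and the toy L2 member model are assumptions, not the paper's construction.

```python
# Vote-averaging ensemble over context-sensitive similarity models.
# The aggregation rule and toy member model are assumptions.
def ensemble_prefers_a(models, r, a, b):
    """Each model returns True if A is more similar to reference R than B."""
    votes = sum(1 if m(r, a, b) else -1 for m in models)
    return votes > 0

def l2_model(r, a, b):
    """Toy member: compare squared L2 distances over flat feature vectors."""
    d = lambda x, y: sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return d(r, a) < d(r, b)

models = [l2_model]  # a real ensemble mixes differently-trained models
print(ensemble_prefers_a(models, r=[0, 0], a=[1, 0], b=[3, 4]))  # True
```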
arXiv Detail & Related papers (2024-01-15T20:23:05Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
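One standard instance of "distance between selected samples in a representation space" is the Fréchet distance between Gaussian fits of real and generated features; whether this study favors that metric is not stated above, so the sketch below is illustrative only.

```python
# Fréchet distance between two feature sets, a common set-level distance in
# GAN evaluation (as in FID). Using it here is an assumption for illustration.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return ((mu_a - mu_b) ** 2).sum() + np.trace(cov_a + cov_b - 2 * covmean)

real = np.random.randn(500, 64)  # stand-ins for real-image embeddings
fake = np.random.randn(500, 64)  # stand-ins for generated-image embeddings
print(frechet_distance(real, fake))
```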
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Sequence Generation with Label Augmentation for Relation Extraction [17.38986046630852]
We propose Relation Extraction with Label Augmentation (RELA), a Seq2Seq model with automatic label augmentation for relation extraction.
Experimental results show RELA achieves competitive results compared with previous methods on four RE datasets.
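A minimal sketch of the label-augmentation idea: verbalize relation labels into natural-language targets so a Seq2Seq model can generate them. The label set, the verbalizations, and the BART backbone mentioned in the comment are assumptions, not RELA's exact recipe.

```python
# Turning relation labels into natural-language Seq2Seq targets.
# The verbalization table below is an illustrative assumption.
LABEL_WORDS = {
    "org:founded_by": "organization founded by person",
    "per:employee_of": "person employee of organization",
}

def to_target(label):
    """Fall back to a mechanical verbalization for unseen labels."""
    return LABEL_WORDS.get(label, label.replace(":", " ").replace("_", " "))

src = ("relation between <e1>SpaceX</e1> and <e2>Elon Musk</e2>: "
       "SpaceX was founded by Elon Musk.")
tgt = to_target("org:founded_by")
print(src, "->", tgt)  # a training pair for a Seq2Seq backbone such as BART
```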
arXiv Detail & Related papers (2022-12-29T11:28:05Z)
- Duality-Induced Regularizer for Semantic Matching Knowledge Graph Embeddings [70.390286614242]
We propose a novel regularizer, the DUality-induced RegulArizer (DURA), which effectively encourages entities with similar semantics to have similar embeddings.
Experiments demonstrate that DURA consistently and significantly improves the performance of state-of-the-art semantic matching models.
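For a semantic matching scorer of the form <h * r, t>, a DURA-style penalty can be sketched as ||h * r||^2 + ||t||^2, which together with the matching score upper-bounds the squared distance ||h * r - t||^2 of the dual distance-based model. The exact form and the regularization weight below follow my reading of the duality idea, not necessarily the paper's final recipe.

```python
# DURA-style regularizer sketch for a CP/DistMult-style scorer <h*r, t>.
# Form and weight are assumptions based on the duality idea described above.
import torch

def dura_reg(h, r, t):
    """h, r, t: (batch, d) embeddings for a batch of (head, relation, tail)."""
    return ((h * r) ** 2).sum(dim=1).mean() + (t ** 2).sum(dim=1).mean()

h, r, t = (torch.randn(32, 100) for _ in range(3))
score = ((h * r) * t).sum(dim=1)                 # semantic matching score
loss = -score.mean() + 0.05 * dura_reg(h, r, t)  # 0.05 is an assumed weight
print(loss)
```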
arXiv Detail & Related papers (2022-03-24T09:24:39Z)
- Relation Regularized Scene Graph Generation [206.76762860019065]
Scene graph generation (SGG) is built on top of detected objects to predict object pairwise visual relations.
We propose a relation regularized network (R2-Net) which can predict whether there is a relationship between two objects.
Our R2-Net can effectively refine object labels and generate scene graphs.
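The gating idea, predicting whether any relationship exists before classifying it, can be sketched as a binary head over paired object features; the feature fusion and layer sizes below are illustrative assumptions, not R2-Net's architecture.

```python
# Binary "does a relationship exist?" gate over paired object features.
# Fusion by concatenation and the MLP sizes are assumptions.
import torch
import torch.nn as nn

class RelationGate(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 1),
        )

    def forward(self, subj_feat, obj_feat):
        pair = torch.cat([subj_feat, obj_feat], dim=-1)
        return torch.sigmoid(self.mlp(pair))  # P(relationship exists)

gate = RelationGate()
p = gate(torch.randn(4, 256), torch.randn(4, 256))  # 4 candidate object pairs
print(p.squeeze(-1))
```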
arXiv Detail & Related papers (2022-02-22T11:36:49Z)
- Learning to Synthesize Data for Semantic Parsing [57.190817162674875]
We propose a generative model which models the composition of programs and maps a program to an utterance.
Due to the simplicity of the PCFG and the use of pre-trained BART, our generative model can be efficiently learned from existing data at hand.
We evaluate our method in both in-domain and out-of-domain settings of text-to-SQL parsing on the standard benchmarks of GeoQuery and Spider.
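A toy sketch of the synthesis loop: sample programs from a PCFG, then pair each with an utterance from a program-to-text model. The grammar below is a hand-written placeholder (the paper induces its PCFG from data), and the BART step is only indicated in a comment.

```python
# Sampling toy SQL-like programs from a hand-written PCFG.
# The grammar is an illustrative stand-in for one induced from data.
import random

PCFG = {  # nonterminal -> [(expansion, probability), ...]
    "Q": [(("SELECT", "COL", "WHERE"), 0.6), (("SELECT", "COL"), 0.4)],
    "COL": [(("city",), 0.5), (("state",), 0.5)],
    "WHERE": [(("population", ">", "N"), 1.0)],
    "N": [(("100000",), 1.0)],
}

def sample(symbol="Q"):
    if symbol not in PCFG:  # terminal symbol
        return [symbol]
    expansions, probs = zip(*PCFG[symbol])
    choice = random.choices(expansions, weights=probs)[0]
    return [tok for s in choice for tok in sample(s)]

program = " ".join(sample())
print(program)  # e.g. "SELECT city WHERE population > 100000"
# A fine-tuned program-to-text model (e.g. BART) would then generate the
# paired utterance, yielding synthetic (utterance, program) training data.
```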
arXiv Detail & Related papers (2021-04-12T21:24:02Z)
- Comparative Analysis of N-gram Text Representation on Igbo Text Document Similarity [0.0]
The improvement in information technology has encouraged the use of Igbo in creating online text such as resources and news articles.
This study adopted the Euclidean similarity measure to determine the similarity between Igbo text documents represented with two word-based n-gram text representation models (unigram and bigram).
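The described pipeline is concrete enough to sketch directly: build word unigram or bigram count vectors and compare documents with Euclidean distance. Whitespace tokenization and the toy strings are assumptions for illustration.

```python
# Word n-gram count vectors compared with Euclidean distance, mirroring the
# unigram/bigram setup described above. Tokenization is naive by assumption.
import math
from collections import Counter

def ngrams(text, n):
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def euclidean(doc_a, doc_b, n=1):
    va, vb = Counter(ngrams(doc_a, n)), Counter(ngrams(doc_b, n))
    keys = set(va) | set(vb)
    return math.sqrt(sum((va[k] - vb[k]) ** 2 for k in keys))

a = "nnukwu akwukwo ozi"   # toy Igbo-like strings, for illustration only
b = "nnukwu akwukwo ohuru"
print(euclidean(a, b, n=1), euclidean(a, b, n=2))
```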
arXiv Detail & Related papers (2020-04-01T12:24:47Z)
- Preference Modeling with Context-Dependent Salient Features [12.403492796441434]
We consider the problem of estimating a ranking on a set of items from noisy pairwise comparisons given item features.
Our key observation is that two items compared in isolation from other items may be compared based on only a salient subset of features.
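That observation can be sketched as a logistic pairwise-comparison model in which only the k coordinates where the two items' features differ most contribute to the outcome; k, the logistic link, and the weights below are illustrative assumptions.

```python
# Salient-feature pairwise comparison: only the k most-different feature
# coordinates drive the outcome. k and the logistic link are assumptions.
import numpy as np

def prob_i_beats_j(w, x_i, x_j, k=2):
    diff = x_i - x_j
    salient = np.argsort(-np.abs(diff))[:k]  # k most different features
    score = w[salient] @ diff[salient]
    return 1.0 / (1.0 + np.exp(-score))

w = np.array([1.0, -0.5, 2.0, 0.1])  # hypothetical learned feature weights
x_i = np.array([3.0, 1.0, 0.0, 2.0])
x_j = np.array([1.0, 1.0, 2.0, 2.0])
print(prob_i_beats_j(w, x_i, x_j))
```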
arXiv Detail & Related papers (2020-02-22T04:05:16Z)
- On the Discrepancy between Density Estimation and Sequence Generation [92.70116082182076]
Log-likelihood is highly correlated with BLEU when we consider models within the same family.
We observe no correlation between rankings of models across different families.
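Measuring this kind of within-family agreement is a rank-correlation computation; the sketch below uses Spearman's rho on hypothetical numbers, which are placeholders rather than results from the paper.

```python
# Rank correlation between per-model log-likelihood and BLEU.
# All numbers are hypothetical placeholders, not the paper's results.
from scipy.stats import spearmanr

log_likelihoods = [-1.20, -1.05, -0.98, -0.91]  # hypothetical dev-set values
bleu_scores = [21.3, 23.8, 24.9, 26.1]          # hypothetical BLEU, same models
rho, _ = spearmanr(log_likelihoods, bleu_scores)
print(rho)  # close to +1 would mirror the within-family finding
```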
arXiv Detail & Related papers (2020-02-17T20:13:35Z)