Comparative Snippet Generation
- URL: http://arxiv.org/abs/2206.05473v1
- Date: Sat, 11 Jun 2022 09:02:27 GMT
- Title: Comparative Snippet Generation
- Authors: Saurabh Jain, Yisong Miao, Min-Yen Kan
- Abstract summary: We generate a single-sentence, comparative response from a given positive and a negative opinion.
We contribute the first dataset for this task, and a performance analysis of a pre-trained BERT model to generate such snippets.
- Score: 18.920306511866553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We model product reviews to generate comparative responses consisting of
positive and negative experiences regarding the product. Specifically, we
generate a single-sentence, comparative response from a given positive and a
negative opinion. We contribute the first dataset for this task of Comparative
Snippet Generation from contrasting opinions regarding a product, and a
performance analysis of a pre-trained BERT model to generate such snippets.
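The exact modeling setup is not specified in this summary; as a rough, hypothetical sketch of the task interface, the snippet below warm-starts a BERT-to-BERT encoder-decoder (via Hugging Face's EncoderDecoderModel) and feeds the positive and negative opinions as one paired input. The pairing scheme and model choice are assumptions, not the paper's reported configuration.

```python
# Hypothetical sketch: a BERT-to-BERT encoder-decoder that takes a positive and a
# negative opinion and generates a single comparative snippet. The input format
# (text-pair encoding) and model choice are assumptions, not the paper's exact setup.
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
# Required for generation with a BERT decoder.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

positive = "The battery easily lasts two days."
negative = "The camera struggles in low light."

# Encode the contrasting opinions as a single source sequence.
inputs = tokenizer(positive, negative, return_tensors="pt", truncation=True)
# Before fine-tuning on the comparative-snippet dataset, the output is not meaningful;
# this only illustrates the intended input/output interface.
snippet_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                             max_length=32, num_beams=4)
print(tokenizer.decode(snippet_ids[0], skip_special_tokens=True))
```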
Related papers
- AMRFact: Enhancing Summarization Factuality Evaluation with AMR-Driven Negative Samples Generation [57.8363998797433]
We propose AMRFact, a framework that generates perturbed summaries using Abstract Meaning Representations (AMRs).
Our approach parses factually consistent summaries into AMR graphs and injects controlled factual inconsistencies to create negative examples, allowing for coherent factually inconsistent summaries to be generated with high error-type coverage.
arXiv Detail & Related papers (2023-11-16T02:56:29Z)
- Mitigating Pooling Bias in E-commerce Search via False Negative Estimation [25.40402675846542]
Bias-mitigating Hard Negative Sampling (BHNS) is a novel negative sampling strategy tailored to identify and adjust for false negatives.
Our experiments in the search setting confirm BHNS as effective for practical e-commerce use.
arXiv Detail & Related papers (2023-11-11T00:22:57Z)
- Comparing Apples to Apples: Generating Aspect-Aware Comparative Sentences from User Reviews [6.428416845132992]
We show that our pipeline generates fluent and diverse comparative sentences.
We run experiments on the relevance and fidelity of our generated sentences in a human evaluation study.
arXiv Detail & Related papers (2023-07-05T23:19:18Z)
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
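A minimal sketch of the sampling step described above, assuming the SR model exposes per-step scores over the item catalogue; the softmax temperature and the masking of the observed item are illustrative choices, not the paper's exact procedure.

```python
# Sketch of model-aware negative sampling for sequential recommendation (assumed interface).
import torch

def sample_negatives(step_scores: torch.Tensor,
                     target_items: torch.Tensor,
                     temperature: float = 1.0) -> torch.Tensor:
    """step_scores: (batch, num_items) logits from the current SR model at one time step.
    target_items: (batch,) indices of the ground-truth next items.
    Returns one sampled negative item per sequence."""
    probs = torch.softmax(step_scores / temperature, dim=-1)
    # Never sample the observed positive item as a negative.
    probs = probs.scatter(1, target_items.unsqueeze(1), 0.0)
    probs = probs / probs.sum(dim=-1, keepdim=True)
    return torch.multinomial(probs, num_samples=1).squeeze(1)

# Example: 2 sequences, a catalogue of 5 items.
scores = torch.randn(2, 5)
targets = torch.tensor([3, 1])
negatives = sample_negatives(scores, targets)
```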
- Feature Extraction Framework based on Contrastive Learning with Adaptive Positive and Negative Samples [1.4467794332678539]
The CL-FEFA framework is suitable for unsupervised, supervised, and semi-supervised single-view feature extraction.
CL-FEFA adaptively constructs positive and negative samples from the results of feature extraction.
CL-FEFA considers the mutual information between positive samples, that is, similar samples in potential structures, which provides theoretical support for its advantages in feature extraction.
arXiv Detail & Related papers (2022-01-11T13:34:03Z)
- An Evaluation Study of Generative Adversarial Networks for Collaborative Filtering [75.83628561622287]
This work successfully replicates the results published in the original paper and discusses the impact of certain differences between the CFGAN framework and the model used in the original evaluation.
The work further expands the experimental analysis by comparing CFGAN against a selection of simple, well-known, and properly optimized baselines, observing that CFGAN is not consistently competitive against them despite its high computational cost.
arXiv Detail & Related papers (2022-01-05T20:53:27Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses constrain the clustering of mini-batch samples at both the sample and class levels.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
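A hedged sketch of the idea above: an InfoNCE-style term over the rows of the soft cluster-assignment matrix (sample view) plus the same term over its columns (class view). The temperature and normalization are assumptions; the paper's exact loss may differ.

```python
# Illustrative "doubly contrastive" loss over soft cluster assignments (assumed form).
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """a, b: (n, d) matched views; row i of `a` is positive with row i of `b`."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                  # (n, n) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)

def dcdc_loss(p: torch.Tensor, p_aug: torch.Tensor) -> torch.Tensor:
    """p, p_aug: (batch, num_clusters) softmax outputs for originals and augmentations."""
    sample_view = info_nce(p, p_aug)                  # rows: per-sample class distributions
    class_view = info_nce(p.t(), p_aug.t())           # columns: per-class sample distributions
    return sample_view + class_view

# Example with a batch of 8 samples and 3 clusters.
p = torch.softmax(torch.randn(8, 3), dim=-1)
p_aug = torch.softmax(torch.randn(8, 3), dim=-1)
loss = dcdc_loss(p, p_aug)
```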
- Contrastive Learning with Adversarial Perturbations for Conditional Text Generation [49.055659008469284]
We propose a principled method to generate positive and negative samples for contrastive learning of seq2seq models.
Specifically, we generate negative examples by adding small perturbations to the input sequence to minimize its conditional likelihood.
We empirically show that our proposed method significantly improves the generalization of seq2seq models on three text generation tasks.
arXiv Detail & Related papers (2020-12-14T06:20:27Z)
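A hedged sketch of the negative-example construction described above: an FGSM-style step on the source embeddings in the direction that lowers the conditional likelihood of the target (i.e., raises the negative log-likelihood). The `inputs_embeds`/`labels` interface mirrors Hugging Face seq2seq models and is an assumption here, not the paper's exact implementation.

```python
# Sketch: perturb source embeddings to create a likelihood-lowering negative example.
import torch

def adversarial_negative_embeds(model, src_embeds: torch.Tensor,
                                attention_mask: torch.Tensor,
                                labels: torch.Tensor,
                                epsilon: float = 1e-2) -> torch.Tensor:
    """Returns perturbed source embeddings to be used as a hard negative (assumed model API)."""
    src_embeds = src_embeds.detach().clone().requires_grad_(True)
    nll = model(inputs_embeds=src_embeds, attention_mask=attention_mask,
                labels=labels).loss               # negative log-likelihood of the target
    grad, = torch.autograd.grad(nll, src_embeds)
    # Moving along +grad increases the NLL, i.e. decreases the conditional likelihood.
    return (src_embeds + epsilon * grad.sign()).detach()
```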
- Adversarial learning for product recommendation [0.0]
This work proposes a conditional, coupled generative adversarial network (RecommenderGAN) that learns to produce samples from a joint distribution between (view, buy) behaviors.
Our results are preliminary; however, they suggest that the recommendations produced by the model may provide utility for consumers and digital retailers.
arXiv Detail & Related papers (2020-07-07T23:35:36Z)
- Self-Adversarial Learning with Comparative Discrimination for Text Generation [111.18614166615968]
We propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation.
During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples.
Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity of the generated text.
arXiv Detail & Related papers (2020-01-31T07:50:25Z)
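A minimal sketch of the comparative reward described above, assuming sentence-level embeddings for the current and previously generated samples; the discriminator architecture and the zero-centred reward are illustrative assumptions, and the REINFORCE-style generator update is omitted.

```python
# Sketch: a comparative discriminator that rewards the generator only when its current
# sample is judged better than its own earlier sample (assumed architecture).
import torch
import torch.nn as nn

class ComparativeDiscriminator(nn.Module):
    """Scores how likely `current` is better than `previous` (both as sentence embeddings)."""
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, current: torch.Tensor, previous: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.scorer(torch.cat([current, previous], dim=-1))).squeeze(-1)

def self_adversarial_reward(disc: ComparativeDiscriminator,
                            current: torch.Tensor, previous: torch.Tensor) -> torch.Tensor:
    # Zero-centred: positive only when the current sample beats the earlier one.
    return disc(current, previous) - 0.5

# Example with a batch of 4 sentence embeddings of size 128.
disc = ComparativeDiscriminator(128)
reward = self_adversarial_reward(disc, torch.randn(4, 128), torch.randn(4, 128))
```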