Quotation Recommendation and Interpretation Based on Transformation from
Queries to Quotations
- URL: http://arxiv.org/abs/2105.14189v2
- Date: Tue, 1 Jun 2021 06:07:23 GMT
- Title: Quotation Recommendation and Interpretation Based on Transformation from
Queries to Quotations
- Authors: Lingzhi Wang, Xingshan Zeng, Kam-Fai Wong
- Abstract summary: We introduce a transformation matrix that directly maps the query representations to quotation representations.
Experiments on two datasets in English and Chinese show that our model outperforms previous state-of-the-art models.
- Score: 17.011179660418538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To help individuals express themselves better, quotation recommendation is
receiving growing attention. Nevertheless, most prior efforts focus on modeling
quotations and queries separately and ignore the relationship between the
quotations and the queries. In this work, we introduce a transformation matrix
that directly maps the query representations to quotation representations. To
better learn the mapping relationship, we employ a mapping loss that minimizes
the distance between the two semantic spaces (one for quotations and one for
mapped queries). Furthermore, we explore using the words in history queries to
interpret the figurative language of quotations, where quotation-aware
attention is applied on top of history queries to highlight the indicator
words. Experiments on two datasets in English and Chinese show that our model
outperforms previous state-of-the-art models.
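The transformation-matrix idea in the abstract can be made concrete with a small sketch. This is a toy stand-in, not the paper's implementation: the dimensions, the synthetic "encoder outputs" `Q` and `Z`, and the plain gradient-descent loop are all assumptions for illustration; the paper trains learned neural encoders, while here the mapping loss is simply the mean squared distance between mapped queries (`Q @ W`) and quotation embeddings (`Z`).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's encoder sizes differ).
d_query, d_quote, n_pairs = 8, 8, 64

# Stand-ins for encoder outputs: query embeddings and gold-quotation embeddings.
Q = rng.normal(size=(n_pairs, d_query))
# Synthetic ground-truth linear relation plus noise, so a linear map is learnable.
W_true = rng.normal(size=(d_query, d_quote))
Z = Q @ W_true + 0.01 * rng.normal(size=(n_pairs, d_quote))

# Transformation matrix W maps query representations into quotation space.
W = np.zeros((d_query, d_quote))

def mapping_loss(W):
    """Mean squared distance between mapped queries (Q @ W) and quotations Z."""
    diff = Q @ W - Z
    return float(np.mean(diff ** 2))

# Plain gradient descent on the mapping loss (gradient of the mean squared error).
lr = 0.5
for _ in range(500):
    grad = 2.0 * Q.T @ (Q @ W - Z) / (n_pairs * d_quote)
    W -= lr * grad

print(mapping_loss(W))  # residual close to the injected noise floor
```

After training, `Q @ W` lands near the quotation embeddings, which is the relationship the mapping loss is meant to enforce; in the paper this happens jointly with encoder training rather than on fixed vectors.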
Related papers
- What Makes an Ideal Quote? Recommending "Unexpected yet Rational" Quotations via Novelty [66.51974095399409]
We formalize quote recommendation as choosing contextually novel but semantically coherent quotations.
A generative label agent first interprets each quotation and its surrounding context into multi-dimensional deep-meaning labels.
A token-level novelty estimator then reranks candidates while mitigating auto-regressive continuation bias.
arXiv Detail & Related papers (2025-12-15T12:19:37Z)
- Improving Retrieval-augmented Text-to-SQL with AST-based Ranking and Schema Pruning [10.731045939849125]
We focus on Text-to-SQL semantic parsing from the perspective of retrieval-augmented generation.
Motivated by challenges related to the size of commercial database schemata and the deployability of business intelligence solutions, we propose ASTReS, which dynamically retrieves input database information.
arXiv Detail & Related papers (2024-07-03T15:55:14Z) - Adapting Dual-encoder Vision-language Models for Paraphrased Retrieval [55.90407811819347]
We consider the task of paraphrased text-to-image retrieval where a model aims to return similar results given a pair of paraphrased queries.
We train a dual-encoder model starting from a language model pretrained on a large text corpus.
Compared to public dual-encoder models such as CLIP and OpenCLIP, the model trained with our best adaptation strategy achieves a significantly higher ranking similarity for paraphrased queries.
arXiv Detail & Related papers (2024-05-06T06:30:17Z) - Cross-lingual Contextualized Phrase Retrieval [63.80154430930898]
We propose a new task formulation of dense retrieval, cross-lingual contextualized phrase retrieval.
We train our Cross-lingual Contextualized Phrase Retriever (CCPR) using contrastive learning.
On the phrase retrieval task, CCPR surpasses baselines by a significant margin, achieving a top-1 accuracy that is at least 13 points higher.
arXiv Detail & Related papers (2024-03-25T14:46:51Z) - Improving Automatic Quotation Attribution in Literary Novels [21.164701493247794]
Current models for quotation attribution in literary novels assume varying levels of available information in their training and test data.
We benchmark state-of-the-art models on each of these sub-tasks independently, using a large dataset of annotated coreferences and quotations in literary novels.
We also train and evaluate models for the speaker attribution task in particular, showing that a simple sequential prediction model achieves accuracy scores on par with state-of-the-art models.
arXiv Detail & Related papers (2023-07-07T17:37:01Z)
- Design Choices for Crowdsourcing Implicit Discourse Relations: Revealing the Biases Introduced by Task Design [23.632204469647526]
We show that the task design can push annotators towards certain relations.
We conclude that this type of bias should be taken into account when training and testing models.
arXiv Detail & Related papers (2023-04-03T09:04:18Z)
- What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary [68.77983831618685]
We propose to interpret the vector representations produced by dual encoders by projecting them into the model's vocabulary space.
We show that the resulting projections contain rich semantic information, and draw a connection between them and sparse retrieval.
arXiv Detail & Related papers (2022-12-20T16:03:25Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
- Improving Image Captioning with Better Use of Captions [65.39641077768488]
We present a novel image captioning architecture to better explore semantics available in captions and leverage that to enhance both image representation and caption generation.
Our models first construct caption-guided visual relationship graphs that introduce beneficial inductive bias using weakly supervised multi-instance learning.
During generation, the model further incorporates visual relationships using multi-task learning for jointly predicting word and object/predicate tag sequences.
arXiv Detail & Related papers (2020-06-21T14:10:47Z)
- Probing Contextual Language Models for Common Ground with Visual Representations [76.05769268286038]
We design a probing model that evaluates how effective text-only representations are in distinguishing between matching and non-matching visual representations.
Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories.
Visually grounded language models slightly outperform text-only language models in instance retrieval, but greatly under-perform humans.
arXiv Detail & Related papers (2020-05-01T21:28:28Z)
- Using Image Captions and Multitask Learning for Recommending Query Reformulations [11.99358906295761]
We aim to enhance the query recommendation experience for a commercial image search engine.
Our proposed methodology incorporates current state-of-the-art practices from relevant literature.
arXiv Detail & Related papers (2020-03-02T08:22:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents (including all listed papers) and is not responsible for any consequences of its use.