Click-Feedback Retrieval
- URL: http://arxiv.org/abs/2305.00052v1
- Date: Fri, 28 Apr 2023 19:03:03 GMT
- Title: Click-Feedback Retrieval
- Authors: Zeyu Wang, Yu Wu
- Abstract summary: We study a setting where feedback is provided through users clicking on liked and disliked search results.
We construct a new benchmark, termed click-feedback retrieval, based on a large-scale dataset in the fashion domain.
- Score: 10.203235400791845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Retrieving target information based on an input query is of fundamental
importance in many real-world applications. In practice, it is not uncommon for
the initial search to fail, in which case additional feedback is needed to
guide the search process. In this work, we study a setting where the
feedback is provided through users clicking on liked and disliked search
results. We believe this form of feedback is of great practical interest for
its convenience and efficiency. To facilitate future work in this direction, we
construct a new benchmark, termed click-feedback retrieval, based on a
large-scale dataset in the fashion domain. We demonstrate that incorporating
click feedback can drastically improve retrieval performance, which
validates the value of the proposed setting. We also introduce several methods
for utilizing click feedback during training, and show that click-feedback-guided
training can significantly enhance retrieval quality. We hope further
exploration in this direction can bring new insights into building more efficient
and user-friendly search engines.
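The abstract does not commit to a particular feedback mechanism, so the following is only a minimal sketch of one natural baseline for this setting: a Rocchio-style update over embeddings, in which the query vector is pulled toward clicked (liked) results and pushed away from disliked ones. The function names, weights, and cosine-similarity retrieval are illustrative assumptions, not the paper's method.

```python
import numpy as np

def refine_query(query_vec, liked_vecs, disliked_vecs,
                 alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style update: move the query embedding toward liked results
    and away from disliked ones (weights are illustrative defaults)."""
    q = alpha * np.asarray(query_vec, dtype=float)
    if len(liked_vecs) > 0:
        q += beta * np.mean(liked_vecs, axis=0)
    if len(disliked_vecs) > 0:
        q -= gamma * np.mean(disliked_vecs, axis=0)
    return q / (np.linalg.norm(q) + 1e-8)  # renormalize for cosine retrieval

def retrieve(query_vec, gallery, k=10):
    """Return indices of the top-k gallery rows by cosine similarity."""
    norms = np.linalg.norm(gallery, axis=1, keepdims=True) + 1e-8
    scores = (gallery / norms) @ query_vec
    return np.argsort(-scores)[:k]
```

After each round of clicks, the refined vector would simply be fed back into retrieve, giving the iterative query-feedback loop the paper studies.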
Related papers
- CueLearner: Bootstrapping and local policy adaptation from relative feedback [31.015306281489327]
Relative feedback offers a balance between usability and information richness.
Previous research has shown that relative feedback can be used to enhance policy search methods.
We introduce a novel method to learn from relative feedback and combine it with off-policy reinforcement learning.
arXiv Detail & Related papers (2025-07-07T07:54:28Z)
- NExT-Search: Rebuilding User Feedback Ecosystem for Generative AI Search [108.42163676745085]
We envision NExT-Search, a next-generation paradigm designed to reintroduce fine-grained, process-level feedback into generative AI search.
NExT-Search integrates two complementary modes: User Debug Mode, which allows engaged users to intervene at key stages, and Shadow User Mode, where a personalized user agent simulates user preferences.
arXiv Detail & Related papers (2025-05-20T17:59:13Z)
- iEBAKER: Improved Remote Sensing Image-Text Retrieval Framework via Eliminate Before Align and Keyword Explicit Reasoning [80.44805667907612]
iEBAKER is an innovative strategy to filter out weakly correlated sample pairs.
We introduce an alternative Sort After Reversed Retrieval (SAR) strategy.
We incorporate a Keyword Explicit Reasoning (KER) module to exploit subtle distinctions between key concepts.
arXiv Detail & Related papers (2025-04-08T03:40:19Z)
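The summary names the strategy but not its mechanics; one plausible reading of "eliminate before align" is to score candidate image-text pairs with a pretrained encoder and drop low-similarity pairs before alignment training. The cosine scorer and threshold below are assumptions for illustration, not the paper's recipe.

```python
import numpy as np

def eliminate_before_align(image_embs, text_embs, threshold=0.2):
    """Keep only sample pairs whose image-text cosine similarity clears a
    threshold, discarding weakly correlated pairs prior to alignment
    training (threshold and scoring are illustrative assumptions)."""
    img = image_embs / (np.linalg.norm(image_embs, axis=1, keepdims=True) + 1e-8)
    txt = text_embs / (np.linalg.norm(text_embs, axis=1, keepdims=True) + 1e-8)
    sims = np.sum(img * txt, axis=1)        # per-pair cosine similarity
    return np.where(sims >= threshold)[0]   # indices of retained pairs
```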
- Interactive Visualization Recommendation with Hier-SUCB [52.11209329270573]
We propose an interactive personalized visualization recommendation (PVisRec) system that learns from user feedback on previous interactions.
For more interactive and accurate recommendations, we propose Hier-SUCB, a contextual semi-bandit in the PVisRec setting.
arXiv Detail & Related papers (2025-02-05T17:14:45Z)
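Hier-SUCB itself adds a hierarchical structure and semi-bandit feedback; as background, the basic recommend-observe-update loop it refines resembles the standard LinUCB sketch below. This is illustrative context only, not the paper's algorithm.

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB-style contextual bandit: recommend the arm with the
    highest upper confidence bound, then update from observed feedback."""
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)    # regularized Gram matrix of contexts
        self.b = np.zeros(dim)  # reward-weighted context sum
        self.alpha = alpha      # exploration strength

    def select(self, contexts):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b  # ridge-regression estimate of preferences
        ucb = [c @ theta + self.alpha * np.sqrt(c @ A_inv @ c)
               for c in contexts]
        return int(np.argmax(ucb))

    def update(self, context, reward):
        self.A += np.outer(context, context)
        self.b += reward * context
```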
- Adaptive Querying for Reward Learning from Human Feedback [5.587293092389789]
Learning from human feedback is a popular approach to train robots to adapt to user preferences and improve safety.
We examine how to learn a penalty function associated with unsafe behaviors, such as side effects, using multiple forms of human feedback.
We employ an iterative, two-phase approach that first selects critical states for querying, and then uses information gain to choose a feedback format for each query.
arXiv Detail & Related papers (2024-12-11T00:02:48Z)
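As a rough illustration of the second phase, selecting a feedback format by information gain can be sketched as choosing the format with the lowest expected posterior entropy. The data structures below (a discrete posterior and per-format outcome distributions) are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def pick_feedback_format(posterior, formats):
    """Choose the feedback format with the highest expected information
    gain about the penalty function. `formats` maps a format name to a
    list of (probability_of_answer, posterior_after_answer) pairs."""
    prior_h = entropy(posterior)
    gains = {}
    for name, outcomes in formats.items():
        expected_h = sum(p * entropy(post) for p, post in outcomes)
        gains[name] = prior_h - expected_h  # expected entropy reduction
    return max(gains, key=gains.get)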
- Performance Evaluation in Multimedia Retrieval [7.801919915773585]
Performance evaluation in multimedia retrieval relies heavily on retrieval experiments.
These can involve human-in-the-loop and machine-only settings for the retrieval process itself and the subsequent verification of results.
We present a formal model to express all relevant aspects of such retrieval experiments, as well as a flexible open-source evaluation infrastructure.
arXiv Detail & Related papers (2024-10-09T08:06:15Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that demonstrating its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Relevance feedback strategies for recall-oriented neural information retrieval [0.0]
This research proposes a more recall-oriented approach to reducing review effort.
More specifically, it iteratively re-ranks documents based on user relevance feedback.
Our results show that this method can reduce review effort by between 17.85% and 59.04%, compared to a baseline approach.
arXiv Detail & Related papers (2023-11-25T19:50:41Z)
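The summary leaves the re-ranking procedure abstract; a generic sketch of feedback-driven iterative re-ranking, with a logistic-regression relevance model standing in for whatever neural ranker the paper uses, might look like this. The batch size, model choice, and loop structure are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def review_loop(doc_vecs, oracle_label, batch_size=10, rounds=20):
    """Iteratively re-rank: the user labels the current top batch, a
    relevance model is refit on all feedback so far, and the remaining
    documents are re-ranked (a generic sketch, not the paper's pipeline)."""
    labeled_idx, labels = [], []
    ranking = list(range(len(doc_vecs)))  # e.g. a first-pass retrieval order
    for _ in range(rounds):
        if not ranking:
            break
        batch, ranking = ranking[:batch_size], ranking[batch_size:]
        labeled_idx += batch
        labels += [oracle_label(i) for i in batch]  # simulated user clicks
        if len(set(labels)) < 2 or not ranking:
            continue  # need both classes before the model can be fit
        model = LogisticRegression().fit(doc_vecs[labeled_idx], labels)
        scores = model.predict_proba(doc_vecs[ranking])[:, 1]
        ranking = [ranking[i] for i in np.argsort(-scores)]
    return labeled_idx, labels
```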
- Exploiting Correlated Auxiliary Feedback in Parameterized Bandits [56.84649080789685]
We study a novel variant of the parameterized bandits problem in which the learner can observe additional auxiliary feedback that is correlated with the observed reward.
The auxiliary feedback is readily available in many real-life applications; e.g., an online platform that wants to recommend its best-rated services can observe a user's rating of a service (the reward) and collect additional information such as service delivery time (the auxiliary feedback).
arXiv Detail & Related papers (2023-11-05T17:27:06Z)
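The summary does not say how the auxiliary signal is used; a standard statistical tool for exploiting a correlated side observation with a known mean is a control variate, which shrinks the variance of the reward estimate. The sketch below illustrates that generic idea, not the paper's estimator.

```python
import numpy as np

def control_variate_estimate(rewards, aux, aux_mean):
    """Variance-reduced estimate of the mean reward using correlated
    auxiliary feedback whose mean is known (e.g. a historical average
    delivery time). A generic control-variate sketch."""
    rewards = np.asarray(rewards, dtype=float)
    aux = np.asarray(aux, dtype=float)
    cov = np.cov(rewards, aux)                  # 2x2 sample covariance
    beta = cov[0, 1] / (cov[1, 1] + 1e-12)      # estimated optimal coefficient
    return rewards.mean() - beta * (aux.mean() - aux_mean)
```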
- A Deep Reinforcement Learning Approach for Interactive Search with Sentence-level Feedback [12.712416630402119]
Interactive search can provide a better experience by incorporating interaction feedback from the users.
Existing state-of-the-art (SOTA) systems use reinforcement learning (RL) models to incorporate the interactions.
Yet such feedback requires extensive RL action space exploration and large amounts of annotated data.
This work proposes a new deep Q-learning (DQ) approach, DQrank.
arXiv Detail & Related papers (2023-10-03T18:45:21Z)
- Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking [56.80065604034095]
We introduce a kNN approach that re-ranks documents based on their similarity with the query and the documents the user considers relevant.
To evaluate our different integration strategies, we transform four existing information retrieval datasets into the relevance feedback scenario.
arXiv Detail & Related papers (2022-10-19T16:19:37Z)
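A minimal sketch of the described kNN idea: score each document by an interpolation of its similarity to the query and its similarity to the centroid of the user-marked relevant documents. The interpolation weight and the centroid formulation are illustrative assumptions.

```python
import numpy as np

def knn_feedback_rerank(query_vec, doc_vecs, relevant_idx, lam=0.5, k=100):
    """Re-rank documents by mixing query similarity with similarity to the
    centroid of user-marked relevant documents (weights are illustrative)."""
    def normalize(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)
    docs = normalize(doc_vecs)
    q = normalize(np.asarray(query_vec, dtype=float))
    query_scores = docs @ q                            # similarity to the query
    rel_centroid = normalize(docs[relevant_idx].mean(axis=0))
    feedback_scores = docs @ rel_centroid              # similarity to feedback
    scores = (1 - lam) * query_scores + lam * feedback_scores
    return np.argsort(-scores)[:k]
```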
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
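A compact sketch of the simulation idea: binary "user" feedback is synthesized by comparing a sampled answer against the supervised gold answers, and the model is updated with a REINFORCE-style bandit step. The PyTorch-style model.sample_answer interface and the exact-match reward are assumed for illustration, not the paper's exact setup.

```python
def simulate_feedback(predicted_span, gold_spans):
    """Simulate a user's binary feedback from supervised data: the 'user'
    approves a predicted answer iff it matches a gold answer."""
    return 1.0 if predicted_span in gold_spans else 0.0

def bandit_step(model, optimizer, question, context, gold_spans):
    """One bandit learning step: sample an answer, obtain simulated reward,
    and reinforce. Assumes `model.sample_answer` returns a span and its
    log-probability as a differentiable tensor (hypothetical interface)."""
    span, log_prob = model.sample_answer(question, context)
    reward = simulate_feedback(span, gold_spans)
    loss = -reward * log_prob  # policy-gradient surrogate loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```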
- Improving Rating and Relevance with Point-of-Interest Recommender System [0.0]
We develop a deep neural network architecture to model query-item relevance in the presence of both collaborative and content information.
The application of these learned representations to a large-scale dataset resulted in significant improvements.
arXiv Detail & Related papers (2022-02-17T16:43:17Z)
- PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
arXiv Detail & Related papers (2021-06-09T14:10:50Z)
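Of the two ingredients in the title, relabeling is the easiest to sketch: whenever the learned reward model is updated from new human feedback, the rewards of all stored transitions are recomputed so off-policy learning always uses the current reward estimate. The buffer layout below is an assumed structure, not PEBBLE's actual code.

```python
def relabel_replay_buffer(buffer, reward_model):
    """Recompute the reward of every stored transition under the current
    learned reward model, so off-policy updates reflect the latest human
    feedback (`buffer` as a list of dicts is an assumed structure)."""
    for transition in buffer:
        transition["reward"] = reward_model(transition["state"],
                                            transition["action"])
```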
- Information Directed Reward Learning for Reinforcement Learning [64.33774245655401]
We learn a model of the reward function that allows standard RL algorithms to achieve high expected return with as few expert queries as possible.
In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types.
We support our findings with extensive evaluations in multiple environments and with different types of queries.
arXiv Detail & Related papers (2021-02-24T18:46:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.