Crowdsourced Adaptive Surveys
- URL: http://arxiv.org/abs/2401.12986v1
- Date: Tue, 16 Jan 2024 04:05:25 GMT
- Title: Crowdsourced Adaptive Surveys
- Authors: Yamil Velez
- Abstract summary: This paper introduces a crowdsourced adaptive survey methodology (CSAS).
The method converts open-ended text provided by participants into Likert-style items.
It applies a multi-armed bandit algorithm to determine which user-provided questions should be prioritized in the survey.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Public opinion surveys are vital for informing democratic decision-making,
but responding to rapidly changing information environments and measuring
beliefs within niche communities can be challenging for traditional survey
methods. This paper introduces a crowdsourced adaptive survey methodology
(CSAS) that unites advances in natural language processing and adaptive
algorithms to generate question banks that evolve with user input. The CSAS
method converts open-ended text provided by participants into Likert-style
items and applies a multi-armed bandit algorithm to determine which
user-provided questions should be prioritized in the survey. The method's adaptive
nature allows for the exploration of new survey questions, while imposing
minimal costs in survey length. Applications in the domains of Latino
information environments and issue importance showcase CSAS's ability to
identify claims or issues that might otherwise be difficult to track using
standard approaches. I conclude by discussing the method's potential for
studying topics where participant-generated content might improve our
understanding of public opinion.
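The paper does not ship code, but the two steps described above, rewriting participant text into Likert-style items and adaptively prioritizing those items, can be sketched briefly. The sketch below is a minimal illustration under stated assumptions: a hypothetical rewrite_as_likert stub stands in for the LLM rewriting step, the reward is whether a respondent endorses an item, and Thompson sampling over Beta posteriors plays the role of the multi-armed bandit; none of this is the paper's actual implementation.

```python
import random


def rewrite_as_likert(text):
    # Placeholder for the NLP step: in practice an LLM would rewrite the free
    # text into a standardized agree/disagree statement. Hypothetical stub.
    return f"Please rate your agreement: {text.strip()}"


class CrowdsourcedItemBank:
    """Illustrative CSAS-style item bank: participant text becomes Likert items,
    and Thompson sampling decides which user-contributed items to show next."""

    def __init__(self):
        self.items = []       # Likert-style item wordings
        self.successes = []   # endorsements + 1 (Beta prior alpha)
        self.failures = []    # non-endorsements + 1 (Beta prior beta)

    def add_open_ended_response(self, text):
        self.items.append(rewrite_as_likert(text))
        self.successes.append(1)   # Beta(1, 1) prior: new items still get explored
        self.failures.append(1)

    def select_items(self, k=3):
        # Thompson sampling: draw from each item's Beta posterior, keep top-k.
        draws = [random.betavariate(s, f)
                 for s, f in zip(self.successes, self.failures)]
        ranked = sorted(range(len(self.items)), key=lambda i: draws[i], reverse=True)
        return ranked[:k]

    def record_response(self, item_index, endorsed):
        if endorsed:
            self.successes[item_index] += 1
        else:
            self.failures[item_index] += 1


# Toy usage (the example responses are invented)
bank = CrowdsourcedItemBank()
bank.add_open_ended_response("Local news coverage of my community is shrinking")
bank.add_open_ended_response("Election information is hard to verify online")
print([bank.items[i] for i in bank.select_items(k=1)])
```

Because every new item starts from a Beta(1, 1) prior, recently contributed questions are still drawn occasionally, which mirrors the abstract's point that new questions can be explored at minimal cost in survey length.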
Related papers
- AutoSurvey: Large Language Models Can Automatically Write Surveys [77.0458309675818]
This paper introduces AutoSurvey, a speedy and well-organized methodology for automating the creation of comprehensive literature surveys.
Traditional survey paper creation faces challenges due to the vast volume and complexity of information.
Our contributions include a comprehensive solution to the survey problem, a reliable evaluation method, and experimental validation demonstrating AutoSurvey's effectiveness.
arXiv Detail & Related papers (2024-06-10T12:56:06Z) - CLARINET: Augmenting Language Models to Ask Clarification Questions for Retrieval [52.134133938779776]
We present CLARINET, a system that asks informative clarification questions by choosing questions whose answers would maximize certainty in the correct candidate.
Our approach works by augmenting a large language model (LLM) to condition on a retrieval distribution, finetuning end-to-end to generate the question that would have maximized the rank of the true candidate at each turn.
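As a rough illustration of that selection criterion only (CLARINET itself finetunes an LLM end-to-end, which is not reproduced here), the question whose answer maximizes expected certainty in the correct candidate can be computed with Bayes' rule over a toy retrieval distribution; all candidate names, questions, and probabilities below are invented.

```python
def expected_certainty(prior, answer_likelihoods):
    """Expected top-candidate posterior probability after asking one question.

    prior: dict candidate -> P(candidate)
    answer_likelihoods: dict answer -> dict candidate -> P(answer | candidate)
    """
    total = 0.0
    for lik in answer_likelihoods.values():
        p_answer = sum(prior[c] * lik[c] for c in prior)   # P(answer)
        if p_answer == 0:
            continue
        posterior = {c: prior[c] * lik[c] / p_answer for c in prior}  # Bayes' rule
        total += p_answer * max(posterior.values())
    return total


def choose_question(prior, questions):
    # Pick the clarification question that maximizes expected certainty.
    return max(questions, key=lambda q: expected_certainty(prior, questions[q]))


# Toy retrieval distribution over three candidate documents (numbers made up)
prior = {"doc_a": 0.5, "doc_b": 0.3, "doc_c": 0.2}
questions = {
    "Is the request about pricing?": {
        "yes": {"doc_a": 0.9, "doc_b": 0.2, "doc_c": 0.1},
        "no":  {"doc_a": 0.1, "doc_b": 0.8, "doc_c": 0.9},
    },
    "Is the request about a refund?": {
        "yes": {"doc_a": 0.5, "doc_b": 0.5, "doc_c": 0.4},
        "no":  {"doc_a": 0.5, "doc_b": 0.5, "doc_c": 0.6},
    },
}
print(choose_question(prior, questions))  # the pricing question is more informative
```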
arXiv Detail & Related papers (2024-04-28T18:21:31Z) - Fast and Adaptive Questionnaires for Voting Advice Applications [6.91754515704176]
This work introduces an adaptive questionnaire approach that selects subsequent questions based on users' previous answers.
We validated our approach using the Smartvote dataset from the Swiss Federal elections in 2019.
Our findings indicate that employing the I model both as encoder and decoder, combined with a PosteriorRMSE method for question selection, significantly improves the accuracy of recommendations.
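A minimal sketch of that adaptive loop, not the paper's encoder-decoder model: here a simple matching-pool imputation stands in for the decoder, the toy answer data are random, and the next question is chosen to minimize the expected RMSE of the reconstructed answer profile (a stand-in for the PosteriorRMSE criterion named above).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "population": 200 past respondents answering 5 questions on a 1-5 scale.
population = rng.integers(1, 6, size=(200, 5)).astype(float)
n_questions = population.shape[1]


def matching_pool(known):
    # Past respondents whose answers agree with the current user's known answers.
    mask = np.ones(len(population), dtype=bool)
    for q, a in known.items():
        mask &= population[:, q] == a
    return population[mask] if mask.any() else population


def expected_rmse_if_asked(known, q):
    # Average reconstruction error after observing question q, with q's answer
    # distribution taken from the matching pool (mean imputation as "decoder").
    pool = matching_pool(known)
    errors = []
    for profile in pool:
        trial = {**known, q: profile[q]}
        predicted = matching_pool(trial).mean(axis=0)
        errors.append(np.sqrt(np.mean((predicted - profile) ** 2)))
    return float(np.mean(errors))


def select_next_question(known):
    unanswered = [q for q in range(n_questions) if q not in known]
    return min(unanswered, key=lambda q: expected_rmse_if_asked(known, q))


print("first question to ask:", select_next_question({}))
```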
arXiv Detail & Related papers (2024-04-02T11:55:50Z) - PAQA: Toward ProActive Open-Retrieval Question Answering [34.883834970415734]
This work aims to tackle the challenge of generating relevant clarifying questions by taking into account the inherent ambiguities present in both user queries and documents.
We propose PAQA, an extension to the existing AmbiNQ dataset, incorporating clarifying questions.
We then evaluate various models and assess how passage retrieval impacts ambiguity detection and the generation of clarifying questions.
arXiv Detail & Related papers (2024-02-26T14:40:34Z) - Asking Multimodal Clarifying Questions in Mixed-Initiative Conversational Search [89.1772985740272]
In mixed-initiative conversational search systems, clarifying questions are used to help users who struggle to express their intentions in a single query.
We hypothesize that in scenarios where multimodal information is pertinent, the clarification process can be improved by using non-textual information.
We collect a dataset named Melon that contains over 4k multimodal clarifying questions, enriched with over 14k images.
Several analyses are conducted to understand the importance of multimodal contents during the query clarification phase.
arXiv Detail & Related papers (2024-02-12T16:04:01Z) - What should I Ask: A Knowledge-driven Approach for Follow-up Questions Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We constructed a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge.
We then propose a two-staged knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z) - Open vs Closed-ended questions in attitudinal surveys -- comparing, combining, and interpreting using natural language processing [3.867363075280544]
Topic Modeling could significantly reduce the time to extract information from open-ended responses.
Our research uses Topic Modeling to extract information from open-ended questions and compares its performance with closed-ended responses.
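A minimal sketch of that topic-modeling step using scikit-learn's LDA; the example responses are invented and the study's actual preprocessing and model settings are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented open-ended survey responses standing in for real answers.
responses = [
    "Buses are too infrequent and always crowded in the evening",
    "I would cycle more if there were protected bike lanes",
    "Parking near the station is expensive and hard to find",
    "More frequent buses on weekends would help a lot",
    "Bike lanes end abruptly, which feels unsafe",
    "Park-and-ride lots fill up before 8 am",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Print the top terms per topic; these summaries can then be compared against
# the categories offered by the closed-ended version of the question.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```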
arXiv Detail & Related papers (2022-05-03T06:01:03Z) - Advances and Challenges in Conversational Recommender Systems: A Survey [133.93908165922804]
We provide a systematic review of the techniques used in current conversational recommender systems (CRSs).
We summarize the key challenges of developing CRSs into five directions.
These research directions involve multiple research fields like information retrieval (IR), natural language processing (NLP), and human-computer interaction (HCI).
arXiv Detail & Related papers (2021-01-23T08:53:15Z) - Questionnaire analysis to define the most suitable survey for port-noise investigation [0.0]
The paper analyses a sample of questions suited to this specific research, drawn from the wide database of questionnaires proposed internationally for subjective investigations.
The questionnaire will be optimized for distribution in the TRIPLO project (TRansports and Innovative sustainable connections between Ports and LOgistic platforms).
arXiv Detail & Related papers (2020-07-14T08:52:55Z) - Visual Question Answering with Prior Class Semantics [50.845003775809836]
We show how to exploit additional information pertaining to the semantics of candidate answers.
We extend the answer prediction process with a regression objective in a semantic space.
Our method brings improvements in consistency and accuracy over a range of question types.
arXiv Detail & Related papers (2020-05-04T02:46:31Z)
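One way such a regression objective in a semantic answer space could look (an illustrative sketch, not the paper's implementation): alongside the usual classification loss, add a term pulling the model's joint image-question representation toward a fixed embedding of the ground-truth answer. The cosine distance, the loss weighting alpha, and the shared 32-dimensional space are assumptions made for the example.

```python
import torch
import torch.nn.functional as F


def vqa_loss(logits, fused_features, answer_embeddings, targets, alpha=0.5):
    """Classification loss plus a regression term in the answer-embedding space.

    logits:            (batch, n_answers) answer scores from the VQA model
    fused_features:    (batch, d) joint image-question representation
    answer_embeddings: (n_answers, d) fixed semantic embeddings of candidate answers
    targets:           (batch,) index of the ground-truth answer
    """
    cls_loss = F.cross_entropy(logits, targets)
    target_vecs = answer_embeddings[targets]  # (batch, d) embeddings of true answers
    reg_loss = 1.0 - F.cosine_similarity(fused_features, target_vecs, dim=-1).mean()
    return cls_loss + alpha * reg_loss


# Toy usage with random tensors (shapes only; no real VQA model involved)
logits = torch.randn(4, 10)
features = torch.randn(4, 32)
answer_embeddings = torch.randn(10, 32)
targets = torch.tensor([1, 3, 5, 7])
print(vqa_loss(logits, features, answer_embeddings, targets))
```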
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences.