FeedbackMap: a tool for making sense of open-ended survey responses
- URL: http://arxiv.org/abs/2306.15112v1
- Date: Mon, 26 Jun 2023 23:38:24 GMT
- Authors: Doug Beeferman, Nabeel Gillani
- Abstract summary: This demo introduces FeedbackMap, a web-based tool that uses natural language processing techniques to facilitate the analysis of open-ended survey responses.
We discuss the importance of examining survey results from multiple perspectives and the potential biases introduced by summarization methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analyzing open-ended survey responses is a crucial yet challenging task for
social scientists, non-profit organizations, and educational institutions, as
they often face the trade-off between obtaining rich data and the burden of
reading and coding textual responses. This demo introduces FeedbackMap, a
web-based tool that uses natural language processing techniques to facilitate
the analysis of open-ended survey responses. FeedbackMap lets researchers
generate summaries at multiple levels, identify interesting response examples,
and visualize the response space through embeddings. We discuss the importance
of examining survey results from multiple perspectives and the potential biases
introduced by summarization methods, emphasizing the need for critical
evaluation of the representation and omission of respondent voices.
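To make the embedding-based workflow concrete, here is a minimal, self-contained sketch of one of the ideas the abstract describes: embedding open-ended responses and surfacing a representative example. This is not FeedbackMap's implementation; it substitutes a simple bag-of-words embedding and mean cosine similarity for the learned sentence embeddings and summarization methods a tool like FeedbackMap would actually use.

```python
from collections import Counter
import math

def embed(text):
    # Bag-of-words embedding: token -> count. A crude stand-in for the
    # learned sentence embeddings used by tools like FeedbackMap.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_central(responses):
    # The response with the highest total similarity to all responses
    # serves as a naive "representative example" of the response space.
    vecs = [embed(r) for r in responses]
    scores = [sum(cosine(v, w) for w in vecs) for v in vecs]
    return responses[max(range(len(responses)), key=scores.__getitem__)]

responses = [
    "The teachers are supportive and helpful",
    "Teachers are very supportive",
    "Parking near the building is difficult",
]
print(most_central(responses))  # -> "The teachers are supportive and helpful"
```

Note how the representative example over-weights the majority theme (teacher support) and hides the minority concern (parking), illustrating the abstract's caution about which respondent voices a summary represents or omits.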
Related papers
- Joint Learning of Context and Feedback Embeddings in Spoken Dialogue [3.8673630752805446]
  We investigate the possibility of embedding short dialogue contexts and feedback responses in the same representation space using a contrastive learning objective.
  Our results show that the model outperforms humans given the same ranking task and that the learned embeddings carry information about the conversational function of feedback responses.
  arXiv Detail & Related papers (2024-06-11T14:22:37Z)
- ExpertQA: Expert-Curated Questions and Attributed Answers [51.68314045809179]
  We conduct human evaluation of responses from a few representative systems along various axes of attribution and factuality.
  We collect expert-curated questions from 484 participants across 32 fields of study, and then ask the same experts to evaluate generated responses to their own questions.
  The output of our analysis is ExpertQA, a high-quality long-form QA dataset with 2177 questions spanning 32 fields, along with verified answers and attributions for claims in the answers.
  arXiv Detail & Related papers (2023-09-14T16:54:34Z)
- A Survey on Interpretable Cross-modal Reasoning [64.37362731950843]
  Cross-modal reasoning (CMR) has emerged as a pivotal area with applications spanning from multimedia analysis to healthcare diagnostics.
  This survey delves into the realm of interpretable cross-modal reasoning (I-CMR) and presents a comprehensive overview of the typical methods with a three-level taxonomy for I-CMR.
  arXiv Detail & Related papers (2023-09-05T05:06:48Z)
- An Integrated NPL Approach to Sentiment Analysis in Satisfaction Surveys [0.0]
  The research project aims to apply an integrated natural language processing (NLP) approach to satisfaction surveys.
  It will focus on understanding and extracting relevant information from survey responses, analyzing sentiment, and identifying recurring word patterns.
  arXiv Detail & Related papers (2023-07-18T00:23:35Z)
- Connecting Humanities and Social Sciences: Applying Language and Speech Technology to Online Panel Surveys [2.0646127669654835]
  We explore the application of language and speech technology to open-ended questions in a Dutch panel survey.
  In an experimental wave, respondents could choose to answer open questions via speech or keyboard.
  We report the errors the ASR system produces and investigate the impact of these errors on downstream analyses.
  arXiv Detail & Related papers (2023-02-21T10:52:15Z)
- AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization [73.91543616777064]
  Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions.
  One goal of answer summarization is to produce a summary that reflects the range of answer perspectives.
  This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists.
  arXiv Detail & Related papers (2021-11-11T21:48:02Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
  We introduce four self-supervised tasks: next-session prediction, utterance restoration, incoherence detection, and consistency discrimination.
  We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
  Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
  arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Counterfactual Off-Policy Training for Neural Response Generation [94.76649147381232]
  We propose to explore potential responses by counterfactual reasoning.
  Training on the counterfactual responses under the adversarial learning framework helps to explore the high-reward area of the potential response space.
  An empirical study on the DailyDialog dataset shows that our approach significantly outperforms the HRED model.
  arXiv Detail & Related papers (2020-04-29T22:46:28Z)
- A Revised Generative Evaluation of Visual Dialogue [80.17353102854405]
  We propose a revised evaluation scheme for the VisDial dataset.
  We measure consensus between answers generated by the model and a set of relevant answers.
  We release these sets and code for the revised evaluation scheme as DenseVisDial.
  arXiv Detail & Related papers (2020-04-20T13:26:45Z)
- Gated Convolutional Bidirectional Attention-based Model for Off-topic Spoken Response Detection [10.321357718530473]
  We propose a novel approach for off-topic spoken response detection with high off-topic recall on both seen and unseen prompts.
  We introduce a new model, the Gated Convolutional Bidirectional Attention-based Model (GCBiA), which applies a bi-attention mechanism and convolutions to extract topic words of prompts and key phrases of responses.
  arXiv Detail & Related papers (2020-04-20T03:16:06Z)
- Review-guided Helpful Answer Identification in E-commerce [38.276241153439955]
  Product-specific community question answering platforms can greatly help address the concerns of potential customers.
  The user-provided answers on such platforms often vary a lot in their qualities.
  Helpfulness votes from the community can indicate the overall quality of the answer, but they are often missing.
  arXiv Detail & Related papers (2020-03-13T11:34:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.