Open vs Closed-ended questions in attitudinal surveys -- comparing,
combining, and interpreting using natural language processing
- URL: http://arxiv.org/abs/2205.01317v1
- Date: Tue, 3 May 2022 06:01:03 GMT
- Title: Open vs Closed-ended questions in attitudinal surveys -- comparing,
combining, and interpreting using natural language processing
- Authors: Vishnu Baburajan, João de Abreu e Silva, Francisco Câmara Pereira
- Abstract summary: Topic Modeling could significantly reduce the time to extract information from open-ended responses.
Our research uses Topic Modeling to extract information from open-ended questions and compares its performance with that of closed-ended responses.
- Score: 3.867363075280544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To improve the traveling experience, researchers have been analyzing the role
of attitudes in travel behavior modeling. Although most researchers use
closed-ended surveys, the appropriate method to measure attitudes is debatable.
Topic Modeling could significantly reduce the time to extract information from
open-ended responses and eliminate subjective bias, thereby alleviating analyst
concerns. Our research uses Topic Modeling to extract information from
open-ended questions and compares its performance with that of closed-ended
responses.
Furthermore, respondents may differ in which questionnaire type they prefer to
answer. We therefore propose a modeling framework that allows respondents to
answer the survey using their preferred questionnaire type and enables
analysts to use the modeling frameworks of their choice to predict
behavior. We demonstrate this using a dataset collected from the USA that
measures the intention to use Autonomous Vehicles for commute trips.
Respondents were presented with alternative questionnaire versions (open- and
closed-ended). Since our objective was also to compare the performance of
alternative questionnaire versions, the survey was designed to eliminate
influences resulting from the statements, the behavioral framework, and the choice
experiment. Results indicate the suitability of using Topic Modeling to extract
information from open-ended responses; however, the models estimated using the
closed-ended questions perform better. Moreover, the proposed model
outperforms the models currently in use. Furthermore, our
proposed framework will allow respondents to choose the questionnaire type to
answer, which could be particularly beneficial in voice-based surveys.
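
Since the abstract describes the pipeline only at a high level, here is a minimal sketch of the general idea: extract per-response topic shares from open-ended answers and use them as features to predict the stated intention to use Autonomous Vehicles. The abstract names only "Topic Modeling", so the choice of LDA, the logistic-regression classifier, and all of the toy responses, labels, and hyperparameters below are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch, assuming an LDA topic model and a logistic-regression
# classifier; the paper itself only says "Topic Modeling", and all data,
# labels, and hyperparameters here are hypothetical.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

open_ended = [  # hypothetical free-text survey answers
    "I would use one for my commute if it is proven safe",
    "I do not trust a car that drives itself in heavy traffic",
    "Sounds convenient, I could work or relax during the trip",
    "Too expensive, and the technology still makes mistakes",
]
intends_to_use = np.array([1, 0, 1, 0])  # 1 = intends to use AVs for commuting

# Bag-of-words counts -> per-response topic distributions.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(open_ended)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_shares = lda.fit_transform(counts)  # each row sums to ~1

# Top words per topic stand in for the manual coding an analyst would do.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = terms[np.argsort(weights)[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")

# The topic shares act as latent attitude indicators; a simple classifier
# maps them to the stated behavioral intention.
clf = LogisticRegression().fit(topic_shares, intends_to_use)
print("in-sample accuracy:", clf.score(topic_shares, intends_to_use))
```

In the paper's comparison, the same intention variable would also be modeled from the closed-ended (Likert-type) indicators and the predictive performance of the two feature sets compared; the abstract reports that the closed-ended models performed better.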
Related papers
- Promoting Open-domain Dialogue Generation through Learning Pattern
Information between Contexts and Responses [5.936682548344234]
This paper improves the quality of generated responses by learning the implicit pattern information between contexts and responses in the training samples.
We also design a response-aware mechanism for mining the implicit pattern information between contexts and responses so that the generated replies are more diverse and closer to human replies.
arXiv Detail & Related papers (2023-09-06T08:11:39Z) - Predicting Survey Response with Quotation-based Modeling: A Case Study
on Favorability towards the United States [0.0]
We propose an approach for predicting survey responses by applying machine learning to quotations.
We leverage a vast corpus of quotations from individuals across different nationalities to extract their level of favorability.
We employ a combination of natural language processing techniques and machine learning algorithms to construct a predictive model for survey responses.
arXiv Detail & Related papers (2023-05-23T14:11:01Z) - Connecting Humanities and Social Sciences: Applying Language and Speech
Technology to Online Panel Surveys [2.0646127669654835]
We explore the application of language and speech technology to open-ended questions in a Dutch panel survey.
In an experimental wave respondents could choose to answer open questions via speech or keyboard.
We report the errors the ASR system produces and investigate the impact of these errors on downstream analyses.
arXiv Detail & Related papers (2023-02-21T10:52:15Z) - Realistic Conversational Question Answering with Answer Selection based
on Calibrated Confidence and Uncertainty Measurement [54.55643652781891]
Conversational Question Answering (ConvQA) models aim to answer a question using its relevant paragraph and the question-answer pairs from earlier turns of the conversation.
We propose to filter out inaccurate answers in the conversation history based on their estimated confidences and uncertainties from the ConvQA model (see the sketch after this list).
We validate our models, Answer Selection-based realistic Conversation Question Answering, on two standard ConvQA datasets.
arXiv Detail & Related papers (2023-02-10T09:42:07Z) - In Search of Insights, Not Magic Bullets: Towards Demystification of the
Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z) - Predicting Census Survey Response Rates With Parsimonious Additive
Models and Structured Interactions [14.003044924094597]
We consider the problem of predicting survey response rates using a family of flexible and interpretable nonparametric models.
The study is motivated by the US Census Bureau's well-known ROAM application.
arXiv Detail & Related papers (2021-08-24T17:49:55Z) - On the Efficacy of Adversarial Data Collection for Question Answering:
Results from a Large-Scale Randomized Study [65.17429512679695]
In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.
Despite ADC's intuitive appeal, it remains unclear when training on adversarial datasets produces more robust models.
arXiv Detail & Related papers (2021-06-02T00:48:33Z) - Generative Context Pair Selection for Multi-hop Question Answering [60.74354009152721]
We propose a generative context selection model for multi-hop question answering.
Our proposed generative passage selection model performs better (4.9% higher than the baseline) on the adversarial held-out set.
arXiv Detail & Related papers (2021-04-18T07:00:48Z) - Predicting respondent difficulty in web surveys: A machine-learning
approach based on mouse movement features [3.6944296923226316]
This paper explores the value of mouse-tracking data for predicting respondents' difficulty.
We use data from a survey on respondents' employment history and demographic information.
We develop a personalization method that adjusts for respondents' baseline mouse behavior and evaluate its performance.
arXiv Detail & Related papers (2020-11-05T10:54:33Z) - A Revised Generative Evaluation of Visual Dialogue [80.17353102854405]
We propose a revised evaluation scheme for the VisDial dataset.
We measure consensus between answers generated by the model and a set of relevant answers.
We release these sets and code for the revised evaluation scheme as DenseVisDial.
arXiv Detail & Related papers (2020-04-20T13:26:45Z) - Improving Multi-Turn Response Selection Models with Complementary
Last-Utterance Selection by Instance Weighting [84.9716460244444]
We consider exploiting the underlying correlations in the data itself to derive different kinds of supervision signals.
We conduct extensive experiments in two public datasets and obtain significant improvement in both datasets.
arXiv Detail & Related papers (2020-02-18T06:29:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.