Mining Reddit Data to Elicit Students' Requirements During COVID-19 Pandemic
- URL: http://arxiv.org/abs/2307.14212v1
- Date: Wed, 26 Jul 2023 14:26:16 GMT
- Title: Mining Reddit Data to Elicit Students' Requirements During COVID-19 Pandemic
- Authors: Shadikur Rahman, Faiz Ahmed, Maleknaz Nayebi
- Abstract summary: We propose a shift in requirements elicitation, focusing on gathering feedback related to the problem itself.
We conducted a case study on student requirements during the COVID-19 pandemic in a higher education institution.
We employed multiple machine-learning and natural language processing techniques to identify requirement sentences.
- Score: 2.5475486924467075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven requirements engineering leverages the abundance of openly
accessible and crowdsourced information on the web. By incorporating user
feedback provided about a software product, such as reviews in mobile app
stores, these approaches facilitate the identification of issues, bug fixes,
and implementation of change requests. However, relying solely on user feedback
about a software product limits the possibility of eliciting all requirements,
as users may not always have a clear understanding of their exact needs from
the software, despite their wealth of experience with the problem, event, or
challenge they encounter and for which they use the software. In this study,
we propose a shift in requirements elicitation, focusing on gathering feedback
related to the problem itself rather than relying solely on feedback about the
software product. We conducted a case study on student requirements during the
COVID-19 pandemic in a higher education institution. We gathered their
communications from Reddit during the pandemic and employed multiple
machine-learning and natural language processing techniques to identify
requirement sentences. When benchmarking multiple techniques, we achieved an
F-score of 0.79 using Naive Bayes with TF-IDF. The results lead us to believe
that mining requirements from communication about a problem is feasible. While
we present the preliminary results, we envision a future where these
requirements complement conventionally elicited requirements and help to close
the requirements gap.
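The abstract reports that TF-IDF features combined with a Naive Bayes classifier performed best when benchmarking techniques for spotting requirement sentences. Below is a minimal sketch of that kind of pipeline using scikit-learn; the example sentences, labels, and split are hypothetical placeholders, not the authors' Reddit dataset, preprocessing, or hyperparameters.

```python
# Minimal sketch of a TF-IDF + Naive Bayes requirement-sentence classifier,
# in the spirit of the pipeline benchmarked in the paper (scikit-learn).
# The example sentences and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

sentences = [
    "The university should record all lectures and post them online.",
    "I miss hanging out with my friends on campus.",
    "We need an option to take exams remotely with flexible time slots.",
    "Anyone else binge-watching shows during lockdown?",
]
labels = [1, 0, 1, 0]  # 1 = requirement sentence, 0 = not a requirement

X_train, X_test, y_train, y_test = train_test_split(
    sentences, labels, test_size=0.5, random_state=42, stratify=labels
)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("clf", MultinomialNB()),
])
pipeline.fit(X_train, y_train)

predictions = pipeline.predict(X_test)
print("F1:", f1_score(y_test, predictions))
```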
Related papers
- On the Automated Processing of User Feedback [7.229732269884235]
User feedback is an increasingly important source of information for requirements engineering, user interface design, and software engineering.
To tap the full potential of feedback, there are two main challenges that need to be solved.
First, vendors must cope with a large quantity of feedback data, which is hard to manage manually.
Second, vendors must cope with varying quality of feedback, as some items might be uninformative, repetitive, or simply wrong.
arXiv Detail & Related papers (2024-07-22T10:13:13Z)
- Towards Extracting Ethical Concerns-related Software Requirements from App Reviews [0.0]
This study analyzes app reviews of the Uber mobile application (a popular taxi/ride app)
We propose a novel approach that leverages a knowledge graph (KG) model to extract software requirements from app reviews.
Our framework consists of three main components: developing an ontology with relevant entities and relations, extracting key entities from app reviews, and creating connections between them.
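For illustration only, here is a rough sketch of the three-component idea (a small ontology of entity types, entity extraction from reviews, and linking co-occurring entities into a graph). The entity patterns, relation name, and review snippets are invented placeholders, not the paper's actual ontology or extraction model.

```python
# Hypothetical sketch: build a tiny knowledge graph from app-review snippets.
# The ontology (entity types, relations), patterns, and reviews are invented
# for illustration; the paper's actual pipeline is not reproduced here.
import re
import networkx as nx

# 1) Toy "ontology": entity types recognized by simple keyword patterns.
ONTOLOGY_PATTERNS = {
    "Feature": r"\b(ride sharing|fare estimate|driver rating)\b",
    "EthicalConcern": r"\b(privacy|discrimination|surge pricing)\b",
}

reviews = [
    "The fare estimate feature feels unfair with surge pricing.",
    "Driver rating is useful, but I worry about privacy of my trip data.",
]

# 2) Extract entities from each review.
def extract_entities(text):
    found = []
    for entity_type, pattern in ONTOLOGY_PATTERNS.items():
        for match in re.findall(pattern, text, flags=re.IGNORECASE):
            found.append((match.lower(), entity_type))
    return found

# 3) Link entities that co-occur in the same review.
kg = nx.Graph()
for review in reviews:
    entities = extract_entities(review)
    for name, entity_type in entities:
        kg.add_node(name, type=entity_type)
    for i in range(len(entities)):
        for j in range(i + 1, len(entities)):
            kg.add_edge(entities[i][0], entities[j][0], relation="mentioned_with")

print(kg.nodes(data=True))
print(kg.edges(data=True))
```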
arXiv Detail & Related papers (2024-07-19T04:50:32Z)
- Can Large Language Models Replicate ITS Feedback on Open-Ended Math Questions? [3.7399138244928145]
We study the capabilities of large language models to generate feedback for open-ended math questions.
We find that open-source and proprietary models both show promise in replicating the feedback they see during training, but do not generalize well to previously unseen student errors.
arXiv Detail & Related papers (2024-05-10T11:53:53Z)
- Status Quo and Problems of Requirements Engineering for Machine Learning: Results from an International Survey [7.164324501049983]
Requirements Engineering (RE) can help address many problems when engineering Machine Learning-enabled systems.
We conducted a survey to gather practitioner insights into the status quo and problems of RE in ML-enabled systems.
We found significant differences in RE practices within ML projects.
arXiv Detail & Related papers (2023-10-10T15:53:50Z)
- Requirements' Characteristics: How do they Impact on Project Budget in a Systems Engineering Context? [3.2872885101161318]
Controlling and assuring the quality of natural language requirements (NLRs) is challenging.
Together with the Swedish Transportation Agency (STA), we investigated to what extent the characteristics of requirements influenced change requests and budget changes in the project.
arXiv Detail & Related papers (2023-10-02T17:53:54Z)
- Machine Unlearning: A Survey [56.79152190680552]
A special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about specific samples needs to be removed from a model; this is called machine unlearning.
This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality.
No study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios.
The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.
arXiv Detail & Related papers (2023-06-06T10:18:36Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
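To convey the general flavor of few-shot classification from a handful of instructor-labeled examples, the sketch below computes a prototype per feedback class and labels a new submission by its nearest prototype. This is a generic prototype-based scheme, not the ProtoTransformer architecture itself, and the embeddings and class names are made up.

```python
# Generic prototype-based few-shot classification sketch (not the actual
# ProtoTransformer model): average each class's support embeddings into a
# prototype, then label a query by its nearest prototype.
# Embeddings here are random stand-ins for learned code representations.
import numpy as np

rng = np.random.default_rng(0)
embedding_dim = 16

# A few instructor-labeled examples per feedback class ("support set").
support = {
    "off_by_one_error": rng.normal(size=(3, embedding_dim)),
    "missing_base_case": rng.normal(size=(3, embedding_dim)),
    "correct_solution": rng.normal(size=(3, embedding_dim)),
}

# Class prototypes = mean of each class's support embeddings.
prototypes = {label: vectors.mean(axis=0) for label, vectors in support.items()}

def classify(query_embedding):
    """Return the feedback label whose prototype is closest to the query."""
    distances = {
        label: np.linalg.norm(query_embedding - proto)
        for label, proto in prototypes.items()
    }
    return min(distances, key=distances.get)

new_submission = rng.normal(size=embedding_dim)  # stand-in for a student solution
print(classify(new_submission))
```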
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments [54.405920619915655]
We introduce Mobile app Tasks with Iterative Feedback (MoTIF), a dataset with natural language commands for the greatest number of interactive environments to date.
MoTIF is the first to contain natural language requests for interactive environments that are not satisfiable.
We perform initial feasibility classification experiments and only reach an F1 score of 37.3, verifying the need for richer vision-language representations.
arXiv Detail & Related papers (2021-04-17T14:48:02Z)
- Online Learning Demands in Max-min Fairness [91.37280766977923]
We describe mechanisms for the allocation of a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof.
The mechanism is repeated for multiple rounds and a user's requirements can change on each round.
At the end of each round, users provide feedback about the allocation they received, enabling the mechanism to learn user preferences over time.
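For a sense of the allocation objective, here is a small water-filling routine that computes a max-min fair split of a fixed capacity given each user's demand for one round. It illustrates only the fairness criterion; the paper's online-learning and strategy-proofness mechanisms are not reproduced, and the demands are hypothetical.

```python
# Water-filling computation of a max-min fair allocation for one round.
# This illustrates only the fairness criterion; the paper's online learning
# and strategy-proof mechanism are not reproduced. Demands are hypothetical.
def max_min_fair(demands, capacity):
    """Allocate `capacity` so the smallest allocation is as large as possible,
    never giving a user more than they demanded."""
    allocation = {user: 0.0 for user in demands}
    remaining_users = dict(demands)
    remaining_capacity = capacity
    while remaining_users and remaining_capacity > 1e-12:
        fair_share = remaining_capacity / len(remaining_users)
        satisfied = {u: d for u, d in remaining_users.items() if d <= fair_share}
        if satisfied:
            # Fully satisfy users who want no more than the fair share.
            for user, demand in satisfied.items():
                allocation[user] = demand
                remaining_capacity -= demand
                del remaining_users[user]
        else:
            # Everyone wants more than the fair share: split evenly and stop.
            for user in remaining_users:
                allocation[user] = fair_share
            remaining_capacity = 0.0
            remaining_users = {}
    return allocation

print(max_min_fair({"alice": 2.0, "bob": 6.0, "carol": 5.0}, capacity=10.0))
# -> alice gets 2.0; bob and carol split the remaining 8.0 as 4.0 each.
```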
arXiv Detail & Related papers (2020-12-15T22:15:20Z)
- Mining Implicit Relevance Feedback from User Behavior for Web Question Answering [92.45607094299181]
We make the first study to explore the correlation between user behavior and passage relevance.
Our approach significantly improves the accuracy of passage ranking without extra human labeled data.
In practice, this work has proved effective in substantially reducing the human labeling cost for the QA service in a global commercial search engine.
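To make the implicit-feedback idea concrete, the sketch below aggregates logged click and dwell-time signals per (question, passage) pair into weak relevance labels that could then train a ranker. The field names, thresholds, and log records are hypothetical, not the paper's actual features or model.

```python
# Hypothetical sketch: derive weak relevance labels for passage ranking from
# user-behavior logs (clicks and dwell time), instead of human annotations.
# Field names, thresholds, and log records are invented for illustration.
from collections import defaultdict

behavior_log = [
    {"question": "q1", "passage": "p1", "clicked": True,  "dwell_seconds": 42},
    {"question": "q1", "passage": "p2", "clicked": True,  "dwell_seconds": 3},
    {"question": "q1", "passage": "p3", "clicked": False, "dwell_seconds": 0},
]

# Aggregate behavior per (question, passage) pair.
signals = defaultdict(lambda: {"clicks": 0, "dwell": 0, "impressions": 0})
for record in behavior_log:
    key = (record["question"], record["passage"])
    signals[key]["impressions"] += 1
    signals[key]["clicks"] += int(record["clicked"])
    signals[key]["dwell"] += record["dwell_seconds"]

# Turn aggregated behavior into a weak relevance label: clicked AND read long
# enough counts as relevant (1), otherwise non-relevant (0).
weak_labels = {
    key: int(agg["clicks"] > 0 and agg["dwell"] / agg["impressions"] >= 10)
    for key, agg in signals.items()
}
print(weak_labels)  # these pseudo-labels could then train a passage ranker
```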
arXiv Detail & Related papers (2020-06-13T07:02:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.