Perceived Usability of Collaborative Modeling Tools
- URL: http://arxiv.org/abs/2408.14088v1
- Date: Mon, 26 Aug 2024 08:19:23 GMT
- Title: Perceived Usability of Collaborative Modeling Tools
- Authors: Ranci Ren, John W. Castro, Santiago R. Acuña, Oscar Dieste, Silvia T. Acuña
- Abstract summary: We compare the perceived usability of two similar online collaborative modeling tools.
We performed a quantitative and qualitative exploration, employing inferential statistics and thematic analysis.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context: Online collaborative creation of models is becoming commonplace. Collaborative modeling using chatbots and natural language may lower the barriers to modeling for users from different domains. Objective: We compare the perceived usability of two similar online collaborative modeling tools, the SOCIO chatbot and the Creately web-based tool. Method: We conducted a crossover experiment with 66 participants. The evaluation instrument was based on the System Usability Scale (SUS). We performed a quantitative and qualitative exploration, employing inferential statistics and thematic analysis. Results: The results indicate that chatbots enabling natural language communication enhance communication and collaboration efficiency and improve the user experience. Conclusion: Chatbots need to improve guidance and help for novices, but they appear beneficial for enhancing user experience.
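The evaluation instrument is based on the System Usability Scale, which converts ten five-point Likert items into a 0-100 score. As a minimal illustration of the standard SUS scoring rule (not code from the paper; the function name is illustrative):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses, each an integer 1-5, in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses in the range 1-5")
    # Odd-numbered items (positively worded): contribution = response - 1.
    # Even-numbered items (negatively worded): contribution = 5 - response.
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)
              for i, r in enumerate(responses))
    return raw * 2.5  # scale the 0-40 raw sum to the 0-100 SUS range
```

For example, a neutral response of 3 on every item yields a score of 50, the midpoint of the scale.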
Related papers
- Using the SOCIO Chatbot for UML Modelling: A Family of Experiments
We compare the usability of the SOCIO chatbot and an online web tool (i.e., Creately) for collaborative modelling in academic settings.
The student participants were faster at building class diagrams using SOCIO than with the online collaborative tool.
Our study has helped us to shed light on the future direction for experimentation in this field.
arXiv Detail & Related papers (2024-08-26T08:12:11Z) - BotEval: Facilitating Interactive Human Evaluation
We develop BotEval, an easily customizable, open-source evaluation toolkit that focuses on enabling human-bot interactions as part of the evaluation process.
arXiv Detail & Related papers (2024-07-25T04:57:31Z) - A Transformer-based Approach for Augmenting Software Engineering Chatbots Datasets
We present an automated transformer-based approach to augment software engineering datasets.
We evaluate the impact of using the augmentation approach on the Rasa NLU's performance using three software engineering datasets.
arXiv Detail & Related papers (2024-07-16T17:48:44Z) - Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Harnessing the Power of Large Language Models for Empathetic Response Generation: Empirical Investigations and Improvements
This work empirically investigates the performance of large language models (LLMs) in generating empathetic responses.
Extensive experiments show that LLMs can significantly benefit from our proposed methods and are able to achieve state-of-the-art performance in both automatic and human evaluations.
arXiv Detail & Related papers (2023-10-08T12:21:24Z) - ChatDev: Communicative Agents for Software Development
ChatDev is a chat-powered software development framework in which specialized agents are guided in what to communicate.
These agents actively contribute to the design, coding, and testing phases through unified language-based communication.
arXiv Detail & Related papers (2023-07-16T02:11:34Z) - Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulator to Enhance Dialogue System
We propose an alternative approach called User-Guided Response Optimization (UGRO), which combines a large language model with a smaller task-oriented dialogue model.
This approach uses the LLM as an annotation-free user simulator to assess dialogue responses, combining it with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2023-06-16T13:04:56Z) - CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z) - A Role-Selected Sharing Network for Joint Machine-Human Chatting Handoff and Service Satisfaction Analysis
We propose a novel model, the Role-Selected Sharing Network (RSSN), which integrates dialogue satisfaction estimation and handoff prediction in one multi-task learning framework.
Unlike prior efforts in dialogue mining, it uses local user satisfaction as a bridge so that the global satisfaction detector and the handoff predictor can effectively exchange critical information.
arXiv Detail & Related papers (2021-09-17T08:39:45Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and the user's perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Human Trajectory Forecasting in Crowds: A Deep Learning Perspective
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop TrajNet++, a large-scale interaction-centric benchmark that fills a significant gap in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.