Perception and Acceptance of an Autonomous Refactoring Bot
- URL: http://arxiv.org/abs/2001.02553v1
- Date: Wed, 8 Jan 2020 14:47:54 GMT
- Title: Perception and Acceptance of an Autonomous Refactoring Bot
- Authors: Marvin Wyrich, Regina Hebig, Stefan Wagner, Riccardo Scandariato
- Abstract summary: We deployed an autonomous bot for 41 days in a student software development project.
We conducted semi-structured interviews to find out how developers perceive the bot and whether they are more or less critical when reviewing the contributions of a bot compared to human contributions.
Our findings show that the bot was perceived as a useful and unobtrusive contributor, and developers were no more critical of it than they were about their human colleagues, but only a few team members felt responsible for the bot.
- Score: 11.908989544044998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of autonomous bots for automatic support in software development
tasks is increasing. In the past, however, they were not always perceived
positively and sometimes experienced a negative bias compared to their human
counterparts. We conducted a qualitative study in which we deployed an
autonomous refactoring bot for 41 days in a student software development
project. In between and at the end, we conducted semi-structured interviews to
find out how developers perceive the bot and whether they are more or less
critical when reviewing the contributions of a bot compared to human
contributions. Our findings show that the bot was perceived as a useful and
unobtrusive contributor, and developers were no more critical of it than they
were about their human colleagues, but only a few team members felt responsible
for the bot.
Related papers
- User-Like Bots for Cognitive Automation: A Survey [4.075971633195745]
Despite the hype, bots with human user-like cognition do not currently exist.
They lack situational awareness on the digital platforms where they operate.
We discuss how cognitive architectures can contribute to creating intelligent software bots.
arXiv Detail & Related papers (2023-11-20T20:16:24Z)
- Suggestion Bot: Analyzing the Impact of Automated Suggested Changes on Code Reviews [2.773900417167691]
We created a bot called SUGGESTION BOT to automatically review the code base using GitHub's suggested changes functionality.
We evaluate SUGGESTION BOT concerning its impact on review time and also analyze whether the comments given by the bot are clear and useful for users.
arXiv Detail & Related papers (2023-05-10T17:33:43Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - You are a Bot! -- Studying the Development of Bot Accusations on Twitter [1.7626250599622473]
In the absence of ground truth data, researchers may want to tap into the wisdom of the crowd.
Our research presents the first large-scale study of bot accusations on Twitter.
It shows how the term bot became an instrument of dehumanization in social media conversations.
arXiv Detail & Related papers (2023-02-01T16:09:11Z) - Investigating the Validity of Botometer-based Social Bot Studies [0.0]
Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion.
Social bot activity has been reported in many different political contexts, including the U.S. presidential elections.
We point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots.
arXiv Detail & Related papers (2022-07-23T09:31:30Z)
- Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision [72.4735163268491]
Commercial and industrial deployments of robot fleets often fall back on remote human teleoperators during execution.
We formalize the Interactive Fleet Learning (IFL) setting, in which multiple robots interactively query and learn from multiple human supervisors.
We propose Fleet-DAgger, a family of IFL algorithms, and compare a novel Fleet-DAgger algorithm to 4 baselines in simulation.
arXiv Detail & Related papers (2022-06-29T01:23:57Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework whereby several empathetic chatbots are based on understanding users' implied feelings and replying empathetically for multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond in an empathetic way, we develop a simulating agent, a Conceptual Human Model, that aids CheerBots during training by taking into account how the user's emotional state may change in the future, so as to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
- Spot The Bot: A Robust and Efficient Framework for the Evaluation of Conversational Dialogue Systems [21.36935947626793]
Spot The Bot replaces human-bot conversations with conversations between bots.
Human judges only annotate, for each entity in a conversation, whether they think it is human or not.
Survival Analysis measures which bot can uphold human-like behavior the longest.
arXiv Detail & Related papers (2020-10-05T16:37:52Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and the user's perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
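The "maximum rule" ensemble described in the last entry can be sketched as follows. This is only an illustrative reading of the abstract, not the paper's implementation: the bot-class names, the feature layout, and the toy scoring functions are all assumptions.

```python
# Sketch of a max-rule ensemble: one specialist classifier per bot class,
# each scoring an account in [0, 1]; the highest-scoring specialist decides.
from typing import Callable, Dict, List, Tuple

# A specialist maps an account's feature vector to a bot-likelihood score.
Specialist = Callable[[List[float]], float]

def max_rule(specialists: Dict[str, Specialist],
             features: List[float]) -> Tuple[str, float]:
    """Return (bot_class, score) from the specialist with the highest score."""
    scores = {name: clf(features) for name, clf in specialists.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy specialists standing in for trained classifiers (hypothetical classes).
specialists: Dict[str, Specialist] = {
    "spam_bot": lambda x: x[0],       # e.g. a posting-rate feature
    "fake_follower": lambda x: x[1],  # e.g. a follower/friend-ratio feature
}

label, score = max_rule(specialists, [0.2, 0.9])
print(label, score)  # fake_follower 0.9
```

In this reading, the maximum rule lets a new account be flagged as long as any single specialist recognizes its behavioral pattern, which matches the entry's claim that different bot types show different behavioral features.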
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.