Continuous Experimentation and Human Factors: An Exploratory Study
- URL: http://arxiv.org/abs/2311.00560v1
- Date: Wed, 1 Nov 2023 14:56:33 GMT
- Title: Continuous Experimentation and Human Factors: An Exploratory Study
- Authors: Amna Pir Muhammad, Eric Knauss, Jonas Bärgman, and Alessia Knauss
- Abstract summary: The success of tools and systems relies heavily on their ability to meet the needs and expectations of users.
User-centered design approaches, with a focus on human factors, have gained increasing attention as they prioritize the human element in the development process.
- Score: 4.419836325434071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In today's rapidly evolving technological landscape, the success of tools and
systems relies heavily on their ability to meet the needs and expectations of
users. User-centered design approaches, with a focus on human factors, have
gained increasing attention as they prioritize the human element in the
development process. With the increasing complexity of software-based systems,
companies are adopting agile development methodologies and emphasizing
continuous software experimentation. However, there is limited knowledge on how
to effectively execute continuous experimentation with respect to human factors
within this context. This research paper presents an exploratory qualitative
study for integrating human factors in continuous experimentation, aiming to
uncover distinctive characteristics of human factors and continuous software
experiments, practical challenges for integrating human factors in continuous
software experiments, and best practices associated with the management of
continuous human factors experimentation.
Related papers
- Unveiling the Role of Expert Guidance: A Comparative Analysis of User-centered Imitation Learning and Traditional Reinforcement Learning [0.0]
This study explores the performance, robustness, and limitations of imitation learning compared to traditional reinforcement learning methods.
The insights gained from this study contribute to the advancement of human-centered artificial intelligence.
arXiv Detail & Related papers (2024-10-28T18:07:44Z)
- Vital Insight: Assisting Experts' Context-Driven Sensemaking of Multi-modal Personal Tracking Data Using Visualization and Human-In-The-Loop LLM Agents [29.73055078727462]
Vital Insight is a novel, LLM-assisted, prototype system to enable human-in-the-loop inference (sensemaking) and visualizations of multi-modal passive sensing data from smartphones and wearables.
We observe experts' interactions with it and develop an expert sensemaking model that explains how experts move between direct data representations and AI-supported inferences.
arXiv Detail & Related papers (2024-10-18T21:56:35Z)
- ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models [53.00812898384698]
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can conflate fluent information and truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert.
We propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z)
- Developing and Evaluating a Design Method for Positive Artificial Intelligence [0.6138671548064356]
Development of "AI for good" poses challenges around aligning systems with complex human values.
This article presents and evaluates the Positive AI design method aimed at addressing this gap.
The method provides a human-centered process to translate wellbeing aspirations into concrete practices.
arXiv Detail & Related papers (2024-02-02T15:31:08Z)
- Interactive Multi-Objective Evolutionary Optimization of Software Architectures [0.0]
Putting the human in the loop brings new challenges to the search-based software engineering field.
This paper explores how the interactive evolutionary computation can serve as a basis for integrating the human's judgment into the search process.
arXiv Detail & Related papers (2024-01-08T19:15:40Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
- A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z)
- Taxonomy of A Decision Support System for Adaptive Experimental Design in Field Robotics [19.474298062145003]
We propose a Decision Support System (DSS) to amplify the human's decision-making abilities and enable principled decision-making in field experiments.
We construct and present our taxonomy using examples and trends from DSS literature, including works involving artificial intelligence and Intelligent DSSs.
arXiv Detail & Related papers (2022-10-15T23:28:30Z)
- L2Explorer: A Lifelong Reinforcement Learning Assessment Environment [49.40779372040652]
Reinforcement learning solutions tend to generalize poorly when exposed to new tasks outside of the data distribution they are trained on.
We introduce a framework for continual reinforcement-learning development and assessment using Lifelong Learning Explorer (L2Explorer).
L2Explorer is a new, Unity-based, first-person 3D exploration environment that can be continuously reconfigured to generate a range of tasks and task variants structured into complex evaluation curricula.
arXiv Detail & Related papers (2022-03-14T19:20:26Z)
- Lifelong Learning Metrics [63.8376359764052]
The DARPA Lifelong Learning Machines (L2M) program seeks to yield advances in artificial intelligence (AI) systems.
This document outlines a formalism for constructing and characterizing the performance of agents performing lifelong learning scenarios.
arXiv Detail & Related papers (2022-01-20T16:29:14Z)
- Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z)
- Scaling up Search Engine Audits: Practical Insights for Algorithm Auditing [68.8204255655161]
We set up experiments for eight search engines with hundreds of virtual agents placed in different regions.
We demonstrate the successful performance of our research infrastructure across multiple data collections.
We conclude that virtual agents are a promising venue for monitoring the performance of algorithms across long periods of time.
arXiv Detail & Related papers (2021-06-10T15:49:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.