Improving the Efficiency of Human-in-the-Loop Systems: Adding Artificial
to Human Experts
- URL: http://arxiv.org/abs/2307.03003v2
- Date: Fri, 7 Jul 2023 06:39:38 GMT
- Title: Improving the Efficiency of Human-in-the-Loop Systems: Adding Artificial
to Human Experts
- Authors: Johannes Jakubik, Daniel Weber, Patrick Hemmer, Michael Vössing,
Gerhard Satzger
- Abstract summary: We propose a hybrid system that creates artificial experts that learn to classify data instances from unknown classes.
Our approach outperforms traditional HITL systems for several benchmarks on image classification.
- Score: 0.7349727826230862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information systems increasingly leverage artificial intelligence (AI) and
machine learning (ML) to generate value from vast amounts of data. However, ML
models are imperfect and can generate incorrect classifications. Hence,
human-in-the-loop (HITL) extensions to ML models add a human review for
instances that are difficult to classify. This study argues that continuously
relying on human experts to handle difficult model classifications leads to a
strong increase in human effort, which strains limited resources. To address
this issue, we propose a hybrid system that creates artificial experts that
learn to classify data instances from unknown classes previously reviewed by
human experts. Our hybrid system assesses which artificial expert is suitable
for classifying an instance from an unknown class and automatically assigns it.
Over time, this reduces human effort and increases the efficiency of the
system. Our experiments demonstrate that our approach outperforms traditional
HITL systems for several benchmarks on image classification.
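The routing loop described in the abstract can be illustrated with a minimal sketch. The confidence threshold, the nearest-centroid "artificial expert", and the label count required before an expert takes over are all illustrative assumptions, not the authors' implementation.

```python
# Toy hybrid HITL pipeline: confident base-model predictions are accepted,
# unconfident instances go to a human expert, and once enough human labels
# accumulate for a class, a per-class "artificial expert" (here a simple
# nearest-centroid classifier) handles further unconfident instances.
from collections import defaultdict

MIN_LABELS = 3  # assumed number of human labels before an expert is trusted

class HybridHITL:
    def __init__(self, ask_human):
        self.ask_human = ask_human          # callable: instance -> label
        self.examples = defaultdict(list)   # label -> feature vectors seen
        self.centroids = {}                 # label -> centroid ("artificial expert")

    def _nearest_centroid(self, x):
        # Assign the instance to the artificial expert whose centroid is closest.
        return min(self.centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(x, self.centroids[c])))

    def classify(self, x, base_pred, confidence, threshold=0.8):
        if confidence >= threshold:
            return base_pred, "base_model"
        if self.centroids:                  # an artificial expert is available
            return self._nearest_centroid(x), "artificial_expert"
        label = self.ask_human(x)           # fall back to the human expert
        self.examples[label].append(x)
        if len(self.examples[label]) >= MIN_LABELS:
            pts = self.examples[label]
            self.centroids[label] = [sum(v) / len(pts) for v in zip(*pts)]
        return label, "human_expert"
```

In this sketch, human effort is front-loaded: after `MIN_LABELS` reviews of an unknown class, subsequent unconfident instances are assigned automatically, which mirrors the efficiency gain the paper claims.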
Related papers
- Beyond human subjectivity and error: a novel AI grading system [67.410870290301]
The grading of open-ended questions is a high-effort, high-impact task in education.
Recent breakthroughs in AI technology might make automated grading feasible, but this has not been demonstrated at scale.
We introduce a novel automatic short answer grading (ASAG) system.
arXiv Detail & Related papers (2024-05-07T13:49:59Z) - Human-AI Collaborative Essay Scoring: A Dual-Process Framework with LLMs [13.262711792955377]
This study explores the effectiveness of Large Language Models (LLMs) for automated essay scoring.
We propose an open-source LLM-based AES system, inspired by the dual-process theory.
We find that our system not only automates the grading process but also enhances the performance and efficiency of human graders.
arXiv Detail & Related papers (2024-01-12T07:50:10Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z) - Quantifying Human Bias and Knowledge to guide ML models during Training [0.0]
We introduce an experimental approach to dealing with skewed datasets by including humans in the training process.
We ask humans to rank the importance of features of the dataset, and through rank aggregation, determine the initial weight bias for the model.
We show that collective human bias can allow ML models to learn insights about the true population instead of the biased sample.
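The rank-aggregation step described above can be sketched as follows. Borda-count aggregation and normalising scores into weights are illustrative choices, not necessarily the method used in the paper.

```python
# Toy sketch: aggregate several humans' feature-importance rankings via
# Borda count, then normalise the scores into initial feature weights.

def borda_weights(rankings):
    """rankings: list of lists, each ordering feature names from most to
    least important. Returns a dict of normalised initial weights."""
    n = len(rankings[0])
    scores = {feature: 0 for feature in rankings[0]}
    for ranking in rankings:
        for position, feature in enumerate(ranking):
            scores[feature] += n - position  # top-ranked feature earns most points
    total = sum(scores.values())
    return {feature: s / total for feature, s in scores.items()}
```

The resulting weights could then seed a model's initial feature weighting, letting collective human judgment bias training away from artifacts of a skewed sample.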
arXiv Detail & Related papers (2022-11-19T20:49:07Z) - Forming Effective Human-AI Teams: Building Machine Learning Models that
Complement the Capabilities of Multiple Experts [0.0]
We propose an approach that trains a classification model to complement the capabilities of multiple human experts.
We evaluate our proposed approach in experiments on public datasets with "synthetic" experts and a real-world medical dataset annotated by multiple radiologists.
arXiv Detail & Related papers (2022-06-16T06:42:10Z) - Continual Learning with Bayesian Model based on a Fixed Pre-trained
Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z) - Humanly Certifying Superhuman Classifiers [8.736864280782592]
Estimating the performance of a machine learning system is a longstanding challenge in artificial intelligence research.
We develop a theory for estimating the accuracy compared to the oracle, using only imperfect human annotations for reference.
Our analysis provides a simple recipe for detecting and certifying superhuman performance in this setting.
arXiv Detail & Related papers (2021-09-16T11:00:05Z) - Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease
Progression [71.7560927415706]
The latent hybridisation model (LHM) integrates a system of expert-designed ODEs with machine-learned Neural ODEs to fully describe the dynamics of the system.
We evaluate LHM on synthetic data as well as real-world intensive care data of COVID-19 patients.
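The idea of hybridising expert-specified dynamics with a learned component can be illustrated with a minimal sketch: the total derivative is the sum of an expert term and a learned residual. The logistic expert dynamics, the constant stand-in for the learned term, and the forward-Euler integrator are all assumptions for illustration, not the LHM architecture itself.

```python
# Minimal sketch of hybrid dynamics: dx/dt = f_expert(x) + f_learned(x),
# integrated with forward Euler.

def euler_hybrid(x0, f_expert, f_learned, dt=0.01, steps=100):
    x = x0
    for _ in range(steps):
        x = x + dt * (f_expert(x) + f_learned(x))
    return x

# Expert knowledge: logistic growth toward carrying capacity 1.0.
f_expert = lambda x: x * (1.0 - x)
# Stand-in for a machine-learned correction (here, a small constant drift).
f_learned = lambda x: 0.05
```

The design point is that the expert ODE constrains the overall shape of the trajectory, while the learned term only has to capture the residual dynamics the expert model misses.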
arXiv Detail & Related papers (2021-06-05T11:42:45Z) - A Novel Anomaly Detection Algorithm for Hybrid Production Systems based
on Deep Learning and Timed Automata [73.38551379469533]
DAD:DeepAnomalyDetection is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z) - Leveraging Rationales to Improve Human Task Performance [15.785125079811902]
Given that a computational system's performance exceeds that of its human user, can explainable AI capabilities be leveraged to improve the performance of the human?
We introduce the Rationale-Generating Algorithm, an automated technique for generating rationales for utility-based computational methods.
Results show that our approach produces rationales that lead to statistically significant improvement in human task performance.
arXiv Detail & Related papers (2020-02-11T04:51:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.