Performance, Opaqueness, Consequences, and Assumptions: Simple questions
for responsible planning of machine learning solutions
- URL: http://arxiv.org/abs/2208.09966v1
- Date: Sun, 21 Aug 2022 21:24:42 GMT
- Title: Performance, Opaqueness, Consequences, and Assumptions: Simple questions
for responsible planning of machine learning solutions
- Authors: Przemyslaw Biecek
- Abstract summary: We propose a quick and simple framework to support planning of AI solutions.
The POCA framework is based on four pillars: Performance, Opaqueness, Consequences and Assumptions.
- Score: 5.802346990263708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The data revolution has generated a huge demand for data-driven solutions.
This demand propels a growing number of easy-to-use tools and training for
aspiring data scientists that enable the rapid building of predictive models.
Today, weapons of math destruction can be easily built and deployed without
detailed planning and validation. This rapidly extends the list of AI failures,
i.e. deployments that lead to financial losses or even violate democratic
values such as equality, freedom and justice. The lack of planning, rules and
standards around model development leads to the "anarchisation of AI".
This problem is reported under different names such as validation debt,
reproducibility crisis, and lack of explainability. Post-mortem analysis of AI
failures often reveals mistakes made in the early phase of model development or
data acquisition. Thus, instead of curing the consequences of deploying harmful
models, we shall prevent them as early as possible by putting more attention to
the initial planning stage.
In this paper, we propose a quick and simple framework to support planning of
AI solutions. The POCA framework is based on four pillars: Performance,
Opaqueness, Consequences, and Assumptions. It helps to set the expectations and
plan the constraints for the AI solution before any model is built and any data
is collected. With the help of the POCA method, preliminary requirements can be
defined for the model-building process, so that costly model misspecification
errors can be identified as soon as possible or even avoided. AI researchers,
product owners and business analysts can use this framework in the initial
stages of building AI solutions.
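The four pillars can be thought of as a checklist to be answered before any model is built. As a minimal illustrative sketch (the field names, questions, and example answers below are hypothetical, not taken from the paper), the plan could be recorded as a simple data structure whose completeness is checked before modeling begins:

```python
from dataclasses import dataclass, field

@dataclass
class POCAPlan:
    """Hypothetical pre-modeling checklist inspired by the POCA pillars.

    Each field holds the answer recorded before any model is built
    or any data is collected. Example contents are illustrative.
    """
    performance: str   # e.g. target metric and minimum acceptable value
    opaqueness: str    # e.g. required level of model explainability
    consequences: str  # e.g. cost of errors and affected stakeholders
    assumptions: str   # e.g. expected data distribution and usage context
    open_issues: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # The plan is ready for review only when every pillar is answered.
        return all([self.performance, self.opaqueness,
                    self.consequences, self.assumptions])

# Illustrative usage with made-up requirements:
plan = POCAPlan(
    performance="AUC >= 0.85 on held-out data",
    opaqueness="per-prediction explanations required for auditors",
    consequences="false negatives delay loan approvals",
    assumptions="training data matches next quarter's applicant pool",
)
print(plan.is_complete())  # True
```

Recording the answers as data (rather than prose) makes it easy to flag missing pillars automatically before the model-building process starts, which is where the paper argues misspecification errors are cheapest to catch.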
Related papers
- KModels: Unlocking AI for Business Applications [10.833754921830154]
This paper presents the architecture of KModels and the key decisions that shape it.
KModels enables AI consumers to eliminate the need for a dedicated data scientist.
It is highly suited for on-premise deployment but can also be used in cloud environments.
arXiv Detail & Related papers (2024-09-08T13:19:12Z)
- Building AI Agents for Autonomous Clouds: Challenges and Design Principles [17.03870042416836]
AI for IT Operations (AIOps) aims to automate complex operational tasks, like fault localization and root cause analysis, thereby reducing human intervention and customer impact.
This vision paper lays the groundwork for such a framework by first framing the requirements and then discussing design decisions.
We propose AIOpsLab, a prototype implementation leveraging agent-cloud-interface that orchestrates an application, injects real-time faults using chaos engineering, and interfaces with an agent to localize and resolve the faults.
arXiv Detail & Related papers (2024-07-16T20:40:43Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- Automated Process Planning Based on a Semantic Capability Model and SMT [50.76251195257306]
In research of manufacturing systems and autonomous robots, the term capability is used for a machine-interpretable specification of a system function.
We present an approach that combines these two topics: starting from a semantic capability model, an AI planning problem is automatically generated.
arXiv Detail & Related papers (2023-12-14T10:37:34Z)
- Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners [85.03486419424647]
KnowNo is a framework for measuring and aligning the uncertainty of large language models.
KnowNo builds on the theory of conformal prediction to provide statistical guarantees on task completion.
arXiv Detail & Related papers (2023-07-04T21:25:12Z)
- IRJIT: A Simple, Online, Information Retrieval Approach for Just-In-Time Software Defect Prediction [10.084626547964389]
Just-in-Time software defect prediction (JIT-SDP) prevents the introduction of defects into the software by identifying them at commit check-in time.
Current software defect prediction approaches rely on manually crafted features such as change metrics and involve machine learning or deep learning models that are expensive to train.
We propose an approach called IRJIT that employs information retrieval on source code and labels new commits as buggy or clean based on their similarity to past buggy or clean commits.
arXiv Detail & Related papers (2022-10-05T17:54:53Z)
- Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
arXiv Detail & Related papers (2021-11-10T17:58:18Z)
- Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z)
- Responsible AI Challenges in End-to-end Machine Learning [4.509599899042536]
Many companies that deploy AI publicly state that training a model requires not only improving its accuracy, but also guaranteeing that the model does not discriminate against users.
We propose three key research directions to measure progress and introduce our ongoing research.
First, responsible AI must be deeply supported, where multiple objectives like fairness and robustness must be handled together.
Second, responsible AI must be broadly supported, preferably in all steps of machine learning.
arXiv Detail & Related papers (2021-01-15T04:55:03Z)
- Exploring the Nuances of Designing (with/for) Artificial Intelligence [0.0]
We explore the construct of infrastructure as a means to simultaneously address algorithmic and societal issues when designing AI.
Neither algorithmic solutions nor purely humanistic ones will be enough to fully address undesirable outcomes in the current narrow state of AI.
arXiv Detail & Related papers (2020-10-22T20:34:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.