Documenting use cases in the affective computing domain using Unified
Modeling Language
- URL: http://arxiv.org/abs/2209.09666v1
- Date: Mon, 19 Sep 2022 10:04:18 GMT
- Title: Documenting use cases in the affective computing domain using Unified
Modeling Language
- Authors: Isabelle Hupont and Emilia Gomez
- Abstract summary: There is no standard methodology for use case documentation covering the context of use, scope, functional requirements and risks of an AI system.
Our approach builds upon an assessment of use case information needs documented in the research literature and the recently proposed European regulatory framework for AI.
From this assessment, we adopt and adapt the Unified Modeling Language (UML), which has been used in the last two decades mostly by software engineers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study of the ethical impact of AI and the design of trustworthy
systems requires analysing the scenarios in which AI systems are used, which
relates to the software engineering concept of "use case" and the legal term
"intended purpose". However, there is no standard methodology for use case
documentation covering the context of use, scope, functional requirements and
risks of an AI system. In this work, we propose a novel documentation
methodology for AI use cases, with a special focus on the affective computing
domain. Our approach builds upon an assessment of use case information needs
documented in the research literature and the recently proposed European
regulatory framework for AI. From this assessment, we adopt and adapt the
Unified Modeling Language (UML), which has been used in the last two decades
mostly by software engineers. Each use case is then represented by a UML
diagram and a structured table, and we provide a set of examples illustrating
its application to several affective computing scenarios.
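The structured table the abstract describes can be sketched in code. The following is a hypothetical illustration only: the `UseCaseRecord` class, its field names, and the drowsiness-detection example are assumptions drawn from the four information needs the abstract lists (context of use, scope, functional requirements, risks), not the paper's actual template.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class UseCaseRecord:
    """A minimal structured record for one AI use case.

    Field names mirror the information needs named in the abstract;
    the paper's actual table layout may differ.
    """
    name: str
    context_of_use: str            # where and by whom the system is deployed
    scope: str                     # intended purpose and boundaries of use
    functional_requirements: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)

    def as_row(self) -> dict:
        """Flatten the record into a dict, e.g. one row of a table."""
        return {
            "name": self.name,
            "context_of_use": self.context_of_use,
            "scope": self.scope,
            "functional_requirements": "; ".join(self.functional_requirements),
            "risks": "; ".join(self.risks),
        }


# Hypothetical affective computing scenario
uc = UseCaseRecord(
    name="Driver drowsiness detection",
    context_of_use="In-cabin camera monitoring a car driver",
    scope="Detect drowsiness to trigger safety alerts; not for surveillance",
    functional_requirements=["Real-time facial expression analysis"],
    risks=["False negatives in low light", "Privacy of in-cabin video"],
)
row = uc.as_row()
```

A record like this could accompany the corresponding UML diagram, keeping the regulatory-facing information (purpose, scope, risks) machine-readable alongside the visual model.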
Related papers
- Prompting Encoder Models for Zero-Shot Classification: A Cross-Domain Study in Italian [75.94354349994576]
This paper explores the feasibility of employing smaller, domain-specific encoder LMs alongside prompting techniques to enhance performance in specialized contexts.
Our study concentrates on the Italian bureaucratic and legal language, experimenting with both general-purpose and further pre-trained encoder-only models.
The results indicate that while further pre-trained models may show diminished robustness in general knowledge, they exhibit superior adaptability for domain-specific tasks, even in a zero-shot setting.
arXiv Detail & Related papers (2024-07-30T08:50:16Z)
- Implicit Personalization in Language Models: A Systematic Study [94.29756463158853]
Implicit Personalization (IP) is a phenomenon of language models inferring a user's background from the implicit cues in the input prompts.
This work systematically studies IP through a rigorous mathematical formulation, a multi-perspective moral reasoning framework, and a set of case studies.
arXiv Detail & Related papers (2024-05-23T17:18:46Z)
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches.
arXiv Detail & Related papers (2024-05-08T00:18:56Z)
- Use case cards: a use case reporting framework inspired by the European AI Act [0.0]
We propose a new framework for the documentation of use cases, which we call "use case cards".
Unlike other documentation methodologies, we focus on the purpose and operational use of an AI system.
The proposed framework is the result of a co-design process involving a relevant team of EU policy experts and scientists.
arXiv Detail & Related papers (2023-06-23T15:47:19Z)
- Dynamic Documentation for AI Systems [0.0]
We show the limits of present documentation protocols for AI systems.
We argue for dynamic documentation as a new paradigm for understanding and evaluating AI systems.
arXiv Detail & Related papers (2023-03-20T04:23:07Z)
- Evaluating a Methodology for Increasing AI Transparency: A Case Study [8.265282762929509]
Given growing concerns about the potential harms of artificial intelligence, societies have begun to demand more transparency about how AI models and systems are created and used.
To address these concerns, several efforts have proposed documentation templates containing questions to be answered by model developers.
No single template can cover the needs of diverse documentation consumers.
arXiv Detail & Related papers (2022-01-24T20:01:01Z)
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose these challenges and how they affect the use of relevant and novel explanation methods.
And second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
- Multisource AI Scorecard Table for System Evaluation [3.74397577716445]
The paper describes a Multisource AI Scorecard Table (MAST) that provides the developer and user of an artificial intelligence (AI)/machine learning (ML) system with a standard checklist.
The paper explores how the analytic tradecraft standards outlined in Intelligence Community Directive (ICD) 203 can provide a framework for assessing the performance of an AI system.
arXiv Detail & Related papers (2021-02-08T03:37:40Z)
- Why model why? Assessing the strengths and limitations of LIME [0.0]
This paper examines the effectiveness of the Local Interpretable Model-Agnostic Explanations (LIME) xAI framework.
LIME is one of the most popular model-agnostic frameworks found in the literature.
We show how LIME can be used to supplement conventional performance assessment methods.
arXiv Detail & Related papers (2020-11-30T21:08:07Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of necessary information to assess their portability to operate in a system different than the one for which the component was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
- A Methodology for Creating AI FactSheets [67.65802440158753]
This paper describes a methodology for creating the form of AI documentation we call FactSheets.
Within each step of the methodology, we describe the issues to consider and the questions to explore.
This methodology will accelerate the broader adoption of transparent AI documentation.
arXiv Detail & Related papers (2020-06-24T15:08:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.