Developing and Evaluating a Design Method for Positive Artificial Intelligence
- URL: http://arxiv.org/abs/2402.01499v2
- Date: Mon, 4 Mar 2024 12:52:13 GMT
- Title: Developing and Evaluating a Design Method for Positive Artificial Intelligence
- Authors: Willem van der Maden, Derek Lomas, Paul Hekkert
- Abstract summary: Development of "AI for good" poses challenges around aligning systems with complex human values.
This article presents and evaluates the Positive AI design method aimed at addressing this gap.
The method provides a human-centered process to translate wellbeing aspirations into concrete practices.
- Score: 0.6138671548064356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As artificial intelligence (AI) continues advancing, ensuring positive
societal impacts becomes critical, especially as AI systems become increasingly
ubiquitous in various aspects of life. However, developing "AI for good" poses
substantial challenges around aligning systems with complex human values.
Presently, we lack mature methods for addressing these challenges. This article
presents and evaluates the Positive AI design method aimed at addressing this
gap. The method provides a human-centered process to translate wellbeing
aspirations into concrete practices. First, we explain the method's four key
steps: contextualizing, operationalizing, optimizing, and implementing
wellbeing, supported by continuous measurement for feedback cycles. We then
present a multiple case study where novice designers applied the method,
revealing strengths and weaknesses related to efficacy and usability. Next, an
expert evaluation study assessed the quality of the resulting concepts, rating
them moderately high for feasibility, desirability, and plausibility of
achieving intended wellbeing benefits. Together, these studies provide
preliminary validation of the method's ability to improve AI design, while
surfacing areas that need refinement, such as better support for the more
complex steps. Proposed adaptations, including examples and evaluation
heuristics, could address these weaknesses. Further research should examine sustained application over multiple
projects. This human-centered approach shows promise for realizing the vision
of 'AI for Wellbeing' that does not just avoid harm, but actively benefits
humanity.
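The four-step cycle described above can be read as a measurement-driven loop. The following is a minimal illustrative sketch, assuming a numeric wellbeing measure; all names and the toy metric are hypothetical and not taken from the paper:

```python
# Illustrative sketch (not the authors' implementation) of the Positive AI
# method's four-step cycle, with continuous measurement feeding results back
# into the next iteration. All names and the toy metric are hypothetical.

STEPS = ["contextualize", "operationalize", "optimize", "implement"]

def run_cycle(measure, max_iterations=3, target=0.8):
    """Walk the four design steps repeatedly until the wellbeing
    measure reaches a target (or iterations run out)."""
    log = []
    score = 0.0
    for i in range(max_iterations):
        for step in STEPS:
            log.append((i, step))   # one pass through all four steps
        score = measure(i)          # continuous measurement closes the loop
        if score >= target:
            break
    return score, log

# Toy measurement: the wellbeing score improves with each design iteration.
score, log = run_cycle(lambda i: 0.4 + 0.2 * i)
```

The key design point mirrored here is that measurement is not a one-off evaluation at the end but the signal that decides whether another pass through the four steps is needed.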
Related papers
- Evaluating AI Evaluation: Perils and Prospects [8.086002368038658]
This paper contends that the prevalent evaluation methods for AI systems are fundamentally inadequate.
I argue that a reformation is required in the way we evaluate AI systems and that we should look towards cognitive sciences for inspiration.
arXiv Detail & Related papers (2024-07-12T12:37:13Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Human-Centered AI Product Prototyping with No-Code AutoML: Conceptual Framework, Potentials and Limitations [0.0]
This paper focuses on the challenges posed by the probabilistic nature of AI behavior and the limited accessibility of prototyping tools to non-experts.
A Design Science Research (DSR) approach is presented which culminates in a conceptual framework aimed at improving the AI prototyping process.
The framework describes the seamless incorporation of non-expert input and evaluation during prototyping, leveraging the potential of no-code AutoML to enhance accessibility and interpretability.
arXiv Detail & Related papers (2024-02-06T16:00:32Z)
- Evaluating General-Purpose AI with Psychometrics [43.85432514910491]
We discuss the need for a comprehensive and accurate evaluation of general-purpose AI systems such as large language models.
Current evaluation methodology, mostly based on benchmarks of specific tasks, falls short of adequately assessing these versatile AI systems.
To tackle these challenges, we suggest transitioning from task-oriented evaluation to construct-oriented evaluation.
arXiv Detail & Related papers (2023-10-25T05:38:38Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Predictable Artificial Intelligence [67.79118050651908]
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
This paper aims to elucidate the questions, hypotheses and challenges relevant to Predictable AI.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- Methodological reflections for AI alignment research using human feedback [0.0]
AI alignment aims to investigate whether AI technologies align with human interests and values and function in a safe and ethical manner.
LLMs have the potential to exhibit unintended behavior due to their ability to learn and adapt in ways that are difficult to predict.
arXiv Detail & Related papers (2022-12-22T14:27:33Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities [8.17368686298331]
Robustness of Artificial Intelligence (AI) systems remains elusive and constitutes a key issue that impedes large-scale adoption.
We introduce three concepts to organize and describe the literature both from a fundamental and applied point of view.
We highlight the central role of humans in evaluating and enhancing AI robustness, considering the necessary knowledge humans can provide.
arXiv Detail & Related papers (2022-10-17T10:00:51Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition [133.35968094967626]
Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances.
With the help of deep learning techniques, it has also seen substantial progress, currently achieving around 90% accuracy in benign environments.
Research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant.
arXiv Detail & Related papers (2020-05-14T17:12:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.