Post-human interaction design, yes, but cautiously
- URL: http://arxiv.org/abs/2005.05019v1
- Date: Fri, 8 May 2020 09:17:19 GMT
- Title: Post-human interaction design, yes, but cautiously
- Authors: Jelle van Dijk
- Abstract summary: Post-human design runs the risk of obscuring the fact that AI technology imports a Cartesian humanist logic.
Instead, designers should demand that engineers radically transform the very structure of AI technology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Post-human design runs the risk of obscuring the fact that AI technology
actually imports a Cartesian humanist logic, which subsequently influences how
we design and conceive of so-called smart or intelligent objects. This leads to
unwanted metaphorical attributions of human qualities to smart objects.
Instead, starting from an embodied sensemaking perspective, designers should
demand that engineers radically transform the very structure of AI technology,
in order to truly support the critical posthuman values of collectivity,
relationality, and community building.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI [6.8894258727040665]
We explore the interplay between human and machine intelligence, focusing on the crucial role humans play in developing ethical, responsible, and robust intelligent systems.
We propose future perspectives, capitalizing on the advantages of symbiotic designs to suggest a human-centered direction for next-generation AI development.
arXiv Detail & Related papers (2024-09-24T12:02:20Z) - Antagonistic AI [11.25562632407588]
We explore the shadow of the sycophantic paradigm, a design space we term antagonistic AI.
We consider whether antagonistic AI systems may sometimes have benefits to users, such as forcing users to confront their assumptions.
We lay out a design space for antagonistic AI, articulating potential benefits, design techniques, and methods of embedding antagonistic elements into user experience.
arXiv Detail & Related papers (2024-02-12T00:44:37Z) - AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building AI systems with such mathematical capabilities.
arXiv Detail & Related papers (2023-10-19T02:00:31Z) - The Future of Artificial Intelligence (AI) and Machine Learning (ML) in
Landscape Design: A Case Study in Coastal Virginia, USA [4.149972584899897]
This paper presents a case study that uses machine learning techniques to predict variables in a coastal environment.
Drawing ideas from posthumanism, this paper argues that, to truly understand the cybernetic environment, we have to take on posthumanist ethics and overcome human exceptionalism.
arXiv Detail & Related papers (2023-05-03T13:13:30Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of the creative system with a hierarchy of 5 classes, showing the pathway of creativity evolving from a mimic-human artist to a Machine artist in its own right.
In art creation, it is necessary for machines to understand humans' mental states, including desires, appreciation, and emotions; humans also need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z) - Human-Centric Artificial Intelligence Architecture for Industry 5.0
Applications [3.7890670411918252]
We propose an architecture that integrates Artificial Intelligence (Active Learning, Forecasting, Explainable Artificial Intelligence), simulated reality, decision-making, and users' feedback.
We validate it on two use cases from real-world case studies.
arXiv Detail & Related papers (2022-03-21T08:16:46Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence (HCAI).
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of utmost importance that artificial intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.