Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking
- URL: http://arxiv.org/abs/2505.23930v1
- Date: Thu, 29 May 2025 18:24:34 GMT
- Title: Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking
- Authors: Naomi Omeonga wa Kayembe
- Abstract summary: This study introduces a novel conceptual framework distinguishing problem-seeking from problem-solving to clarify the unique features of human intelligence in contrast to AI. The framework emphasizes that while AI excels at efficiency and optimization, it lacks the orientation derived from experiential grounding and the embodied flexibility intrinsic to human cognition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study introduces a novel conceptual framework distinguishing problem-seeking from problem-solving to clarify the unique features of human intelligence in contrast to AI. Problem-seeking refers to the embodied, emotionally grounded process by which humans identify and set goals, while problem-solving denotes the execution of strategies aimed at achieving such predefined objectives. The framework emphasizes that while AI excels at efficiency and optimization, it lacks the orientation derived from experiential grounding and the embodied flexibility intrinsic to human cognition. To empirically explore this distinction, the research analyzes metadata from 157 YouTube videos discussing AI. Through a thematic analysis combining qualitative insights with keyword-based quantitative metrics, this mixed-methods approach uncovers recurring themes in public discourse, including privacy, job displacement, misinformation, optimism, and ethical concerns. The results reveal a dual sentiment: public fascination with AI's capabilities coexists with anxiety and skepticism about its societal implications. The discussion critiques the orthogonality thesis, which posits that intelligence is separable from goal content, and instead argues that human intelligence integrates goal-setting and goal-pursuit. It underscores the centrality of embodied cognition in human reasoning and highlights how AI's limitations stem from its current reliance on computational processing. The study advocates enhancing emotional and digital literacy to foster responsible AI engagement, and it calls for reframing public discourse to recognize AI as a tool that augments -- rather than replaces -- human intelligence. By positioning problem-seeking at the core of cognition and as a critical dimension of intelligence, this research offers new perspectives on ethically aligned and human-centered AI development.
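The keyword-based quantitative step of the thematic analysis described above can be sketched roughly as follows. Note that this is a minimal illustration, not the paper's actual method: the theme lexicon, the tokenization, and the sample video titles below are all assumptions made for the example, since the study does not publish its keyword lists here.

```python
import re
from collections import Counter

# Hypothetical theme lexicon -- illustrative only, not the paper's actual keyword lists.
THEME_KEYWORDS = {
    "privacy": ["privacy", "surveillance", "data"],
    "job displacement": ["jobs", "unemployment", "automation"],
    "misinformation": ["misinformation", "deepfake", "fake"],
    "optimism": ["opportunity", "amazing", "revolutionize"],
    "ethics": ["ethics", "bias", "responsible"],
}

def tag_themes(text):
    """Return the set of themes whose keywords appear in the text."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return {theme for theme, kws in THEME_KEYWORDS.items()
            if any(kw in tokens for kw in kws)}

def theme_frequencies(metadata_records):
    """Count how many metadata records mention each theme."""
    counts = Counter()
    for record in metadata_records:
        counts.update(tag_themes(record))
    return counts

# Toy metadata standing in for YouTube titles/descriptions:
videos = [
    "AI and your privacy: who owns your data?",
    "Will automation take our jobs?",
    "Deepfake misinformation is getting worse",
]
print(theme_frequencies(videos))
```

In a mixed-methods design like the one the abstract describes, such counts would only supply the quantitative layer; the qualitative coding of themes would still be done by human readers.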
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and forming decisions, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole. We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making. Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps [3.8214695776749013]
This study conducts a comprehensive bibliometric analysis of the AI ethics literature over the past two decades.
The authors present seven key AI ethics issues, encompassing the Collingridge dilemma, the AI status debate, challenges associated with AI transparency and explainability, privacy protection complications, considerations of justice and fairness, concerns about algocracy and human enfeeblement, and the issue of superintelligence.
arXiv Detail & Related papers (2024-03-12T21:43:21Z)
- Analyzing Character and Consciousness in AI-Generated Social Content: A Case Study of Chirper, the AI Social Network [0.0]
The study embarks on a comprehensive exploration of AI behavior, analyzing the effects of diverse settings on Chirper's responses.
Through a series of cognitive tests, the study gauges the self-awareness and pattern recognition prowess of Chirpers.
An intriguing aspect of the research is the exploration of the potential influence of a Chirper's handle or personality type on its performance.
arXiv Detail & Related papers (2023-08-30T15:40:18Z)
- FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, principles of fairness, accountability, transparency, and ethics (FATE) are being implemented in AI.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Relational Artificial Intelligence [5.5586788751870175]
Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision making independent of human intervention, has been shown to result in bias and exclusion.
A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Thinking Fast and Slow in AI [38.8581204791644]
This paper proposes a research direction to advance AI which draws inspiration from cognitive theories of human decision making.
The premise is that if we gain insights about the causes of some human capabilities that are still lacking in AI, we may obtain similar capabilities in an AI system.
arXiv Detail & Related papers (2020-10-12T20:10:05Z)
- Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of utmost importance that artificial intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.