Not Everything That Counts Can Be Counted: A Case for Safe Qualitative AI
- URL: http://arxiv.org/abs/2511.09325v1
- Date: Thu, 13 Nov 2025 01:46:35 GMT
- Title: Not Everything That Counts Can Be Counted: A Case for Safe Qualitative AI
- Authors: Stine Beltoft, Lukas Galke
- Abstract summary: We argue for developing dedicated qualitative AI systems built from the ground up for interpretive research. We review recent literature to show how existing automated discovery pipelines could be enhanced by robust qualitative capabilities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) and large language models (LLMs) are reshaping science, with the most recent advances culminating in fully automated scientific discovery pipelines. But qualitative research has been left behind. Researchers in qualitative methods are hesitant about AI adoption. Yet when they are willing to use AI at all, they have little choice but to rely on general-purpose tools like ChatGPT to assist with interview interpretation, data annotation, and topic modeling - while simultaneously acknowledging these systems' well-known limitations of being biased, opaque, irreproducible, and privacy-compromising. This creates a critical gap: while AI has substantially advanced quantitative methods, the qualitative dimensions essential for meaning-making and comprehensive scientific understanding remain poorly integrated. We argue for developing dedicated qualitative AI systems built from the ground up for interpretive research. Such systems must be transparent, reproducible, and privacy-friendly. We review recent literature to show how existing automated discovery pipelines could be enhanced by robust qualitative capabilities, and identify key opportunities where safe qualitative AI could advance multidisciplinary and mixed-methods research.
Related papers
- Towards the Next Generation of Software: Insights from Grey Literature on AI-Native Applications
AI-native applications are a new paradigm in software engineering that fundamentally redefines how software is designed, developed, and evolved. Despite their growing prominence, AI-native applications still lack a unified engineering definition and architectural blueprint. This study seeks to establish a comprehensive understanding of AI-native applications by identifying their defining characteristics, key quality attributes, and typical technology stacks.
arXiv Detail & Related papers (2025-09-16T15:01:23Z)
- AI-Driven Automation Can Become the Foundation of Next-Era Science of Science Research
The Science of Science (SoS) explores the mechanisms underlying scientific discovery. The advent of artificial intelligence (AI) presents a transformative opportunity for the next generation of SoS. We outline the advantages of AI over traditional methods, discuss potential limitations, and propose pathways to overcome them.
arXiv Detail & Related papers (2025-05-17T15:01:33Z)
- Work in Progress: AI-Powered Engineering - Bridging Theory and Practice
This paper explores how generative AI can help automate and improve key steps in systems engineering. It examines AI's ability to analyze system requirements based on INCOSE's "good requirement" criteria. The research aims to assess AI's potential to streamline engineering processes and improve learning outcomes.
arXiv Detail & Related papers (2025-02-06T17:42:00Z)
- AI in the Cosmos
I highlight examples of AI applications in astrophysics, including source classification and spectral energy distribution modeling, and discuss the advancements achievable through generative AI. The use of AI introduces challenges, including biases, errors, and the "black box" nature of AI models, which must be resolved before their application. These issues can be addressed through the concept of Human-Guided AI (HG-AI), which integrates human expertise and domain-specific knowledge into AI applications.
arXiv Detail & Related papers (2024-12-13T12:30:11Z)
- Imagining and building wise machines: The centrality of AI metacognition
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Generative AI in Writing Research Papers: A New Type of Algorithmic Bias and Uncertainty in Scholarly Work
Large language models (LLMs) and generative AI tools present challenges in identifying and addressing biases.
Generative AI tools are susceptible to goal misgeneralization, hallucinations, and adversarial attacks such as red-teaming prompts.
We find that incorporating generative AI in the process of writing research manuscripts introduces a new type of context-induced algorithmic bias.
arXiv Detail & Related papers (2023-12-04T04:05:04Z)
- Evaluating General-Purpose AI with Psychometrics
We discuss the need for a comprehensive and accurate evaluation of general-purpose AI systems such as large language models.
Current evaluation methodology, mostly based on benchmarks of specific tasks, falls short of adequately assessing these versatile AI systems.
To tackle these challenges, we suggest transitioning from task-oriented evaluation to construct-oriented evaluation.
arXiv Detail & Related papers (2023-10-25T05:38:38Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- AI Maintenance: A Robustness Perspective
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies
The benefits, challenges, and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.